
Building Trust in Agentic Commerce

10/08/25
Filip Verley
Chief Innovation Officer

Would you let an AI agent spend your company’s quarterly budget, no questions asked? Most leaders I talk to aren’t there yet. Our research shows that only 8% of organizations are using AI agents in a sustained, long-term way, and the gap isn’t due to a lack of awareness. It’s trust.

If agentic AI is going to matter in e-commerce, we need guardrails that make it safe, compliant, and worth the operational risk. That is where authentication, authorization, and verification come in. Think identity, boundaries, and proof. Until teams can check those boxes with confidence, adoption will stall.

What is an AI agent, and why does it matter in e-commerce?

At its simplest, an AI agent is software that can act on instructions without waiting for every step of human input. Instead of a static chatbot or recommendation engine, an agent can take context, make a decision, and carry out an action.

In e-commerce, that could mean:

  • Verifying a buyer’s identity before an agent executes a purchase on their behalf
  • Allowing an agent to issue refunds up to a set limit, but requiring human approval beyond that threshold
  • Confirming that an AI-driven order or promotion matches both customer intent and compliance rules before it goes live

The upside is clear: faster processes, lower manual overhead, and customer experiences that feel effortless. The risk is just as clear. If an agent acts under the wrong identity, oversteps its boundaries, or produces outcomes that don’t match user intent, the damage shows up immediately as fraud losses, compliance failures, or customer churn.

That’s why the industry is focusing on three pillars: authentication, authorization, and verification. Without them, agentic commerce cannot scale.

The adoption gap

Analysts project the market for autonomous agents will exceed $70B by 2030. Buyers want speed, automation, and scale, but customers are not fully on board. In fact, only 24% of consumers say they are comfortable letting AI complete a purchase on its own.

That consumer hesitation is the critical signal. Ship agentic commerce without shipping trust, and you don’t just risk stalled adoption; you risk chargebacks, brand erosion, and an internal rollback before your pilot even scales.

What’s broken today

In my conversations with product, fraud, and risk leaders, the same pattern keeps coming up: teams pilot agents without the scaffolding to make them trustworthy.

The regulatory lens makes this sharper. Under the new EU AI Act, autonomous systems are often treated as high-risk, requiring transparency, human oversight, and auditability. In the U.S., proposals like the Algorithmic Accountability Act and state laws such as the Colorado AI Act point in the same direction—demanding explainability, bias testing, and risk assessments. For buyers, that means security measures are not only best practice but a growing compliance requirement.

When I see this pattern, I look for the missing scaffolding. It is almost always the same three blanks: who is the agent, what can it do, and did it do the right thing.

The guardrails that matter

If you are evaluating solutions, anchor on these three categories. This is the difference between a flashy demo and something you can put in production.

Authentication

Prove the agent’s identity before you let it act. That means credentials for agents, not just users. It means attestation, issuance, rotation, and revocation. It means non-repudiation, so you can tie a transaction to a specific agent and key.

What to look for:

  • strong, verifiable agent identities and credentials
  • support for attestation, key management, rotation, and kill switches
  • logs that let you prove who initiated what, and when
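
To make that concrete, here is a minimal sketch of non-repudiation for agent actions, assuming each agent is issued an Ed25519 keypair and using Python’s `cryptography` package; the agent ID and payload fields are illustrative, not a production credential system:

```python
# A minimal sketch, not a production credential system. Assumes the
# `cryptography` package; the agent ID and payload fields are illustrative.
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

AGENT_ID = "purchasing-agent-017"  # hypothetical agent identity

# Issued at provisioning time; the private key stays with the agent,
# the public key is registered with your verification service.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_action(action: dict) -> tuple[bytes, bytes]:
    """Canonicalize the action and sign it, tying it to this agent and key."""
    payload = json.dumps(
        {"agent_id": AGENT_ID, "ts": time.time(), **action}, sort_keys=True
    ).encode()
    return payload, private_key.sign(payload)

def verify_signature(payload: bytes, signature: bytes) -> bool:
    """Verifier side: reject anything not signed by the registered key."""
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

payload, sig = sign_action({"type": "purchase", "sku": "SKU-123", "amount": 49.0})
assert verify_signature(payload, sig)  # log payload + signature for the audit trail
```

Attestation, rotation, and revocation layer on top of the same primitive: if you cannot tie a transaction to a specific key, you cannot prove who initiated what, and when.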

Authorization

Set boundaries that are understood by both machines and auditors. Map policies to budgets, scopes, merchants, SKUs, and risk thresholds. Keep it explainable so a human can reason about the blast radius.

What to look for:

  • policy engines that accommodate granular scopes and spend limits
  • runtime constraints, approvals, and step-up controls
  • simulation and sandboxes to test policies before they go live
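
As a sketch of what explainable boundaries can look like in code, here is one way to express scopes, spend limits, and step-up thresholds; the values are invented for illustration:

```python
# A minimal, explainable policy sketch; scopes, limits, and thresholds
# are illustrative values, not recommendations.
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    allowed_scopes: set[str]
    per_order_limit: float   # hard ceiling per action
    step_up_above: float     # human approval required above this amount
    daily_budget: float
    spent_today: float = 0.0

    def decide(self, scope: str, amount: float) -> str:
        """Return an auditable decision with a human-readable reason."""
        if scope not in self.allowed_scopes:
            return f"deny: scope '{scope}' not granted"
        if amount > self.per_order_limit:
            return f"deny: {amount} exceeds per-order limit {self.per_order_limit}"
        if self.spent_today + amount > self.daily_budget:
            return "deny: daily budget exhausted"
        if amount > self.step_up_above:
            return f"step_up: {amount} above approval threshold {self.step_up_above}"
        self.spent_today += amount
        return "allow"

policy = AgentPolicy(allowed_scopes={"purchase", "refund"},
                     per_order_limit=500.0, step_up_above=200.0, daily_budget=2000.0)
print(policy.decide("refund", 120.0))  # allow
print(policy.decide("refund", 350.0))  # step_up: human approval required
print(policy.decide("payout", 50.0))   # deny: scope 'payout' not granted
```

Because every branch returns a reason, both the runtime and an auditor can see why an action was allowed, escalated, or blocked, which is what makes the blast radius something a human can reason about.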

Verification

Trust but verify. Confirm that outcomes align with user intent, compliance requirements, and business rules. You need evidence that holds up in a post-incident review.

Verification isn’t just operational hygiene. Under privacy rules like GDPR Article 22, individuals have a right to safeguards when automated systems make decisions about them. That means the ability to explain, evidence, and roll back agent actions is not optional.

What to look for:

  • transparent audit trails and readable explanations
  • outcome verification against explicit user directives
  • real-time anomaly detection and rollback paths
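
Here is one hedged sketch of those last two bullets: checking an outcome against the user’s explicit directive, and recording the result in a hash-chained log so the evidence is tamper-evident. Field names and the rollback hook are illustrative:

```python
# A minimal sketch: outcome verification plus a tamper-evident audit trail.
# Field names and the rollback behavior are illustrative.
import hashlib
import json

audit_log: list[dict] = []  # each entry commits to the hash of the previous one

def record(entry: dict) -> None:
    prev = audit_log[-1]["hash"] if audit_log else "genesis"
    body = json.dumps({**entry, "prev": prev}, sort_keys=True)
    audit_log.append({**entry, "prev": prev,
                      "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_outcome(directive: dict, outcome: dict) -> bool:
    """Did the agent do what the user actually asked, within the stated limits?"""
    ok = (outcome["sku"] == directive["sku"]
          and outcome["amount"] <= directive["max_amount"])
    record({"directive": directive, "outcome": outcome, "ok": ok})
    return ok  # a False here should trigger rollback and anomaly review

directive = {"sku": "SKU-123", "max_amount": 50.0}
assert verify_outcome(directive, {"sku": "SKU-123", "amount": 49.0})
assert not verify_outcome(directive, {"sku": "SKU-123", "amount": 80.0})
```

Editing any past entry breaks every subsequent `prev` link, which is the property you want when the log has to survive a post-incident review.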

If a vendor cannot demonstrate these three pillars working together, you are buying a future incident.

Real-world examples today

Real deployments are still early, but they show what’s possible when trust is built in.

  • ChatGPT Instant Checkout marks one of the first large-scale examples of agentic commerce in production. Powered by the open-source Agentic Commerce Protocol, co-developed with Stripe, it enables users in the U.S. to buy directly from Etsy sellers in chat, with Shopify merchants like Glossier, SKIMS, and Vuori coming next. According to the announcement, each purchase is authenticated, authorized, and verified through secure payment tokens and explicit user confirmation, demonstrating how agentic AI can act safely within clear trust boundaries.
  • Konvo AI automates ~65% of customer queries for European retailers and converts ~8% of those into purchases, using agents that can both interact with customers and resolve logistics issues.
  • Visa Intelligent Commerce for Agents is building APIs that let AI agents make purchases using tokenized credentials and strong authentication — showing how payment-grade security can extend to autonomous actions.
  • Amazon Bedrock AgentCore Identity provides identity, access control, and credential vaulting for AI agents, giving enterprises the tools to authenticate and authorize agent actions at scale.
  • Agent Commerce Kit (ACK-ID) demonstrates how one agent can verify the identity and ownership of another before sensitive interactions, laying the groundwork for peer-to-peer trust in agentic commerce.

These aren’t fully autonomous across all commerce workflows, but they demonstrate that agentic AI can deliver value when authentication, authorization, and verification are in place.

What good looks like in practice

Buyers ask for a checklist. I prefer evaluation cues you can test in a live environment:

  • Accuracy and drift. Does the system maintain performance as the catalog, promotions, and fraud patterns shift?
  • Latency and UX. Do the controls keep decisions fast enough for checkout and service flows?
  • Integration reality. Can this plug into your identity, payments, and risk stack without six months of glue code?
  • Explainability. When an agent takes an action, can a product manager and a compliance lead both understand why?
  • Recourse. If something goes wrong, what can you unwind, how quickly can you roll it back, and what evidence exists to explain the decision to auditors, customers, or regulators?

The strongest teams will treat agent actions like high-risk API calls. Every action is authenticated, every scope is authorized, and every outcome is verified. The tooling makes that visible.
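
Put together, the gate looks something like this sketch, with stub checks standing in for the authentication, authorization, and verification components above:

```python
# A sketch of the gate: an agent action runs only if it passes all three
# checks. The helpers are stubs standing in for real components.
def authenticate(credential: str) -> bool:
    return credential in {"purchasing-agent-017"}  # registered agent identities

def authorize(scope: str, amount: float) -> bool:
    return scope == "purchase" and amount <= 500.0  # scope + spend limit

def verify(directive: dict, outcome: dict) -> bool:
    return outcome["amount"] <= directive["max_amount"]  # matches user intent

def execute_agent_action(credential: str, action: dict, directive: dict) -> dict:
    if not authenticate(credential):
        raise PermissionError("unknown or revoked agent credential")
    if not authorize(action["scope"], action["amount"]):
        raise PermissionError("action outside authorized scope or limits")
    outcome = {"amount": action["amount"]}  # stand-in for real execution
    if not verify(directive, outcome):
        raise RuntimeError("outcome failed verification: roll back and review")
    return outcome

execute_agent_action("purchasing-agent-017",
                     {"scope": "purchase", "amount": 49.0},
                     {"max_amount": 50.0})
```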

Why this matters right now

It is tempting to wait. The reality is that agentic workflows are already creeping into back-office operations, customer onboarding, support, and payments. Early movers who get trust right will bank the upside: lower manual effort, faster cycle time, and a margin story that survives scrutiny.

The inverse is also true. Ship without safeguards, and you’ll spend the next quarter explaining rollback plans and chargeback spikes. Customers won’t give you the benefit of the doubt. Neither will your CFO.

A buyer’s short list

If you are mapping pilots for Q4 and Q1 2026, here’s a simple way to keep the process grounded:

  1. define the jobs to be done
  2. write the rules first
  3. simulate and stage (see the sketch after this list)
  4. measure what matters
  5. keep humans in the loop
  6. confirm regulatory readiness: explainability, audit logs, and human oversight under privacy rules
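
For steps 2 through 4, a hedged illustration: replay historical transactions against a draft policy in a sandbox and count the outcomes before any agent touches production. The data and rules here are invented:

```python
# Illustrative only: replay historical orders against a draft policy
# before going live, and measure how often it allows, escalates, or blocks.
historical_orders = [("purchase", 49.0), ("refund", 350.0), ("payout", 50.0)]

def draft_policy(scope: str, amount: float) -> str:
    if scope not in {"purchase", "refund"}:
        return "deny"
    return "step_up" if amount > 200.0 else "allow"

decisions = [draft_policy(scope, amount) for scope, amount in historical_orders]
print({d: decisions.count(d) for d in set(decisions)})
# -> {'allow': 1, 'step_up': 1, 'deny': 1} (key order may vary)
```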

The road ahead

Agentic commerce is not a future bet. It is a present decision about trust. The winners will separate signal from noise, invest in authentication, authorization, and verification, and scale only when those pillars are real.

At Liminal, we track the vendors and patterns shaping this shift. If you want a deeper dive into how teams are solving these challenges today, we’re bringing together nine providers for a live look at the authentication, authorization, and verification layers behind agentic AI. No pitches, just real solutions built to scale safely.

▶️ Want to know more? Watch our Liminal Demo Day: Agentic AI in E-Commerce recording, and explore how leading vendors are tackling this challenge.

My take: The winners won’t be the first to launch AI agents. They’ll be the first to prove their agents can be trusted at scale.
