J.P. Morgan Payments has partnered with Mirakl to build infrastructure for agentic commerce, an effort to radically transform the traditional payment rail.
But as autonomous agents move from browsing to buying, they introduce new risks: processing untrusted inputs, accessing sensitive data, and acting with external authority.
In this technical deep dive, Mike Lozanoff, global head of merchant services at J.P. Morgan Payments, explains how the bank is retooling its fraud engines and identity protocols to solve the delegated authority problem.
He acknowledges that for all the talk of AI agents radically simplifying our lives, the handoff from a helpful chatbot to a financial representative can be messy.
We discussed the shift from securing cards to securing "agent identity," why product data is the new friction point for merchants, and how the bank is preparing for a world of high-velocity, automated micro-transactions.
The interview has been edited for brevity and clarity.
This Week in Fintech: J.P. Morgan has recently highlighted the "lethal trifecta" of agentic AI: processing untrusted inputs, accessing sensitive data, and having the authority to act externally. How does the J.P. Morgan/Mirakl infrastructure manage the 'delegated authority' problem? Specifically, how are you ensuring that an agent's 'authority to act' is cryptographically bound to specific user-defined spending limits and merchant-approved product categories?
Lozanoff: We think of delegated authority as a governance and control item. As agentic commerce scales, it will require trust, transparency, and clear value for merchants and consumers, including visibility into what agents are doing and trust that it’s within the boundaries consumers set.
In practice, that means clear consumer consent up front, merchant-defined guardrails on what can be sold and under what conditions, and enforceable limits (e.g., amount, frequency, time bounds) tied to the specific transaction.
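The enforceable limits described above can be sketched as a simple policy check. The "mandate" structure below, its field names, and its thresholds are illustrative assumptions for this article, not J.P. Morgan's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical consent "mandate" a consumer grants to an agent up front.
@dataclass
class Mandate:
    max_amount: float        # per-transaction ceiling
    max_per_day: int         # frequency limit
    valid_until: datetime    # time bound
    allowed_categories: set  # merchant-approved product categories
    history: list = field(default_factory=list)  # timestamps of past purchases

def authorize(mandate: Mandate, amount: float, category: str, now: datetime) -> bool:
    """Approve only if the transaction stays inside every consumer-set bound."""
    if now > mandate.valid_until:
        return False
    if amount > mandate.max_amount:
        return False
    if category not in mandate.allowed_categories:
        return False
    # Frequency limit: count purchases within the last 24 hours.
    recent = [t for t in mandate.history if now - t < timedelta(days=1)]
    if len(recent) >= mandate.max_per_day:
        return False
    mandate.history.append(now)
    return True
```

The point of the sketch is that every limit (amount, frequency, time bound, category) is checked against the specific transaction, not trusted to the agent.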
Bank-grade risk management should also be built into every payment. Tokenization and fraud controls create another enforcement layer, ensuring that when an agent acts, the transaction is safe and secure.
We're also actively tracking and, where appropriate, engaging with emerging industry standards for agent identity, verifiable authorization and intent, and policy-bound payment credentials. Adoption is still early, so specifics are not finalized, but we're designing our controls and integrations to align with these standards as they mature and gain merchant and ecosystem uptake.
Tokenization is usually about card data, but in agentic commerce, it’s about identity. Traditional tokenization secures the payment instrument, but agentic commerce requires securing the agent’s identity. Is J.P. Morgan developing a new 'Agent Token' that acts as a verifiable credential for the AI itself, separate from the human user’s identity?
The industry is moving from securing just the payment instrument, like a card, to securing the identity and authorization context. When more of the shopping experience is delegated to agents, who or what authorized the purchase becomes just as important as how it is paid for.
We're prioritizing interoperability and evaluating approaches to ensure that an agent can act only with explicit user consent and in accordance with merchant-defined policies. In addition, we’re applying bank-grade risk controls.
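One way an "agent credential" binding identity to authorization context could be structured is a signed claims object. Everything below is a hypothetical sketch: the field names are invented, and a real issuer would use asymmetric keys rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # placeholder; real systems would use asymmetric keys

def issue_agent_credential(agent_id: str, user_id: str, scopes: list) -> dict:
    """Bind an agent's identity to a user's consent scopes with a signature."""
    claims = {"agent": agent_id, "user": user_id, "scopes": scopes}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify(credential: dict) -> bool:
    """Reject any credential whose claims were altered after issuance."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])
```

The design choice being illustrated: the agent's identity and the user's consent travel together in one verifiable object, so a payment rail can check "who authorized this, for what" without trusting the agent's own assertions.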
When an agent (like Mirakl Nexus) talks to a payment rail (J.P. Morgan), the traditional "browser fingerprinting" or "CVV check" becomes obsolete. How is the fraud engine being retooled for agent-to-agent interactions? Are we moving toward a 'continuous authentication' model where the payment is authorized based on the agent's behavior during the discovery phase managed by Mirakl?
As agentic commerce evolves from discovery to execution, traditional consumer browser signals will become less available. The path forward is stronger, more consistent frameworks and standards so agents can interact predictably with checkout and payment systems, and so merchants can have visibility into agent-driven activity.
In this model, risk signals shift away from consumer browser artifacts toward authenticated agent identity and authorization context. Instead of relying on Card Verification Value (CVV) and fingerprints, we evaluate whether the agent is known, permitted, and acting within policy. Risk decisions also become more continuous: we are building toward assessing signals and applying controls across the full session, from discovery and cart building through checkout and post-authorization monitoring. Our goal is to continuously evaluate whether agent behavior matches expected patterns. That’s a meaningful change in how fraud engines need to work, and it’s one we’re actively building toward.
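A toy version of that continuous evaluation: accumulate risk across session phases instead of scoring a single checkout event. The phases, weights, and scoring logic here are invented for illustration only.

```python
# Phases a well-behaved shopping agent is expected to pass through.
EXPECTED_PHASES = {"discovery", "cart", "checkout"}

def session_risk(events: list) -> float:
    """Score a session from (phase, in_policy) events; higher means riskier.

    Risk accrues when the agent acts outside policy, and when it skips
    expected phases (e.g., jumping straight to checkout).
    """
    score = 0.0
    seen = set()
    for phase, in_policy in events:
        seen.add(phase)
        if not in_policy:
            score += 0.5
    score += 0.3 * len(EXPECTED_PHASES - seen)
    return min(score, 1.0)
```

A session that walks through discovery, cart, and checkout in policy scores 0; an agent that appears only at checkout, or repeatedly breaches policy, accumulates risk before the payment is ever authorized.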
The industry is currently grappling with how different agents talk to different stores (e.g., the Model Context Protocol). J.P. Morgan has emphasized that the differentiator won’t be AI but interoperability. To what extent is this Mirakl partnership built on open protocols (like MCP or specialized agentic payment standards) versus a proprietary J.P. Morgan rail?
The industry will need to align on standard protocols for how agents access product catalogs, inventory, pricing, and post-sale data, and how they interact consistently with checkout and payment systems. Common frameworks developed by industry bodies and standards organizations will help with this.
The long-term differentiator won’t be any single AI model, but the ability to operate across an open, interoperable ecosystem where agents, merchants and payment providers can interact seamlessly across platforms.
Agentic commerce drastically reduces the 'time to transaction.' How does the J.P. Morgan Payments infrastructure handle the potential for massive, high-velocity bursts of micro-transactions that agents might trigger during automated inventory rebalancing or price-matching events?
In agentic commerce, the key is pairing throughput with controls.
We design our systems to handle massive transaction spikes with resilient processing and capacity management. We also apply policy guardrails—like spending limits, risk-based monitoring, and rate limiting if required—so automation can scale in a safe and controlled way.
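The rate-limiting guardrail mentioned above is commonly implemented as a token bucket, which lets agents burst while capping their sustained transaction rate. This is a generic sketch of the technique, not a description of J.P. Morgan's infrastructure.

```python
import time

class TokenBucket:
    """Rate limiter: permits short bursts, caps the sustained rate."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; deny the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In an agentic setting, a bucket per agent credential would absorb a price-matching burst up to `capacity`, then throttle the agent to `rate` transactions per second rather than failing the whole system.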
As a global, regulated entity, our focus lies in building infrastructure that allows clients to benefit from speed and automation, while maintaining the same level of trust and control they expect from traditional payment environments.
For the merchants in the current closed beta, what has been the biggest friction point in the handoff between Mirakl's product discovery layer and J.P. Morgan's execution layer? Is it identity verification, or is it the agents' inability to handle 'edge case' checkout logic (like complex shipping or tax calculations)?
A key starting point for merchants is having clean, accessible, rich product data. If product data is incomplete or not structured for agent discovery, that affects whether an agent can find a product or complete a transaction accurately and confidently.
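To make the product-data point concrete, here is a hypothetical contrast between an agent-hostile listing and an agent-ready one; the field names and the required-field rule are illustrative, not any marketplace's actual schema.

```python
# The same product as free text (hard for an agent) vs. structured data.
unstructured = "Blue running shoe sz10 FREE SHIP!!"

agent_ready = {
    "title": "Trail Running Shoe",
    "color": "blue",
    "size_us": 10,
    "price": {"amount": 89.99, "currency": "USD"},
    "availability": "in_stock",
    "shipping": {"cost": 0, "estimate_days": 3},
}

def discoverable(record) -> bool:
    """An agent can act confidently only when the required fields are present."""
    required = {"title", "price", "availability"}
    return isinstance(record, dict) and required <= record.keys()
```

An agent can compare `agent_ready` against a mandate's category and price limits directly; the free-text listing forces it to guess.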
Beyond product data, there are open questions the whole industry is still working through, including: what does “consumer consent” mean when a human isn't present in the flow, and what happens when an agent misinterprets a prompt? Both of those create real friction at the discovery-to-execution handoff, and they don’t have clean answers yet.
You recently stated that the differentiator in this era is governance, not AI. How do you "police" the agents?
Autonomous shopping can’t scale without governance. We expect monitoring and management of agentic transactions to come from common frameworks: transparency into what agents are doing and consistent interaction patterns across participants, alongside agreed data-use and consent standards.
Individual solutions don’t solve this at the ecosystem scale. Common frameworks that everyone participates in, with clear transparency and consistent interaction patterns, will be critical to enable safe participation and scalable trust across the ecosystem.


