
Know Your Agent (KYA): enabling autonomous financial services. The topic will be analysed at Banking 4.0.

22 September 2025

As AI agents begin opening accounts, logging in, and making payments, the traditional identity and fraud stack is collapsing. Discover why “Know Your Agent” (KYA) is the new frontier in digital identity.

An article written by Jelena Hoffart, Director of Identity Value Chain Expansion @ Mastercard


The year 2024 was marked by the rise of the AI agent, and 2025 is gearing up to be the year of the autonomous AI agent, with the launch of OpenAI’s Operator. As noted, Operator requires human supervision for certain tasks, including financial transactions, meaning consumers must take over to enter payment information, for example. But this is a necessary interim step, because we lack the digital identity infrastructure to allow autonomous transactions. Enabling autonomous operations requires defining and implementing a much higher trust threshold across the digisphere than currently exists.

At the same time, many of the fraud signals in widespread use today will be disrupted and become less effective as agents are adopted across the economy, eroding trust in the ecosystem. The existing identity and fraud stack will therefore need to be fundamentally rewritten for when AI agents, rather than humans, complete account opening, authentication and authorization workflows.

As many singular fraud-detection signals are disrupted by the emergence of AI agents, it becomes even more crucial to aggregate, contextualize and cross-link as many different signals as possible to create a more holistic view of the human or bot. And until the existing identity and fraud stack adapts to these changes, we cannot securely process bot-initiated payments. Solving the bot identity, authentication and authorization problem — or, as we coin it, KYA — is a crucial precursor to enabling autonomous payments.
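To make the idea concrete, here is a minimal sketch of how a handful of signals might be aggregated into a single trust score. It is illustrative only: the signal names, weights and threshold are assumptions, not any vendor’s actual model.

```typescript
// Illustrative only: aggregate independent identity and fraud signals into one
// trust score. Signal names, weights and the decision threshold are hypothetical.

interface SignalResult {
  name: string;   // e.g. "device_fingerprint", "behavioral", "location"
  score: number;  // 0 = certain bot/fraud .. 1 = certain trusted
  weight: number; // how much this signal counts, 0..1
}

function aggregateTrustScore(signals: SignalResult[]): number {
  // Weighted average; a real system would also model correlations between
  // signals instead of treating them as independent.
  const totalWeight = signals.reduce((sum, s) => sum + s.weight, 0);
  if (totalWeight === 0) return 0; // no evidence: fail closed
  return signals.reduce((sum, s) => sum + s.score * s.weight, 0) / totalWeight;
}

const session: SignalResult[] = [
  { name: "device_fingerprint", score: 0.9, weight: 0.4 },
  { name: "behavioral",         score: 0.2, weight: 0.3 }, // uniform, bot-like input
  { name: "location",           score: 0.5, weight: 0.3 }, // static data-center IP
];

const trust = aggregateTrustScore(session);
console.log(trust >= 0.7 ? "allow" : "step up or block", trust.toFixed(2));
```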

In the following sections, we will explore how each of these workflows could be disrupted by autonomous agents.

Account onboarding

For humans

When a human applies for an account at Bank ABC, the human is asked for their full name, date of birth, address and Social Security Number (SSN) (or other personhood identifier) as part of mandated Know Your Customer (KYC) regulation. In many countries, KYC is satisfied through document verification and a liveness check. This process prompts a user to take a photo or video of a government document, which is then matched with a selfie taken by the same user. The bank must make a good-faith effort to ensure the person signing up for the account matches the credentials.

For AI agents

While KYC vendors have built large businesses verifying humans, what happens when a bank needs to verify that a bot is acting on behalf of the person it claims to represent? The question becomes twofold: is this human who they say they are, and is this bot permitted to act on this human’s behalf to perform this action? While the first question is solved via standard KYC processes, answering the second will require an entirely new process to identify and verify agents.

In the short term, agents opening a bank account will still require significant human intervention to complete the process. Because agents will ‘call’ the human to complete the ‘last mile’ interaction, incumbent KYC vendors will not yet need to prove that the bot is authorized to act on the human’s behalf. Take document verification, for example. While a bot could access its human’s 1Password account or camera roll and submit a picture of their ID or passport (but only if the human already has one saved and accessible), it will still need to ask the human to take a selfie to complete the liveness verification. This process will become increasingly untenable: it prevents bots from acting completely autonomously and adds friction to onboarding, which results in lost revenue. While a bot operates 24/7, a human will not respond to a 4 a.m. request to complete KYC. Onboarding methods that require a human in the loop, like document verification with selfie liveness checks, will be deprioritized in the AI agent identity and fraud stack.

Conversely, the preference for AI agents to operate autonomously will accelerate opt-in and usage of identity credentials held in wallets for account opening. Wallets provide the container for humans to create and store digital ‘reusable’ identity credentials, which their agents could later use without further human intervention. This is exactly the frictionless account opening process that will be favored in a bot-first world. As we explore in the subsequent sections, wallets could also go one step further and provide useful infrastructure for new forms of credentials denoting the bot’s permitted activities, such as transaction guidelines.
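As a sketch of what such a wallet-held credential could look like, the shape below is loosely modeled on the W3C Verifiable Credentials data model. The ‘AgentMandateCredential’ type and every field in it are hypothetical, invented here for illustration.

```typescript
// Illustrative shape, loosely modeled on W3C Verifiable Credentials.
// "AgentMandateCredential" and its fields are hypothetical, not a published spec.

interface AgentMandateCredential {
  type: ["VerifiableCredential", "AgentMandateCredential"];
  issuer: string; // DID of the human's wallet, e.g. "did:example:alice"
  credentialSubject: {
    agentId: string;            // the bot this human authorizes
    permittedActions: string[]; // e.g. ["open_account", "book_travel"]
    spendLimitPerTxUsd: number; // hypothetical per-transaction cap
    validFrom: string;          // ISO 8601 timestamps
    validUntil: string;
  };
  proof: { type: string; jws: string }; // signature binding credential to issuer
}

const mandate: AgentMandateCredential = {
  type: ["VerifiableCredential", "AgentMandateCredential"],
  issuer: "did:example:alice",
  credentialSubject: {
    agentId: "did:example:alice-agent-1",
    permittedActions: ["book_travel"],
    spendLimitPerTxUsd: 10_000,
    validFrom: "2025-06-01T00:00:00Z",
    validUntil: "2025-06-30T23:59:59Z",
  },
  proof: { type: "JsonWebSignature2020", jws: "…" }, // placeholder, not a real signature
};
```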

Account authentication

For humans

In addition to KYC, financial institutions employ many other layers of defense, orchestrating 10+ vendors in their identity and fraud stack to create a probabilistic determination of a human’s authenticity. Once approved for an account, financial institutions want to ensure customers can seamlessly log in and use their account while keeping fraudsters out, especially bots, which are seen as bad actors by default. While primitive authentication methods — usernames/passwords, CAPTCHAs and magic links — are still widely used, sophisticated enterprises use passive authentication methods to reduce friction. These include facial recognition and biometrics, behavioral analysis (examining user behavior patterns to detect anomalies), device fingerprinting (identifying and tracking devices based on unique hardware and software characteristics) and location intelligence (using geographic data to assess the legitimacy of a user’s location and mitigate spoofing).

For AI agents

Existing authentication systems rely on ‘bot detection’ techniques that identify whether a human or a bot is interacting with a website. If a bot is detected rather than a human, the action is blocked. To prepare for a world of AI agents, existing authentication vendors will need to evolve from merely detecting a bot to determining whether it is a ‘good’ or ‘bad’ bot. We therefore expect that many existing methods of authentication will need to adapt or will be deprioritized in the identity and fraud stack of the future.
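A minimal sketch of that shift in logic: instead of ‘bot detected, block’, the gate asks whether a detected agent presents a verifiable attestation and is acting within scope. The verifyAgentAttestation primitive is assumed; no such deployed standard exists today.

```typescript
// Sketch: evolving "block all bots" into "allow verified, in-scope bots".
// verifyAgentAttestation is a hypothetical KYA primitive, not an existing API.

type Verdict = "allow" | "block" | "step_up";

interface AgentAttestation {
  agentId: string;
  ownerVerified: boolean;        // did KYC succeed for the human owner?
  permittedActions: string[];
}

// Placeholder for cryptographic verification of a wallet-issued attestation.
// A real system would check signature, expiry and revocation; stubbed here.
function verifyAgentAttestation(token: string): AgentAttestation | null {
  return token === "demo-valid-token"
    ? { agentId: "agent-123", ownerVerified: true, permittedActions: ["login"] }
    : null;
}

function gate(isBot: boolean, action: string, token?: string): Verdict {
  if (!isBot) return "allow";                     // human path unchanged
  const att = token ? verifyAgentAttestation(token) : null;
  if (!att || !att.ownerVerified) return "block"; // unidentifiable bot: bad by default
  return att.permittedActions.includes(action) ? "allow" : "step_up";
}

console.log(gate(true, "login", "demo-valid-token")); // "allow"
console.log(gate(true, "login"));                     // "block"
```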

It is no surprise then that one of the leading human authentication companies, Okta, is future-proofing their core business by launching an authentication offering for AI agents and investing in an emerging AI authentication startup, Anon.

Agent authentication will be essential infrastructure for autonomous bot interactions, enabling agents to visit sites and log in on our behalf — safely and securely. In our view, this bot authentication layer will yield a generational multi-billion-dollar outcome. While authentication volume already outnumbers onboarding volume, the emergence of AI agents will widen this disparity tenfold. While your bot may only open a few bank accounts a year, it will make hundreds of thousands — if not millions — of authentications as it passively roams the Internet to perform tasks on your behalf, initiating many more authentication API calls than a human would. This will result in exponential growth in the total addressable market (TAM) for authentication (and an obvious need for guardrails).

Behavioral analysis

AI agents like Operator function on top of existing browser technologies and interact with a web page by typing, clicking and scrolling. The standardization of AI agents’ behaviors will eliminate the anomalies that behavioral vendors look for when analyzing swipe patterns and keystrokes. We think that AI agents will all ‘act’ the same — e.g. clicking and scrolling in a uniform manner — and will not reveal the subtle mannerisms we see when malicious humans interact with web pages, such as copying and pasting an SSN. Behavioral analysis may therefore become a less effective signal when applied to agents. Behavioral signals will remain useful for identifying when a human, rather than a bot, is interacting with a site, as the contrast between erratic human behavior and standardized agent behavior will only sharpen.
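To illustrate why uniformity itself becomes the signal, consider a toy check: near-constant inter-keystroke timing suggests scripted, agent-like input, while human input is comparatively jittery. The threshold below is an illustrative assumption.

```typescript
// Toy sketch: uniform inter-event timing suggests an agent; erratic timing, a human.
// The 0.15 coefficient-of-variation threshold is an illustrative assumption.

function coefficientOfVariation(intervalsMs: number[]): number {
  const mean = intervalsMs.reduce((a, b) => a + b, 0) / intervalsMs.length;
  const variance =
    intervalsMs.reduce((a, b) => a + (b - mean) ** 2, 0) / intervalsMs.length;
  return Math.sqrt(variance) / mean;
}

function looksScripted(keystrokeIntervalsMs: number[]): boolean {
  return coefficientOfVariation(keystrokeIntervalsMs) < 0.15;
}

console.log(looksScripted([100, 101, 99, 100, 100])); // true: machine-steady
console.log(looksScripted([80, 210, 95, 400, 130]));  // false: human-jittery
```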

Facial recognition and biometrics

Owing to the rise of deepfake technology that can evade best-in-class facial recognition and selfie liveness checks, cloud-based biometrics have already fallen out of favor (device-based biometrics like FaceID have proved more resistant). We expect this decline to continue given the requirement for autonomous agent authentication. With agents, biometrics turn from a passive authentication measure — one that happens with minimal friction — into an active authentication method requiring agents to ‘call’ the human to complete the ‘last mile’ selfie interaction.

Device intelligence

Device intelligence signals will need to adapt in an AI agent world. When a human operates their phone, device signals can tell that the human is using an iPhone 16 running on iOS 18.4.1, for example. Or, when a human is using their laptop, device signals will show the IP address linked to an Edge 135.0.0 browser and Windows 11 operating system. However, OpenAI’s Operator uses a virtual machine, which makes it difficult to tell if a browser is connected to a real human or not. All bots running on Operator point back to the same Google Chrome browser with a standard IP range in a middle-of-America data center. All agents will likely return standardized device signals as opposed to the unique device attributes we currently glean from humans operating on their personal mobile devices or laptops.

The team at Fingerprint is on the cutting edge of building device signals to detect AI agents. Valentin Vasilyev, Fingerprint CTO, predicts that

[a]s more agent-driven browsers operate in the cloud, running within virtualized environments, automation tools will make AI agents appear identical, leading to an increase in false negatives. To address this, new classification techniques based on statistical anomaly detection and device capability analysis will be required.
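Read in code, Vasilyev’s prediction might look like the toy check below: when device fingerprints that should be diverse collapse into a handful of identical values, the fingerprint population itself becomes the anomaly. The diversity threshold is an assumption.

```typescript
// Toy sketch: identical cloud-agent fingerprints show up as a collapse in
// fingerprint diversity across sessions. The 0.5 threshold is an assumption.

function fingerprintDiversity(fingerprints: string[]): number {
  // Fraction of distinct fingerprints: near 1 = diverse human devices,
  // near 0 = many sessions sharing one virtualized environment.
  return new Set(fingerprints).size / fingerprints.length;
}

const recentSessions = [
  "chrome135-win11-a9f3", "chrome135-win11-a9f3", "chrome135-win11-a9f3",
  "safari18-ios18-77c1",  "chrome135-win11-a9f3", "chrome135-win11-a9f3",
];

if (fingerprintDiversity(recentSessions) < 0.5) {
  console.log("fingerprint diversity collapsed: likely agent-farm traffic");
}
```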

Location intelligence

In the last few years, precision geolocation intelligence has emerged as a valuable signal to protect against account takeover. Mobile and laptop devices are an extension of us, and we carry them everywhere. Moreover, every human’s location pattern is unique and can therefore be used to build a distinctive behavioral profile: where they typically reside, where they travel and what devices they carry with them. Best-in-class vendors like Incognia can track a human’s location to within 10 feet, pinpointing the exact apartment building, floor and unit. However, the location signals collected from agents will be far less informative, showing only the static location of the data center, and will not build up the same location pattern over time as a human’s movement does. We still believe that location intelligence will play a critical role, especially in gig economy use cases where the role of a human delivery or rideshare driver cannot be outsourced to an agent!

Account authorization and payments

The existing financial payments infrastructure is wholly unprepared for when agents can carry out actions and payments on behalf of humans. Think about it this way: If a self-driving car gets in a wreck while in self-driving mode, who is responsible for the cost of the damages? To determine who bears liability, we would logically look to identify the owner of the self-driving car, which is not difficult because we have clear records of ownership via vehicle titles. But if the owner was not behind the wheel when the crash happened, are they liable? Or does the manufacturer of the car bear liability because they enabled the vehicle to be on the road in the first place?

Similarly, who is responsible when a human initiates a chargeback on a $10K vacation that a bot bought? Just as in the scenario of a self-driving car wreck, we would first want to understand who owns the bot. However, without KYA infrastructure, we cannot answer which human owns that bot, let alone whether that bot was permitted to spend $10K on anything in the first place. One could easily imagine that, without this identity infrastructure, consumers will cleverly claim that their bot ‘hallucinated’, and that since the bank allowed the transaction to go through, the bank should repay them (and then it is the bank’s problem to settle with the merchant). Therefore, solving the bot identity, authentication and authorization problem — or, as we coin it, KYA — is a critical precursor to enabling secure autonomous payments. Without it, there are no guardrails in place.

This is exactly where wallets can serve as useful infrastructure. Not only can wallets hold a digital passport or ID for bots to use, but they can also store credentials dictating an agent’s permitted activities. If a human does not give clear directions to their bot — such as buying a vacation to this location, within this cost range, on these dates — then the liability for a transaction gone wrong should reside with the human, not with the bank or merchant. If such an authorization credential does exist, it will be up to the payment intermediaries to check and confirm the activity before accepting a payment. Wallets already hold payment credentials that a bot could use to transact in the future, so leveraging that same infrastructure to hold identity credentials establishing human and bot authenticity, as well as a bot’s permitted activity, is a no-brainer.
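As a sketch of what that intermediary check could look like, reusing the hypothetical mandate shape from the wallet example above: a bot-initiated payment is accepted only if it falls within the mandate’s action list, spend limit and validity window. Field names are illustrative assumptions.

```typescript
// Sketch: a payment intermediary checks a bot-initiated transaction against the
// human-issued mandate before accepting it. Field names are illustrative.

interface Mandate {
  permittedActions: string[];
  spendLimitPerTxUsd: number;
  validFrom: Date;
  validUntil: Date;
}

interface Transaction {
  action: string; // e.g. "book_travel"
  amountUsd: number;
  timestamp: Date;
}

function authorize(tx: Transaction, mandate: Mandate): boolean {
  return (
    mandate.permittedActions.includes(tx.action) &&
    tx.amountUsd <= mandate.spendLimitPerTxUsd &&
    tx.timestamp >= mandate.validFrom &&
    tx.timestamp <= mandate.validUntil
  );
}

const vacationMandate: Mandate = {
  permittedActions: ["book_travel"],
  spendLimitPerTxUsd: 10_000,
  validFrom: new Date("2025-06-01"),
  validUntil: new Date("2025-06-30"),
};

// A vacation purchase inside the window, scope and spend cap is accepted;
// anything else is declined, and liability is traceable to the mandate terms.
console.log(authorize(
  { action: "book_travel", amountUsd: 9_500, timestamp: new Date("2025-06-10") },
  vacationMandate,
)); // true
```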
