The AI Impersonator: Runtime Defense Against Machine-Generated Deception with Dmitri Bogatenkov

The Talsec Mobile App Security Conference in Prague was a two-day, invite-only event on fraud, malware, and API abuse in modern mobile apps, held at Chateau St. Havel on November 3–4, 2025, and hosted by Talsec, freeRASP, and partners. It brought together leading experts and practitioners to strengthen the mobile AppSec community, connect engineers with attackers and defenders, and share practical techniques for high‑stakes sectors like banking, fintech, and e‑government.

While AI offers incredible advancements, it has also become a powerful weapon for bad actors. Traditional security advice, such as looking for typos in scams, is now outdated as generative AI can create perfectly polished deceptions. Today’s apps require a "sixth sense" to combat a new adversary: the AI impersonator.

The Rise of the AI Impersonator

The AI impersonator is a suite of technologies working together to hack human belief rather than just code. This toolkit includes:

  • Generative Text: Creating perfect, believable messages.

  • Voice Synthesis: Replicating a specific voice from just a few seconds of audio.

  • Deepfakes: Generating realistic video.

This threat is fueled by the democratization of advanced AI models, the massive amount of personal data available on social media for tailored attacks, and the fact that mobile devices are now the epicenter of personal identity and finance.

Anatomy of an Attack

A typical AI impersonator attack is hyper-personalized and bypasses traditional security.

  • Contextual Hook: Attackers use AI to scrape personal information, such as recent travel, to craft urgent and believable SMS alerts.

  • The Master Stroke: After a user clicks a malicious link, they receive a call from a fully synthetic AI voice.

  • Social Engineering: The AI, acting as a professional security agent, convinces the user to read back a legitimate one-time password (OTP).

Even if an app's encryption is unbroken, the human remains the entry point.

The Digital Detective: A New Playbook

To counter these fakes, developers must build a "digital detective" inside their applications that follows three steps: gathering clues, connecting dots, and taking action.

Gathering Clues

The detective collects two types of data without compromising privacy:

  • Behavioral Clues: Analyzes unique user rhythms, such as typing patterns and swiping velocity, to detect if a user is acting under duress.

  • Contextual Clues: Checks the "scene of the crime" for runtime signals like rooted devices, screen overlays, SIM swaps, or geolocation anomalies.
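The two clue types above can be sketched as a simple on-device signal collector. This is an illustrative, platform-agnostic sketch in Python; the signal names, weights, and scoring are assumptions for clarity, not part of any specific SDK:

```python
from dataclasses import dataclass

@dataclass
class Clues:
    """Runtime signals gathered on-device; raw data never leaves the phone."""
    # Behavioral clues: deviation from the user's learned rhythm (0.0 = normal, 1.0 = max)
    typing_deviation: float = 0.0
    swipe_velocity_deviation: float = 0.0
    # Contextual clues: facts about the "scene of the crime"
    device_rooted: bool = False
    overlay_detected: bool = False
    sim_swapped_recently: bool = False
    geo_anomaly: bool = False

def behavioral_trust(clues: Clues) -> float:
    """Score from 1.0 (moves like the real user) down to 0.0 (likely duress or a bot)."""
    penalty = 0.5 * clues.typing_deviation + 0.5 * clues.swipe_velocity_deviation
    return max(0.0, 1.0 - penalty)
```

A production system would learn each user's baseline rhythms over time; the fixed weights here simply show how multiple behavioral deviations fold into a single trust score.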

Connecting the Dots and Decisive Action

A policy engine evaluates these clues together rather than in isolation. For example, if a user adds a new payee while showing low behavioral trust and a geolocation mismatch, the system can trigger a precise, user-friendly intervention such as a biometric Face ID step-up, rather than a blunt 24-hour account block.
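A minimal version of that decision logic might look like the following. The event names, trust thresholds, and action set are illustrative assumptions, not a real policy-engine API:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    BIOMETRIC_STEP_UP = "biometric_step_up"  # e.g. a Face ID re-check
    BLOCK = "block"

def evaluate(event: str, behavioral_trust: float, geo_mismatch: bool) -> Action:
    """Combine clues: a risky action + low trust + a geo anomaly together
    warrant a step-up, not an outright block."""
    high_risk_event = event in {"add_payee", "raise_transfer_limit"}
    if high_risk_event and behavioral_trust < 0.5 and geo_mismatch:
        # Precise, user-friendly intervention instead of a blunt 24-hour block.
        return Action.BIOMETRIC_STEP_UP
    if behavioral_trust < 0.2:
        return Action.BLOCK
    return Action.ALLOW
```

The key design point is that no single clue is decisive: a geolocation mismatch alone (a traveling user) or slightly odd typing alone (a tired user) passes through, while the combination on a high-risk action triggers a targeted check.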

Moving Forward

To protect users against the AI impersonator, organizations should:

  1. Integrate Specialized SDKs: Use tools capable of gathering behavioral and contextual clues.

  2. Design Intelligent Policies: Move away from restrictive, one-size-fits-all blocks toward risk-based interventions.

  3. Audit Security Copy: Ensure microcopy is context-aware to prevent user confusion, for example warning users that their bank will never ask them to read out an OTP during a call.

As AI continues to perfect the look of trust, developers must focus on perfecting the real-time "feel" of trust on the device.

Thank you, Dmitri Bogatenkov, for your insightful presentation on defending against machine-generated deception. Your discussion on shifting from simple code security to an intelligent runtime defense through a "digital detective" mindset was especially impactful. We appreciate you sharing your expertise and strategies for gathering behavioral and contextual clues to better protect users from the evolving threat of AI impersonators.
