Root detection is a security mechanism used by Android applications to determine if a device has been rooted — meaning the user has obtained superuser (root) access to the operating system.
For apps that handle sensitive data (like banking or enterprise apps), it’s important to know if the device is rooted because a rooted device no longer adheres to the standard security model that Android enforces.
The purpose of root detection is to protect applications and data from the risks associated with rooted devices. If an app detects that the phone is rooted, it may respond by refusing to run or by limiting certain features to safeguard information. For example, many financial apps will stop working or show a warning if they sense the device is rooted. By doing so, the app prevents attackers (or even the user) from exploiting the elevated privileges that come with rooting to compromise the app’s security. In short, root detection is about ensuring the app is running in a trusted environment — if the device is deemed untrusted (rooted), the app can take precautions or block usage to protect sensitive data and operations.
Root detection involves scanning the device to identify markers or modifications that suggest it has been rooted. This might include checking for altered system files, the presence of superuser binaries (like the su binary), or known indicators of rooting tools such as Magisk or root-requiring utilities like Franco Kernel Manager. Essentially, it verifies that the device’s operating system is in its intended, secure state.
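One of the simplest markers to check is the presence of an su binary on the filesystem. The sketch below is a minimal Java illustration; the path list is an assumption for demonstration, not an exhaustive set — production detectors maintain longer, regularly updated lists.

```java
import java.io.File;

public class RootChecks {
    // Filesystem locations where rooting tools commonly install the su binary.
    // Illustrative only; real detectors use longer, regularly updated lists.
    private static final String[] SU_PATHS = {
        "/system/bin/su",
        "/system/xbin/su",
        "/sbin/su",
        "/data/local/bin/su",
        "/data/local/xbin/su"
    };

    /** Returns true if any of the given paths exists on the filesystem. */
    public static boolean anyPathExists(String... paths) {
        for (String path : paths) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    /** Returns true if a known su binary location is present on the device. */
    public static boolean suBinaryPresent() {
        return anyPathExists(SU_PATHS);
    }

    public static void main(String[] args) {
        System.out.println("su binary present: " + suBinaryPresent());
    }
}
```

A real detector would combine this with other signals (package checks, property checks), since any single file test is trivially bypassed by root hiders.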
What Is the Concept of Rooting/Privileged Access and Its Risks?
Rooting an Android device means unlocking its system to gain privileged access, similar to having administrator rights on a computer. This process removes restrictions imposed by manufacturers and operating system developers, allowing users to modify core system files, uninstall pre-installed apps, and install software that normally wouldn’t be permitted.
Many users root their devices to enhance performance, customize the interface, or install specialized applications. However, rooting comes with significant risks. By bypassing Android’s built-in security measures, the device becomes more vulnerable to malware, hacking attempts, and unauthorized access. Malicious apps with root access can manipulate sensitive data or compromise system integrity.
Additionally, rooting may lead to unexpected system instability, causing apps or essential functions to malfunction. It often voids the manufacturer’s warranty and can prevent users from receiving critical security updates, leaving the device exposed to new threats. While rooting provides more control over the device, it also demands a strong understanding of its risks and potential consequences.
Conclusion
Hooking in mobile apps is a powerful technique with two faces: it can be a helpful tool for developers and security researchers, but a dangerous weapon for attackers. In this deep dive, we explored how hooking works and why it poses such a significant threat to mobile app security when used maliciously. We discussed the many risks of hooked apps – from data theft and privacy invasion to fraud and cheating – which make it clear that ignoring this threat is not an option for sensitive apps.
In conclusion, hooking in mobile apps is a deep and important topic. As we’ve dived into its depths, remember that security is an ongoing journey. By staying informed and proactive – whether through regular updates, adopting new defensive technologies, or learning from case studies – you can keep your mobile apps one step ahead of attackers. Hooking and hook detection might sound complex, but with an approachable strategy (and perhaps re-reading this guide as needed!), any developer can start incorporating these protections. Together, through better security practices, we can make the mobile ecosystem safer and more trustworthy for everyone.
Conclusion
Obfuscation is an indispensable tool in the mobile app security arsenal. By making your application's code significantly harder to understand, you deter attackers and protect your intellectual property and sensitive data.
Talsec advocates for a pragmatic approach, emphasizing the crucial role of class name and string obfuscation as fundamental security layers for all sensitive applications. While acknowledging the potential benefits of control-flow obfuscation for specific algorithm protection, we recommend a targeted strategy involving isolating sensitive code in C/C++ and applying specialized obfuscation tools to minimize risks and ensure a robust and stable application.
At Talsec, we are dedicated to providing you with the tools and knowledge necessary to build secure and resilient mobile applications. By understanding the nuances of obfuscation and adopting layered security products (RASP, App Hardening, Malware Detection, AppiCrypt) alongside carefully chosen obfuscation techniques, you can significantly enhance your application's defenses against the ever-evolving threat landscape.
Hook Detection
Learn how hook detection protects mobile apps from runtime code manipulation, blocking tools like Frida and Xposed.
Hook detection on Android and iOS is an essential security measure used to prevent malicious manipulation of app behavior at runtime. Hooking allows attackers to intercept and modify function calls, enabling activities such as bypassing authentication, altering in-app purchases, or extracting sensitive data. On Android, hooking frameworks like Frida, Xposed, and LSPosed are commonly used by attackers to inject and execute custom code. Similarly, on iOS, tools like Cycript and Frida enable runtime manipulation of app functions. To counter these threats, developers implement hook detection by monitoring for suspicious process injections, checking for known hooking libraries, and enforcing runtime integrity verification.
Despite these defenses, attackers continuously refine their evasion techniques to bypass detection mechanisms. For example, they use obfuscation, custom-built hooking tools, or even modify an app's binary to disable security checks. As a result, effective hook detection relies on a multi-layered approach, including runtime code integrity verification, API call monitoring, and heuristic analysis of suspicious behaviors. Additionally, integrating hook detection with other security measures such as jailbreak and root detection, runtime application self-protection (RASP), and app hardening significantly enhances resilience against dynamic attacks. By continuously adapting security strategies, developers can reduce the risk of unauthorized modifications and maintain the integrity of their applications.
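As a concrete example of checking for known hooking libraries on Android, an app can inspect which libraries the kernel has mapped into its own process by reading /proc/self/maps. The sketch below is a hedged illustration; the substring deny-list is an assumption and real detectors cover many more indicators.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class HookChecks {
    // Substrings associated with well-known hooking frameworks. Illustrative only.
    private static final List<String> SUSPECT_LIBS =
        Arrays.asList("frida", "xposed", "substrate", "libhooker");

    /**
     * Scans /proc/self/maps (the kernel's view of everything mapped into this
     * process) for library names associated with hooking frameworks.
     */
    public static boolean suspiciousLibraryLoaded() {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/self/maps"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String lower = line.toLowerCase();
                for (String lib : SUSPECT_LIBS) {
                    if (lower.contains(lib)) {
                        return true;
                    }
                }
            }
        } catch (IOException e) {
            // If the maps file cannot be read, treat the result as inconclusive.
            return false;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("hooking library detected: " + suspiciousLibraryLoaded());
    }
}
```

Note that attackers who control the device can also tamper with this check itself, which is why it should be one layer among several rather than the sole defense.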
What Is the Concept of Hooking and Its Security Implications?
Hooking in mobile apps is a technique where an external code snippet intercepts and modifies the normal execution of an application at runtime. In simpler terms, hooking lets someone “attach” into an app’s internal functions or APIs, allowing them to see or change what the app is doing without altering the app’s original source code. This can be done using special tools or frameworks that inject code into the running app process. For example, a hooking tool might intercept a login function call to capture your password or change a value in memory before the app uses it.
Hooking is a double-edged sword. On one side, developers and researchers use hooking frameworks for legitimate purposes – debugging, performance monitoring, or testing security. These tools help inspect apps on the fly and can be invaluable for finding bugs. On the other side, malicious actors can exploit hooking to tamper with apps in ways the developers never intended. A hooking framework essentially gives an attacker the power to intercept and modify app behavior at runtime.
This means an attacker could read sensitive data in memory, bypass security checks, or alter how the app functions. In short, hooking can turn an otherwise secure app into a vulnerable one if misused.
Security implications: Because hooking enables runtime tampering, its implications are serious. If an attacker successfully hooks into a mobile app, they might gain unauthorized access to user data, trick the app into bypassing critical security measures, or insert new malicious behaviors. Often, performing hooking requires the device to be in a state that allows such deep intervention – for instance, an Android device might be rooted or an iPhone jailbroken to remove the usual restrictions on apps. Such devices are more susceptible to hooking because the operating system’s normal security barriers are lowered. For this reason, many secure apps already warn against or outright block usage on rooted/jailbroken devices. However, sophisticated attackers have found ways to hook some apps even without full device compromise (using virtual environments or clever injection techniques), making hook detection an important consideration on all devices.
Introduction
Introduction to root, jailbreak, and hooking detection in mobile security, explaining how these techniques work, how attackers abuse them, and how developers can safeguard apps.
Mobile security faces constant threats from rooting, jailbreaking, and hooking—techniques that grant deep system access but also expose vulnerabilities. Attackers exploit these methods to bypass protections, steal sensitive data, and manipulate apps. Understanding how to detect and defend against these risks is crucial for developers and security professionals. Dive in to explore how root, jailbreak, and hooking detection safeguard apps against sophisticated threats.
What Are the Security Risks of Rooted Devices?
Rooted devices often face enhanced security risks, primarily because the built-in security layers are weakened or bypassed. These risks include:
1) Increased Vulnerability to Malware
Normally, apps on Android are “sandboxed” (kept separate) and your system files are protected — but rooting breaks these protections. Without them, malicious apps can gain deep access to your system. In fact, if malware runs with root permissions, it can do almost anything — it could delete important files, hijack your settings, or even install hidden programs that persist on your device. Additionally, rooted phones often stop receiving official security updates, so any new vulnerabilities remain unpatched, making infections and attacks even more likely.
What Are the Security Risks Associated with Hooked Apps?
When an application has been “hooked” by an attacker, a range of security risks emerge. Below are some of the most significant risks associated with hooked apps:
Privacy Violations: A hooked app can betray its user’s privacy. With hooking, an attacker can monitor user interactions and device sensors through the app. They might log keystrokes and touch inputs (acting as a keylogger), or listen to sensor outputs (microphone, GPS, camera) via the app’s own permissions. This means an app you trust (like a messaging or health app) could, once hooked, be turned into a surveillance tool recording your private data and activities.
Application Tampering and Bypassed Security: By using hooks, attackers can modify an app’s behavior on the fly to bypass security checks or disable protections. For instance, a hook might disable a jailbreak detection function so that the app doesn’t realize the device is compromised. Attackers can also turn off features like certificate pinning or encryption, which are meant to secure communication, thereby enabling man-in-the-middle attacks on supposedly secure connections. In essence, any protective measure within the app (root detection, login checks, payment validations) can potentially be overridden if the hook can intercept the right method. This leads to unauthorized actions such as making in-app purchases for free, accessing content without permission, or performing restricted operations.
Why Is Root Detection Critical for Security?
Allowing a rooted device to run a sensitive application is a huge security risk. When a device is rooted, malicious apps or users with knowledge can effectively break out of Android’s security sandbox. They can read or modify data that should be protected, install spyware, or alter app behavior. For applications that deal with confidential information or perform protected actions, this is unacceptable. Below are a few key scenarios highlighting why root detection is so important:
Banking and Financial Apps — Mobile banking and payment apps handle highly sensitive information (account details, authentication data) and perform privileged operations (like transferring money). If such an app runs on a rooted phone, a piece of malware on that device could use root permissions to steal credentials or tamper with transactions. For this reason, most banking apps use root detection and will refuse to run on rooted devices.
This ensures that things like your bank transactions aren’t happening in an environment where another app could be recording your keystrokes or injecting fraudulent behavior.
Enterprise Security (MDM and Corporate Apps) — Companies that allow employees to access work email or confidential data on their phones enforce strict device policies. A rooted device is typically considered “untrusted” in enterprise settings.
Which Advanced Detection Methods and Tools Can Enhance Jailbreak Detection?
Basic detection methods can be effective against casual jailbreak users, but as mentioned, advanced users may employ tools to bypass jailbreak detection. For example, tweaks like Liberty Lite or Shadow can intercept and neutralize common detection calls (making jailbreak files invisible to the app, faking fork() results, etc.). To stay ahead, developers and security companies have created more advanced jailbreak detection and protection solutions. Here are some strategies and tools for enhancing jailbreak detection:
Integrity Checks and Anti-Tampering
One advanced approach is to detect if your own app’s code has been modified or if critical functions have been hooked. For instance, you can compute a checksum of your important binary sections in memory and verify it matches the expected value. If a tweak has hooked your functions, the in-memory bytes might differ. While implementing this is quite technical, it raises the bar for attackers. Additionally, employing anti-tamper techniques (like obfuscating the jailbreak check code, detecting if someone is using a debugger to bypass your checks, etc.) falls under advanced methods. These measures make it harder for a jailbreak bypass tweak or an attacker to simply patch out the detection.
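As an illustration of the checksum idea, the sketch below hashes a code artifact (for example, a bundled native library) and compares it against a value recorded at build time. Both `artifactPath` and `expectedHash` are placeholders supplied by your build pipeline; verifying in-memory code sections, as described above, requires additional platform-specific techniques not shown here.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class IntegrityCheck {
    /** Hex-encoded SHA-256 digest of the given bytes. */
    public static String sha256Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // Every compliant JVM is required to provide SHA-256.
            throw new IllegalStateException(e);
        }
    }

    /**
     * Compares the current hash of a code artifact (e.g. a bundled native
     * library) against a value recorded at build time. Both arguments are
     * placeholders supplied by your build pipeline.
     */
    public static boolean artifactUnmodified(String artifactPath, String expectedHash)
            throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(artifactPath));
        return sha256Hex(bytes).equals(expectedHash);
    }
}
```

The expected hash itself should be protected (obfuscated or verified server-side), otherwise an attacker can simply patch the stored value along with the code.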
Why is Hook Detection Crucial for Mobile App Security?
Hook detection is crucial because it’s often the last line of defense against a sophisticated attacker. Modern mobile apps already employ many security measures – encryption, authentication, secure coding practices, etc. However, if an attacker can hook into an app, they may bypass or undermine all those measures from the inside. Here’s why robust hook detection is so important for mobile app security:
• Protecting Sensitive Data: Apps like mobile banking, payment wallets, healthcare, or enterprise apps deal with highly sensitive user data and transactions. If an attacker manages to hook these apps without being noticed, they could steal data or perform fraudulent transactions invisibly. Hook detection helps ensure that if an attacker is trying to do this, the app will catch it and not simply hand over the keys. In industries like finance and healthcare, failing to detect such intrusion could lead to breaches, regulatory penalties, and loss of user trust.
• Maintaining App Integrity: Even if an app isn’t handling bank details, its integrity is important (think of a game or a social media app). Attackers hooking a game can enable cheats, ruining the fair play for everyone. In social or messaging apps, hooking could lead to eavesdropping on private communications. By implementing hook detection, developers ensure their application’s code and logic aren’t being manipulated behind the scenes. It’s about making sure the app the user is running is the genuine, untampered version of the developer’s code.
How Can Mobile Developers Detect Jailbroken Devices?
Detecting a jailbroken device is an important part of jailbreak protection for apps. Developers have devised various methods to check if the device their app is running on has been compromised. There is no single foolproof indicator, so effective iOS jailbreak detection often combines multiple checks. Below are some common techniques mobile developers use to detect a jailbreak:
Checking for File System Artifacts
Jailbreaking usually leaves behind certain files, directories, or apps that are not present on a normal iOS device. By attempting to locate these, an app can infer a jailbreak. Classic examples include checking for the existence of Cydia or other installer apps on the system. For instance, one can check if the path /Applications/Cydia.app exists, or if directories like /private/var/lib/apt/ (which indicates the presence of the APT package manager used by Cydia) are present.
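The logic of such artifact checks is a simple existence test over a list of known paths. The sketch below uses Java purely for illustration; a real iOS implementation would use `FileManager.fileExists(atPath:)` in Swift or Objective-C. The path list is an illustrative assumption, not a complete inventory.

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

public class JailbreakChecks {
    // Paths commonly left behind by jailbreak tools. Illustrative, not exhaustive.
    private static final List<String> JAILBREAK_PATHS = Arrays.asList(
        "/Applications/Cydia.app",
        "/private/var/lib/apt/",
        "/Library/MobileSubstrate/MobileSubstrate.dylib"
    );

    /** Returns true if any known jailbreak artifact is present. */
    public static boolean jailbreakArtifactPresent() {
        for (String path : JAILBREAK_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("jailbreak artifact found: " + jailbreakArtifactPresent());
    }
}
```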
Best Practices for Implementing Hook Detection
Implementing hook detection effectively requires a multi-pronged and thoughtful approach. Here are some best practices for developers and security teams to keep in mind when building hook-resistant mobile apps:
Defense in Depth: Use multiple detection techniques rather than relying on a single check. For example, combine root/jailbreak detection, integrity checks, and checks for known hooking frameworks. An attacker might bypass one layer (say, hiding root status), but still be caught by another (an integrity check catching a modified function). Layers of different checks significantly increase an attacker’s workload to remain undetected.
Secure Critical Code Paths: Identify which parts of your app are most sensitive (login logic, payment processing, encryption, etc.), and apply extra scrutiny and protection to those. You might run additional integrity checks on these functions or even duplicate checks (fail-safe validations) to ensure they haven’t been altered. Some apps implement critical logic on the server-side as much as possible, to reduce what a hook on the client can achieve. For what must reside on the client, consider techniques like code obfuscation and anti-tamper controls so that hooking that code or finding the right spot to hook is more difficult for an attacker.
Obfuscate and Hide Your Detection Logic: If you write hook detection code in your app, assume attackers will try to locate and neutralize it. Use code obfuscation tools (which rename and restructure code) to make it hard for an attacker to identify the detection functions. You can also design your app so that security checks are performed in multiple places and in indirect ways (making it harder to simply patch out one function to disable detection). The goal is to avoid having a single obvious “HookDetector()” function that attackers can target.
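The defense-in-depth practice described above can be sketched as a simple aggregator that runs several independent checks and flags the environment if any one of them fires. The lambda bodies below are placeholders; in a real app each would call an actual detector.

```java
import java.util.List;
import java.util.function.Supplier;

public class LayeredDetection {
    /**
     * Runs every supplied check; a single positive result is enough to flag
     * the environment as compromised. Keeping the checks independent means
     * bypassing one layer does not bypass the others.
     */
    public static boolean environmentCompromised(List<Supplier<Boolean>> checks) {
        for (Supplier<Boolean> check : checks) {
            if (Boolean.TRUE.equals(check.get())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Placeholder lambdas; in a real app these would call root/jailbreak
        // detection, integrity verification, and hooking-framework scans.
        List<Supplier<Boolean>> checks = List.of(
            () -> false,  // e.g. rootDetected()
            () -> false,  // e.g. integrityViolated()
            () -> false   // e.g. hookingFrameworkLoaded()
        );
        System.out.println("compromised: " + environmentCompromised(checks));
    }
}
```

In practice the aggregation logic itself should be obfuscated and duplicated, per the best practices above, so that neutralizing one call site does not disable all detection.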
Talsec's Perspective: A Pragmatic Approach to Obfuscation
At Talsec, we firmly believe that a layered security approach is the most effective way to protect mobile applications. Obfuscation is a crucial component of this strategy, acting as a vital deterrent against static analysis. However, we also recognize the trade-offs associated with different obfuscation techniques.
Our Stance on Obfuscation Types
Class Name Obfuscation and String Obfuscation
Podcast: iOS Keychain vs Android Keystore
Unlock the secrets of mobile security! In this insightful podcast, Devyany Vij (Senior Product Security Engineer @ Tide), Oleksandr Leushchenko @olexale (Google GDE, Engineer Manager @ Tide), and Tomáš Soukal (Senior Mobile Security Dev, Product Owner at Talsec) dive deep into the differences and unique capabilities of the Android Keystore and iOS Keychain—two essential tools every app developer should understand.
Discover how each platform protects sensitive data like encryption keys and passwords, what makes them secure, and how their access controls and hardware integrations work behind the scenes. Whether you’re building for Android, iOS, or both, you’ll get practical tips and clear explanations to help you choose the right approach for your next project. Perfect for developers who want to level up their app security knowledge—don’t miss it!
Big thanks to Majid Hajian @mhadaily (Azure & AI advocate @ Microsoft, Dart & Flutter community leader) for help with the production of this podcast
2) Data Theft and Privacy Risks
When your device is rooted, apps can bypass the usual privacy controls. This means an unauthorized app (or a hacker who slips malware onto your phone) could access all of your personal data — things like saved passwords, emails, text messages, photos, and banking information are no longer off-limits. Android’s normal data separation is undermined, so sensitive information that would typically be protected can be read or stolen by any app with root access. For example, a seemingly harmless app could secretly steal your contacts or log your keystrokes to capture passwords. In short, rooting makes it possible for attackers or rogue apps to spy on you and harvest your private data, creating serious privacy risks.
3) Compromised System Integrity
With root access, a malicious actor can take complete control of your device’s system, which threatens the integrity of your phone. For instance, some malware (known as rooting trojans) are designed to gain full remote control over a rooted phone — letting an attacker do anything as if they were holding the device in their hand. This could include installing backdoor programs that secretly grant ongoing access to your phone.
In practice, an attacker who infiltrates a rooted device could modify system files, change critical settings, or install hidden spyware without you knowing. They might even install rootkits (deeply buried malicious software) to hide their presence. In essence, a rooted phone can be hijacked, meaning a hacker could remotely use your device or alter it in dangerous ways that you never intended, undermining the phone’s normal operation and security.
Data Exfiltration: This happens when attackers steal sensitive data by tampering with how an app works. For example, they can insert malicious code (called a hook) that secretly captures personal info like login credentials, credit card numbers, or private messages while the app is running. They can also spy on all network activity—like API calls in a banking app—to collect account details or passwords. In some cases, they can even access the app’s memory to grab secret data like encryption keys or tokens, and send that information to a remote server without the user ever knowing.
In summary, a hooked app is no longer acting entirely under its developer’s control – the attacker’s hooks can manipulate or spy on nearly everything. The above risks underscore why preventing and detecting hooking is critical for any app handling sensitive data or functions. Even for less sensitive apps (like games), hooking can ruin integrity (e.g. enabling cheating). Next, we’ll discuss what hook detection means as a defense against these threats.
A rooted phone is treated this way because an employee carrying one could inadvertently allow malware to gain admin access to company resources. Mobile Device Management (MDM) solutions therefore include root detection to block rooted devices from accessing corporate email, VPNs, or files.
DRM and Protected Content — Many digital content providers (video streaming services, premium content apps) rely on device security to enforce Digital Rights Management. Rooting can undermine DRM by giving users the ability to bypass restrictions (for instance, to save streaming videos or override screenshot/recording blocks). Because of this, apps like Netflix have taken measures to disallow rooted devices from using their service.
To summarize: root detection plays a vital role in maintaining the security and integrity of Android devices. Here are several key reasons why root detection is essential:
1. Preservation of System Integrity
2. Protection Against Malicious Software
3. Safeguarding Sensitive Data
4. Maintaining a Secure Ecosystem
5. Mitigation of Exploitation Techniques
• Preventing Large-Scale Abuse: Attackers often automate hooking attacks to target many users or accounts (for example, a fraud operation hooking multiple instances of a banking app to siphon money). If the app can detect hooking, it can shut down or notify server-side systems, preventing large-scale abuse. Essentially, hook detection can turn a potential silent failure into a visible alert, prompting incident response before too much damage is done.
In summary, without hook detection, an attacker with the right tools can turn an app inside out without anyone knowing. For developers and security teams, adding hook detection is crucial to raise the bar against advanced threats. Next, we’ll look at how exactly these detection systems work in practice on Android and iOS platforms.
Many jailbreak tools install files in known locations; the presence of any of those “known jailbreak files” is a strong indicator. (Developers often maintain a list of known file paths to check, including Cydia, Substrate, SSH daemons, etc.)
Checking for Sandbox Violation (Write Test)
Under normal conditions, an app is confined to its sandbox and cannot write to system directories. On a jailbroken device, the sandbox restrictions can be lifted for apps running with root. A common detection trick is to attempt to create or write to a file in a restricted location, such as the root of the file system or /private directory. If the write operation succeeds when it should have failed, the device is jailbroken.
For example, writing a dummy file to /private/jailbreak_test.txt and then checking if it was created is a simple test – on a non-jailbroken device this operation will be denied, whereas on a jailbroken device it may succeed because the app might be running with higher privileges or outside the normal sandbox.
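That write test can be sketched as follows. The code is Java for illustration only — an iOS app would perform the same probe with FileManager or C file APIs — and the restricted path is supplied by the caller.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class SandboxCheck {
    /**
     * Attempts to create a file at a path the sandbox should forbid.
     * Returns true only if the write unexpectedly succeeds, which on iOS
     * would indicate the app is running outside its normal sandbox.
     */
    public static boolean canWriteOutsideSandbox(String restrictedPath) {
        File probe = new File(restrictedPath);
        try (FileWriter writer = new FileWriter(probe)) {
            writer.write("probe");
        } catch (IOException e) {
            return false; // Write denied: sandbox (or OS permissions) intact.
        }
        probe.delete(); // Clean up if the write somehow succeeded.
        return true;
    }

    public static void main(String[] args) {
        System.out.println("restricted write succeeded: "
            + canWriteOutsideSandbox("/private/jailbreak_test.txt"));
    }
}
```

Remember to delete the probe file on success, as shown, so repeated checks do not litter the filesystem.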
Looking for Suspicious Processes or Libraries
Many jailbreaks run background processes (like SSH daemons) or load additional dynamic libraries into apps. Developers can check the process list or loaded dylibs for known jailbreak components. One approach is to use the dyld (dynamic linker) APIs to enumerate loaded libraries in the app’s process and scan for names associated with jailbreak tools (e.g., substratelibrary.dylib, libhooker.dylib, Frida libraries, etc.). If your app finds a library with a name like “frida” or “cydia substrate” loaded into itself, that’s a red flag that the environment is compromised. However, this method can get complex and might be considered an advanced technique (since it involves low-level C APIs and string matching).
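The string-matching half of this technique is straightforward once the loaded library names have been enumerated (on iOS, via the dyld APIs `_dyld_image_count` and `_dyld_get_image_name`). Below is a hedged sketch of just the matcher, in Java for illustration; the deny-list is an assumption for demonstration.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Optional;

public class LoadedLibraryScan {
    // Name fragments associated with well-known hooking components. Illustrative.
    private static final List<String> DENY_LIST =
        Arrays.asList("frida", "substrate", "libhooker", "cydia");

    /**
     * Given the names of libraries loaded into the process, returns the first
     * one matching a known hooking component, if any.
     */
    public static Optional<String> firstSuspiciousLibrary(List<String> loadedLibraries) {
        for (String name : loadedLibraries) {
            String lower = name.toLowerCase(Locale.ROOT);
            for (String marker : DENY_LIST) {
                if (lower.contains(marker)) {
                    return Optional.of(name);
                }
            }
        }
        return Optional.empty();
    }
}
```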
Detecting Root Access or Elevated Permissions
This is more of a generic principle behind several of the above methods. If your app suddenly has access to things it shouldn’t, something is wrong. For example, try to list files in /. If you get a directory listing of the device’s root filesystem, that means the app is not properly sandboxed (indicative of a jailbreak). Apple’s security model would normally prevent that. Another indicator is the presence of symbolic links where they shouldn’t be. Some jailbreaks relocate certain folders and create symlinks (for instance, a jailbreak might symlink /Applications to a different location to make more space). Checking for known symlinks (like /Applications being a symlink instead of a real directory) can also tip you off.
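The symlink check boils down to an lstat-style query on the path in question. Java's `Files.isSymbolicLink` is used below purely for illustration; on iOS you would use `lstat` or FileManager attributes.

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class SymlinkCheck {
    /**
     * Returns true if a path that should be a real directory is actually a
     * symbolic link, a layout some jailbreaks create when relocating folders
     * such as /Applications.
     */
    public static boolean isUnexpectedSymlink(String path) {
        return Files.isSymbolicLink(Paths.get(path));
    }

    public static void main(String[] args) {
        System.out.println("/Applications is a symlink: "
            + isUnexpectedSymlink("/Applications"));
    }
}
```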
Each of these detection methods can be implemented in Swift/Objective-C and run at app startup or at strategic points. Keep in mind that none of them are 100% foolproof on their own. Thus, combining multiple checks will strengthen your jailbroken device detection. Also, be aware of false positives and ensure you’re not violating any App Store guidelines (Apple doesn’t forbid jailbreak detection, but be careful with private API usage).
Must-Haves for Sensitive Apps: We consider both class name and string obfuscation as essential baseline security measures for any application handling sensitive data or implementing critical business logic. The relatively low overhead and significant increase in analysis difficulty make them highly valuable in hindering casual attackers and raising the cost for more sophisticated ones. Implementing these techniques should be a standard practice in your mobile app development lifecycle.
Control-Flow Obfuscation: Reserved for Algorithm Protection: While control-flow obfuscation can offer a higher degree of protection against reverse engineering of specific algorithms, we believe its application should be carefully considered and generally reserved for scenarios where the application's core algorithm itself is a significant intellectual property asset.
The Challenges of Control-Flow Obfuscation
We acknowledge that control-flow obfuscation can introduce several complexities and potential issues:
Increased Integration Complexity: Integrating and configuring control-flow obfuscation tools can be more challenging compared to class and string obfuscation.
Potential for Non-Deterministic Bugs: The transformations applied by control-flow obfuscation can sometimes introduce subtle and hard-to-debug issues that may not manifest consistently.
Performance Impact: The added complexity in the control flow can potentially lead to performance overhead, impacting the application's responsiveness and battery consumption.
App Store Review Issues: Aggressive control-flow obfuscation techniques can sometimes be flagged by app store review processes due to the significant code modifications they introduce.
Our Recommendation for Algorithm Protection
If your application's core algorithm is a critical asset that requires a higher level of protection than class and string obfuscation can provide, we recommend a more targeted approach:
Isolate Sensitive Code: Move the algorithm's implementation to code written in a lower-level language like C or C++.
Separate Obfuscation: Apply robust obfuscation techniques specifically designed for C/C++ code to this isolated module.
Minimize Impact: By isolating the sensitive code, you limit the potential negative impacts of complex obfuscation on the main application codebase, reducing integration challenges, performance concerns, and the risk of introducing widespread bugs.
Challenges in Root Detection - Magisk Hide, Zygisk, Shamiko, Play Integrity Fix
Detecting root access in Android is notoriously difficult due to evolving root hiders.
Magisk Hide, MagiskHidePropsConf, Zygisk, Shamiko
The evolution of anti-root-detection tools on Android has been marked by continuous innovation to evade increasingly sophisticated detection mechanisms. Early efforts centered on Magisk, which introduced systemless root access and MagiskHide (later superseded by the Zygisk-based approach), allowing root to be selectively hidden from specific apps. MagiskHidePropsConf further enhanced evasion by modifying device properties such as build fingerprints to mimic unrooted devices, with its changelog on GitHub showing iterative updates improving compatibility and stealth. The LSPosed framework and its Shamiko module represent a newer generation of root hiding, leveraging advanced hooking techniques to mask root indicators more effectively on modern Android versions, as reflected in LSPosed’s GitHub release history.
The Age of PlayIntegrityFix Bypass
With the deprecation of Google’s SafetyNet Attestation API and the introduction of the Play Integrity API, root hiding tools faced new challenges. The Play Integrity API enforces hardware-backed device verification, making bypassing root detection more difficult without compromising the device’s Trusted Execution Environment (TEE). To address this, the Play Integrity Fix module, released in October 2023, emerged as a specialized solution to pass Play Integrity and SafetyNet verdicts by ensuring valid attestation without directly hiding root. It requires root and Zygisk-enabled environments (such as Magisk with Zygisk) and helps certify the device for Play Integrity tests, although it does not aim to hide root from other apps. This module is frequently updated to maintain compatibility with evolving Google attestation methods and device blacklists.
Together, these tools illustrate the ongoing cat-and-mouse dynamic in Android root detection and evasion. While MagiskHide and Shamiko focus on concealing root status from apps, Play Integrity Fix targets the newer Play Integrity API attestation to maintain device certification. Complementary modules like PlayIntegrityNEXT automate fingerprint updates to sustain passing attestation over time. This layered approach reflects the complexity of modern root hiding, where developers continuously adapt to Google's evolving security frameworks to preserve rooted device usability without detection.
1) Evasion and Hiding Techniques
Advanced Root Cloaking: some tools can mask the presence of common rooting artifacts (e.g., the su binary, superuser APKs), enabling a rooted device to appear “unrooted.”
Dynamic Hooking: Attackers may modify the runtime behavior of root detection methods using tools like Frida, effectively intercepting or falsifying the output of these checks.
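As an illustrative sketch (not Talsec's implementation), one simple countermeasure against dynamic instrumentation is probing frida-server's documented default TCP port on the device itself. The class and method names below are hypothetical; note this only catches default setups, since attackers can change the port or embed frida-gadget instead:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class FridaPortProbe {
    // frida-server listens on TCP 27042 by default. A refused connection is
    // the expected result on a clean device; an accepted connection means
    // something is listening where frida-server normally would be.
    public static boolean defaultFridaPortOpen() {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("127.0.0.1", 27042), 200);
            return true; // a listener answered on frida's default port
        } catch (IOException e) {
            return false; // nothing listening (or timed out): no signal
        }
    }
}
```

In practice this check would be combined with others, since a single negative result proves nothing.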
2) False Positives and False Negatives
Ambiguous Indicators: Many detection methods rely on indicators like “test-keys” in the build properties or the presence of files such as Superuser.apk. However, these indicators can sometimes be present on non-rooted or development devices, leading to false positives.
Inconsistent Results: Due to the variability of rooting methods and custom ROMs, the same detection method may work on one device but fail on another.
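To illustrate how weak these indicators are, here is a minimal sketch of the “test-keys” heuristic. On a device you would pass android.os.Build.TAGS; the helper name is hypothetical, and a hit should be treated as one weak signal to combine with others, not proof of root:

```java
public class BuildTagCheck {
    // Devices signed with AOSP test keys report "test-keys" in Build.TAGS.
    // Custom ROMs and development devices also report this, which is
    // exactly why this check alone produces false positives.
    public static boolean looksLikeTestBuild(String buildTags) {
        return buildTags != null && buildTags.contains("test-keys");
    }
}
```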
3) Diverse Android Ecosystem
OS and Vendor Modifications: Some manufacturers or custom ROM developers change system configurations or file structures, which can interfere with root detection heuristics.
4) Limited Visibility and Sandbox Restrictions
Restricted System Access: Applications operate in a sandbox, limiting their access to system-level information. This restriction is designed to protect privacy and security but also makes it harder to collect comprehensive data needed to confirm root status.
5) Rapidly Evolving Techniques
Continuous arms race: As security measures improve, rooting tools evolve in parallel to bypass them. This dynamic environment forces developers to continuously update their detection libraries to cover new bypass techniques.
6) Trade-offs Between Security and User Experience
User Impact: Some users intentionally root their devices for legitimate reasons (customization, performance tweaking, etc.). Overly aggressive detection may block these users or degrade their experience, while too lenient a policy might let malicious apps bypass security checks.
App Size: Integrating and updating multiple root detection methods (or libraries) to keep up with the latest evasion tactics can increase the APK size and maintenance complexity.
Many developers choose to integrate a professional RASP solution for robust root and jailbreak detection and more. RASP stands for Runtime Application Self-Protection: essentially an SDK you include in your app that continuously monitors for threats (such as jailbreak, rooting, hooking, debugging, or emulators) and can respond accordingly. One example is Talsec’s freeRASP library, a free-to-use mobile security SDK. According to Talsec, it can detect whether the app is running on a rooted or jailbroken device and lets you decide which action to trigger when a threat is found. It also looks for runtime hooking tools like Frida and prevents debuggers from attaching.
By using such a library, developers get a suite of checks out-of-the-box, maintained by security experts. FreeRASP provides basic protection for free, and Talsec also offers RASP+ (a paid, enterprise-grade version) which includes even more advanced detection capabilities (for example, more aggressive jailbreak hiding countermeasures, compliance reporting, etc.). These tools often come with cloud dashboards or callbacks so you can be alerted if one of your users has a jailbroken device or if an attack is detected.
Multi-Layered Checks and Deception
A clever strategy is to implement multiple layers of checks throughout the app. Instead of just one check at launch, you scatter jailbreak detection routines in different parts of the codebase (and perhaps even in time intervals). This way, if an attacker patches one check, another might still catch the device later. Some apps even implement “honey traps” – checks that are not obvious, so a jailbreak bypass tweak might fail to neutralize all of them. For example, an app might perform a jailbreak check when a certain rarely-used feature is triggered, catching the attacker off-guard. The idea is to make bypassing all your detection points tedious and prone to error. By keeping these methods updated (as new jailbreaks and bypasses emerge), you maintain an edge. This is again where using maintained RASP solutions helps; their teams update the SDK to handle new jailbreak tools or bypass tricks, so you can update your app and stay secure.
Use Tamper-Resistant Tools and SDKs: Consider leveraging specialized security SDKs or services (often called Mobile App Security or RASP solutions, such as Talsec’s freeRASP) that provide built-in hook detection and prevention. These are libraries you include in your app that continuously monitor for threats, and they often receive updates from the vendor as new threats emerge. Examples include commercial services that offer mobile app shielding or attestation. If budget allows, this can offload the heavy lifting of implementing detection from your team, and these tools typically use advanced techniques under the hood.
Regular Testing (Think Like an Attacker): Continuously test your own app’s security. Use the same tools attackers use (Frida, Xposed modules, etc.) in a controlled environment to see if your detection triggers. This can be part of your quality assurance for security. There are open-source tools and frameworks that can simulate hooking attempts; integrate those into your testing cycle. By doing this, you might discover bypasses or weaknesses in your detection before the bad guys do.
Secure the Entire Ecosystem: Hook detection in the app is important, but also consider server-side measures. For example, if your app suddenly stops sending certain heartbeat signals or attestation proofs that it normally does (perhaps because a hook disabled those), the server can flag the session as potentially compromised and limit actions. Similarly, monitor usage patterns: if someone using hooking bypasses a UI flow (things happen in the app faster or in a weird sequence not possible normally), you might catch it via analytics. This goes beyond the app code itself but is part of a holistic security approach.
By following these best practices, developers can create a robust shield against hooking. It’s about making your app a hard target – so that attackers either give up or find that any attempt to hook results in them being detected and thwarted.
Meet the Talsec Community [Apply to Join!]
Welcome to the hub of our community! This page is your go-to resource for staying connected with Talsec.
Whether you're looking for upcoming events, ways to engage on social media, or a quick overview of our key programs, you've come to the right place.
Connect with Us on Social Media
Keep up with Talsec through our social channels.
Follow us for quick updates, insights, and live event coverage.
Join our professional network for industry news and event highlights.
Join the conversation: share ideas, ask questions, and collaborate with fellow developers.
Join the conversation and be part of our growing community across all platforms!
Join the TALSEE Championship Program
Welcome to the TALSEE (TALsec SEcurity Experts) Championship Program—the premier initiative for active, high-impact Talsec community contributors focused on mobile security across all leading platforms. Whether you're working with Android, Flutter, React Native, Swift, or other mobile technologies, this is your platform to lead innovative discussions and projects that secure the future of mobile tech.
What Mobile Security Topics Will Be Explored?
As a TALSEE Champion, you'll dive into cutting-edge discussions and create impactful content addressing the latest challenges and innovations in mobile security, including:
Platform-Specific Security Challenges: Explore the unique security requirements and vulnerabilities in Android, iOS (Swift), Flutter, and React Native. Analyze platform-specific threats and share best practices for each environment.
Secure Mobile App Development: Discuss robust, secure coding practices, from encryption and secure data storage to authentication and network security, tailored for mobile applications.
Emerging Threats & Defensive Innovations: Stay ahead by addressing the latest mobile threats—from malware and ransomware to zero-day vulnerabilities—and develop innovative defense strategies.
Do you have anything else in mind? Bring it up in our community discussion.
Who Is It For?
The TALSEE Championship is exclusive and tailored for individuals who:
Have Bold Ideas: If you’re ready to drive ongoing projects beyond one-off contributions, this program is for you.
Seek Active Engagement: Participate in meetings and collaborative sessions with fellow experts and the Talsec team.
Aim to Lead: Whether you’re a seasoned professional or an emerging leader, join us to make meaningful contributions and gain visibility in the security space.
The program accepts members by invitation and initial interview only.
Exclusive Perks of Being a TALSEE Champion
Earn your Talsec Integration Professional status. An individual discount code (and link) that you can use in your projects to get better prices for Talsec products. NB: Talsec Integration Partner status and reference fees are offered to Business Entities.
High-Impact Networking: Gain exclusive access to a closed group of top security professionals and Talsec employees, offering unparalleled networking and mentorship opportunities.
Financial Incentives: Receive competitive compensation starting at $200+ per project, content, and activity, with opportunities for bonuses based on impact and reach.
What You'll Create
As a TALSEE Champion, you’re not just generating content but driving change and innovation across mobile security. Here’s what you can expect to create and contribute:
Thought Leadership Content:
Articles & White Papers: In-depth pieces that analyze platform-specific security challenges, provide actionable insights, and share success stories.
Interactive Presentations & Webinars: Live sessions to discuss trends, demonstrate techniques, and engage directly with peers and industry experts.
Your role as a TALSEE Champion goes beyond traditional content creation—you’re a catalyst for change, building a lasting impact on mobile security innovation. If you have other ideas or suggestions for topics and initiatives, we’re always ready to discuss and explore new avenues together!
How to Get Started
Submit Your Idea: Propose a topic or draft an outline that aligns with our mobile security themes.
Outline Review: Our team will review your submission, provide feedback, and ensure your topic fits our quality standards.
Compensation agreement: Let's negotiate your compensation based on the final approved outline.
Ready to Transform Your Ideas Into Impact?
If you're passionate about security and eager to lead high-impact projects, the TALSEE Championship Program is your launchpad for change. Here, your innovative ideas will spark conversation and become the foundation for transformative content that resonates across the industry.
Apply to join and take the next step in your professional journey.
Become a TALSEE champion, and let’s drive the future of security together!
Video Injection
For many KYC (Know Your Customer) vendors, video stream injection is the "final boss" of fraud. It’s the process of bypassing a smartphone’s physical camera sensor to feed pre-recorded or AI-generated deepfakes directly into the application's media pipeline.
If successful, an attacker can register thousands of fraudulent accounts using stolen identities without ever showing their real face.
How Is Video Injected
Attackers typically use three main vectors:
Hooking: Using LSPosed or VCAM modules to intercept Camera API calls and swap the live feed for a file like virtual.mp4.
Emulators: Running the app in BlueStacks or Nox and using OBS VirtualCam to map a PC video feed as the "phone camera".
Automation: Using the Appium framework to script the entire KYC process, often utilizing plugins that instrument the app to inject images.
The Solution: Talsec's Defensive Mapping
Because these tools require specific "illegal" environments to function, Talsec’s core features act as a multi-layered filter that stops the injection before the camera even opens.
The mapping below pairs each threat vector with the relevant Talsec feature and an explanation of why it works.
*This information can be securely evaluated on the customer backend endpoint if Talsec AppiCrypt is used as well for enhanced security
Talsec's Commitment to Comprehensive Security
While Talsec doesn't directly provide control-flow obfuscation for the main application code due to the aforementioned complexities, we are committed to offering our partners a holistic security solution.
We can recommend and facilitate integration with reliable third-party tools that specialize in obfuscation, enabling you to effectively protect your most critical algorithms without compromising the stability and maintainability of your primary application code.
Talsec Obfuscation Solutions
The overview below pairs each obfuscation method and the protection it provides with its advantages, examples of protected assets, platform-specific keywords or settings (Android, iOS, Flutter), and the corresponding Talsec solution.
Talsec RASP+ and AppiCrypt for Apple TV Apps
Talsec SDK brings advanced protection already available on iOS or Android to Apple TV apps, helping streaming providers and TV app makers keep premium content, user accounts, and revenues safe across the entire Apple TV experience.
Talsec is bringing its premium Runtime Application Self-Protection (RASP) SDK to Apple TV, enabling developers to secure apps running in the Apple TV app ecosystem where users access Apple Originals, live sports, premium channels, and thousands of transactional titles in one place. Talsec ensures that your app stays protected from emerging big‑screen threats. The SDK covers key threat vectors such as debugging and runtime tampering, jailbroken or simulated environments, app integrity and distribution abuse.
Why Apple TV apps need protection
Apple TV apps operate in a high-value environment that concentrates premium content, subscription entitlements, and transactional workflows into a single execution surface. This makes them a prime target for attacks such as debugging, runtime manipulation, and signature bypassing aimed at disabling in-app controls, extracting credentials, or unlocking paid content.
In addition to runtime attacks, Apple TV apps are frequently targeted by geo-evasion techniques. VPNs, Smart DNS services, and location spoofing are commonly used to bypass territorial content licensing restrictions, access unavailable regional catalogs, or exploit regional pricing differences. For streaming providers operating under strict country-based licensing agreements, this creates direct compliance risks, revenue leakage, and potential contractual violations.
Jailbroken or simulated devices, unofficial distribution channels, altered device identities, and anonymized network environments further enable large-scale piracy, account sharing abuse, and entitlement fraud.
Use-Cases
By employing Talsec’s security, you can safely monetize:
Subscription and channel access, including add-on services and family sharing
Transactional libraries for buy or rent content
Live sports packages, ad-funded experiences, and hybrid business models across devices and living-room screens
Talsec’s RASP for Apple TV delivers proven benefits for media creators, including content protection against piracy, user and account security to prevent takeovers, revenue protection for subscriptions and ad impressions, licensing compliance, and brand reputation.
How Talsec protects Apple TV apps
Talsec embeds self-defense directly into the app, enabling real-time detection and reaction to attacks on tvOS devices. Device integrity checks identify compromised or simulated environments, while runtime protections prevent debugging, tampering, and unauthorized modification.
Talsec also detects VPN usage, anonymization services, and network manipulation techniques at runtime, allowing Apple TV apps to enforce territorial licensing rules before entitlement validation or playback begins.
API integrity via AppiCrypt secures communication between Apple TV apps and backend services, ensuring that only genuine, untampered clients operating in compliant environments can access APIs and request sensitive operations.
Optimize DRM Costs
Managing a multi-DRM strategy with Apple FairPlay Streaming often creates a significant OPEX burden. The requirement to maintain a rigid Key Security Module (KSM) and pay recurring license delivery fees to third-party DRM vendors drains engineering and operational resources.
Traditional Digital Rights Management has critical blind spots. While it ensures HDCP compliance for video output, it remains blind to the device’s runtime health, network context, and location integrity. It often fails to prevent stream ripping on jailbroken Apple TVs or misuse via VPN-enabled cross-border access. Encryption protects the data pipe but does not validate the secure playback environment, leaving premium content vulnerable.
Talsec Runtime Application Self-Protection (RASP) improves OTT security by blocking insecure or non-compliant devices before they trigger costly DRM license issuance. By preventing playback on jailbroken devices, compromised runtimes, or unauthorized geographic locations, Talsec reduces DRM license waste and strengthens anti-piracy enforcement without the heavy integration overhead of standalone FairPlay deployments.
Business impact and next steps
For streaming services, sports leagues, broadcasters, and aggregators building for Apple TV, Talsec helps protect high-margin content, enforce licensing compliance, and control operational costs by preventing incidents rather than reacting to them.
To learn more about securing your Apple TV apps with Talsec, reach out to our experts.
Obfuscation
Explore what obfuscation is, how different obfuscation methods protect mobile apps, and how Talsec’s pragmatic approach balances security, performance, and developer experience.
This article will delve into the concept of obfuscation, explore its different types, and articulate Talsec’s philosophy on its application. We believe in a balanced and pragmatic approach that prioritizes developer experience, app performance, and resistance to known attack techniques, while minimizing potential drawbacks and keeping costs efficient, to ensure both security and the smooth business operation of your mobile applications.
How Hook Detection Works
Hook detection mechanisms can differ between Android and iOS due to the differences in their operating systems. Below, we break down how hook detection typically works on each platform.
For Android
On Android, hooking often relies on tools that inject code into apps or modify the Android runtime (often requiring root access). Thus, Android apps employ a variety of methods to detect such interference:
Opening Keynote: Safety/Security Equilibrium with Sergiy Yakymchuk (Talsec)
The Mobile App Security Conference in Prague was a two-day, invite-only event on fraud, malware, and API abuse in modern mobile apps, held at Chateau St. Havel on November 3–4, 2025, and hosted by Talsec, freeRASP, and partners. It brought together leading experts and practitioners to strengthen the mobile AppSec community, connect engineers with attackers and defenders, and share practical techniques for high‑stakes sectors like banking, fintech, and e‑government.
In the rapidly evolving landscape of mobile technology, a striking paradox has emerged: while global investment in cybersecurity continues to grow exponentially, the financial losses attributed to cybercrime are rising even faster. This disconnect suggests a fundamental flaw in our current approach to digital protection.
During a recent industry keynote, Sergiy Yakymchuk, co-founder of Talsec, challenged the community to look beyond engineering-driven solutions and address the subjective core of the problem: the human perception of safety.
The Engineering Bias and the Mobile Shift
Historically, cybersecurity expertise has been deeply rooted in infrastructure, network, and perimeter security. However, the last decade has seen a massive migration toward mobile-first applications, often leaving these new environments vulnerable.
freeRASP for Unity Guide
Protect your Unity mobile game with freeRASP, a free and developer-friendly runtime application self-protection solution for Android and iOS.
🔐 Level Up Your Game’s Security with freeRASP for Unity
In today’s mobile landscape, securing your game isn’t just about protecting profits—it’s about preserving your players’ trust and ensuring fair gameplay. Whether you’re an indie developer or a full-blown game studio, runtime threats like app tampering, emulators, rooting, and unauthorized modifications are real dangers that can undermine your game’s performance and credibility.
That’s where freeRASP for Unity (Android + iOS) comes in.
Checking for Hooking Frameworks: The app can programmatically look for signs of popular hooking frameworks like Frida or Xposed. For instance, it might scan running processes and loaded libraries for names associated with Frida (such as a process listening on Frida’s default port, or the presence of libfrida libraries in memory). Similarly, it might try to detect Xposed by searching for known Xposed classes (like de.robv.android.xposed.XposedBridge) or files that the Xposed framework leaves on the system. If any of these indicators are found, the app concludes a hook might be present.
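The loaded-library side of this check can be sketched in Java by scanning the process's own memory map for modules belonging to known instrumentation frameworks. This is an illustrative sketch, not Talsec's implementation; the marker list is a hypothetical minimum, and real detectors cover many more names and also inspect running processes:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class HookArtifactScan {
    // Substrings commonly seen in the paths of injected instrumentation
    // libraries (e.g. libfrida-agent.so, XposedBridge). Illustrative only.
    private static final String[] MARKERS = { "frida", "xposed", "substrate" };

    // Reads /proc/self/maps, which lists every module mapped into this
    // process, and flags any entry matching a known framework marker.
    public static boolean suspiciousModuleLoaded() {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/self/maps"))) {
            String line;
            while ((line = r.readLine()) != null) {
                String lower = line.toLowerCase();
                for (String m : MARKERS) {
                    if (lower.contains(m)) return true;
                }
            }
        } catch (IOException ignored) {
            // /proc unreadable: treat as inconclusive rather than hooked
        }
        return false;
    }
}
```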
Runtime Integrity Checks: The app can perform self-checks on its code at runtime. This could involve verifying checksums or hashes of critical code segments to ensure they haven’t been modified in memory. If a hooking tool has patched or overwritten some instructions (as hooks often do), these integrity checks might catch the unexpected changes.
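The hashing half of such a self-check can be sketched as follows. This is a simplified illustration: the baseline digest would be computed at build time and embedded in the app (which is omitted here), and on Android the code lives in DEX files or native segments rather than a loose .class resource, so a real implementation hashes those instead:

```java
import java.io.InputStream;
import java.security.MessageDigest;

public class IntegrityCheck {
    // Computes a SHA-256 digest of this class's own bytecode as loaded
    // from the classpath. A detector would compare the result against a
    // known-good value baked in at build time.
    public static byte[] digestOfOwnClass() throws Exception {
        String res = IntegrityCheck.class.getName().replace('.', '/') + ".class";
        try (InputStream in = IntegrityCheck.class.getClassLoader().getResourceAsStream(res)) {
            if (in == null) return null; // unusual class loader: inconclusive
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) md.update(buf, 0, n);
            return md.digest(); // 32-byte SHA-256 of the loaded bytecode
        }
    }
}
```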
Anti-Debugging Measures: Many hooking tools use debugging techniques under the hood (they attach to the app process similarly to how a debugger would). To counter this, apps implement anti-debugging. Techniques include detecting if the process is being debugged (using system calls like ptrace on Android), or if suspicious system flags are set that indicate an attachment. If the app notices it’s under a debugger or instrumentation when it shouldn’t be, it can assume a hooking attempt (since Frida often uses a JDWP debugger or ptrace mechanism). Some apps even deliberately perform slight “delays” or check execution timing, since a hooked environment may respond slightly slower due to the instrumentation.
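One concrete, widely used variant of this check on Linux/Android is reading the TracerPid field from /proc/self/status, which the kernel sets to the PID of any process attached via ptrace. The sketch below (class name hypothetical, not Talsec's code) shows the idea; note that sophisticated hiders can spoof this value, so it is one signal among several:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class DebuggerCheck {
    // /proc/self/status contains a line "TracerPid:\t<pid>". A non-zero
    // value means some process (debugger, strace, Frida's injector) is
    // currently attached to us via ptrace.
    public static boolean beingTraced() {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/self/status"))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("TracerPid:")) {
                    return Integer.parseInt(line.substring("TracerPid:".length()).trim()) != 0;
                }
            }
        } catch (IOException | NumberFormatException ignored) {
            // /proc unavailable or malformed: inconclusive, assume untraced
        }
        return false;
    }
}
```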
Environment and Device Checks: A lot of hooking on Android requires root access. So apps often integrate root detection as part of hook detection. They check for root indicators (presence of su binary, known root app packages, modified system paths). Additionally, they check if the app is running in unusual environments: for example, inside an emulator or a virtual space (there are tools that allow hooking without root by running the app in a modified virtual environment). If the app detects it’s in an emulator or sees signs of frameworks like VirtualXposed (which allows Xposed modules without root), it can raise a red flag. Google’s SafetyNet/Play Integrity API results can also inform this – if the device fails integrity checks, the app may suspect that hooking tools could be active and be extra vigilant.
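The root-indicator portion of these environment checks can be sketched with a simple filesystem probe for the su binary. The path list below is an illustrative subset (class name hypothetical); production detectors check many more locations plus known root-manager package names and system properties, and root cloakers specifically target this kind of check:

```java
import java.io.File;

public class RootIndicators {
    // Common locations where rooting tools install the su binary.
    // Illustrative subset only; cloaking tools can hide these paths.
    private static final String[] SU_PATHS = {
        "/system/bin/su", "/system/xbin/su", "/sbin/su",
        "/system/sd/xbin/su", "/data/local/xbin/su", "/data/local/bin/su"
    };

    public static boolean suBinaryPresent() {
        for (String p : SU_PATHS) {
            if (new File(p).exists()) return true;
        }
        return false;
    }
}
```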
Monitoring System Calls and APIs: Advanced implementations may monitor certain sensitive API calls that hooking tools use. For example, on Android, a hooking tool might use the ptrace system call to attach to the app, or it may call functions to load custom libraries. An app can attempt to detect if those actions occur during its runtime. While an app can’t easily block these low-level calls (without OS support), noticing them can be a sign of intrusion.
Once any of these methods detects something fishy, the app can take action. Common responses on Android include: immediately closing the app (to prevent further tampering), logging the event (for analytics or forensic purposes), or disabling sensitive features (e.g. not allowing login or transactions if hooking is detected). Some security-conscious apps display a warning like “Untrusted environment detected” and exit, which is a direct result of these checks.
For iOS
On iOS, hooking is less common in non-jailbroken devices due to Apple’s strict app sandbox and code signing requirements. However, in jailbroken devices (where a user has removed iOS restrictions), hooking becomes possible using tools like Cydia Substrate or Frida. Therefore, iOS hook detection often starts with jailbreak detection and then goes into deeper checks:
Jailbreak Detection: Since an iOS app normally shouldn’t be able to have new code injected at runtime (unless the device is jailbroken), the first step is checking if the device is compromised. The app checks for signs like the existence of Cydia (the jailbreak app store), the ability to write to areas of the filesystem that should be protected, or the presence of known jailbreak files and processes. If the device is jailbroken, the app assumes the security model is broken and hooking is possible, and may choose to not run or run in a restricted mode.
Detecting Injected Libraries: iOS hooking frameworks (like Substrate or Frida’s Gadget) work by injecting dynamic libraries (dylibs) into apps. An app can programmatically list the dynamic libraries loaded in its process (using APIs like dyld functions). If it finds a library that is not part of iOS or the app itself, that’s a big warning sign. For example, if something like FridaGadget.dylib or any library residing in /Library/MobileSubstrate/ is loaded, the app is likely hooked. Apps can maintain a list of expected module names and paths, and flag any anomalies.
Function Pointer Verification: iOS developers can perform checks on critical functions to see if they’ve been tampered with. For example, iOS apps might verify that certain Security or Crypto functions are still pointing to Apple’s original implementations. If a hook (via Substrate or the fishhook technique) has rerouted those functions to attacker code, the addresses or hashes won’t match the known good values. This is a more technical method, but it can catch cases where an attacker patches functions in memory. Essentially, the app knows what the start of a function is supposed to look like; if it’s been overwritten with a jump to elsewhere, that indicates a hook.
Anti-Debug and Anti-Attach: Just as on Android, an iOS app can detect debugging attempts, for example via sysctl, by inspecting the P_TRACED flag in its own process information. iOS apps can also call ptrace with the PT_DENY_ATTACH option to proactively prevent debuggers (which also blocks some hooking techniques that rely on debugger attachment). If an attacker cannot attach a debugger or instrument easily, many hooking attempts are thwarted. Some apps repeatedly check whether a new process has attached and immediately terminate if one is detected.
Timing and Behavior Checks: This is less common on iOS due to the performance overhead, but conceptually an app could perform operations that should be quick and see if there’s an unusual delay or behavior (which might happen if a hook is intercepting calls). Also, the app might check if certain system calls return expected values; if a hook is faking responses (like returning false for “Is this device jailbroken?” when it actually is), a clever app might cross-verify via multiple methods to catch the lie.
In practice, iOS relies heavily on the assumption that if the device isn’t jailbroken, hooking is nearly impossible. So many iOS apps simply don’t worry about hooking beyond jailbreak detection. But for high-security apps that must even consider malicious insiders with jailbroken devices, the above measures add extra layers of defense. If a hook is detected, iOS apps often react similarly to Android ones: shut down or disable functionality. For instance, some banking apps on iOS will instantly close if they sense any jailbreak/hook to protect the user’s account.
Cross-Platform Security Best Practices: Investigate how to build resilient, secure applications across multiple mobile platforms, ensuring consistent protection while maintaining performance and usability.
Incident Response & Forensics in Mobile: Share strategies and case studies on effective incident response, breach recovery, and mobile forensic analysis to mitigate and learn from security incidents.
Compliance & Regulatory Considerations: Dive into the evolving landscape of mobile security compliance, exploring how regulatory standards impact development and security practices across mobile ecosystems.
API Protection technologies: App and device attestation, authentication and authorization, and API security threats (bots, scraping, and more).
Build your professional portfolio, shape industry standards, and enjoy dedicated support from idea submission and outline review to final publishing on our trusted security platform.
Video Tutorials & Podcasts: Multimedia content that makes complex mobile security topics accessible across Android, iOS, Flutter, React Native, and more.
Impactful Initiatives Beyond Content:
Innovative Ideas & Community Building: Cultivate new thoughts and initiatives that spark dialogue, inspire change, and encourage the formation of dedicated security communities.
Collaborative Projects: Drive group projects and partnerships beyond a single piece of content—think hackathons, workshops, or security challenges that unite the community.
Strategic Initiatives: Launch campaigns or programs that address emerging mobile security issues, setting the stage for long-term impact in the industry.
Open Dialogue and Brainstorming:
Idea Exchanges: Be part of a continuous brainstorming environment where fresh ideas are welcomed, discussed, and developed collaboratively.
Feedback Loops: Engage in dynamic discussions with fellow champions and Talsec experts to refine concepts and maximize their impact.
Draft & Feedback: Develop your complete draft and incorporate constructive feedback from our editorial team.
Publish & Promote: Once finalized, your work will be published on our platform and promoted through our channels.
Receive Compensation: Enjoy competitive payment and the satisfaction of contributing valuable insights to the mobile security community.
Flutter Plugin Attack: Mechanics and Prevention - Jaroslav Novotný — Senior Flutter Developer
Flutter Security 101: Restricting Installs to Protect Your App from Unofficial Sources — Marco Galetta, Senior Software Engineer
How to Block Screenshots, Screen Recording, and Remote Access Tools in Android and iOS Apps — Tomáš Soukal, Senior Mobile Security Developer
How to implement Secure Storage in Flutter? — Lucas Oliveira, Apple technologies, Flutter, Developer Mentorship
User Authentication Risks Coverage in Flutter Mobile Apps — Himesh Panchal, Web and Tech enthusiast, working with Flutter since its 1.0, mobile app CI/CD workflows, sharing knowledge with the developer community
dash_crypt: encryption and decryption package for Flutter and Dart — Ahmed Ayman, Senior Flutter Mobile Developer
Published Articles
Open Content Topics
Simple Root Detection: Implementation and verification
Hook, Hack, Defend: Frida’s Impact on Mobile Security & How to Fight Back
Country-based content licensing enforcement through detection of VPNs, Smart DNS services, and location spoofing
Each injection vector below maps to the Talsec feature that counters it:
LSPosed with VCAM Module (Root & Hook Detection): VCAM requires a rooted device (Magisk) and an active hooking framework (LSPosed/Frida) to function. Talsec can kill the session the moment it sees these artifacts.
Emulators (BlueStacks) + OBS (Emulator Detection): Injections via OBS happen at the virtualization layer. Talsec detects common emulators and can block the app entirely.
Appium Framework (Automation Detection): Appium leaves traces in the uiautomator service and often requires ADB/Developer Options to be enabled, both of which Talsec detects.
Repackaged Testing Builds (App Integrity Checks): Attackers sometimes re-sign the APK to disable security for automation. Talsec’s signature and binary integrity checks prevent these modified builds from running.
What is Hook Detection?
Hook detection refers to the defensive techniques and mechanisms that a mobile application (or the platform it runs on) uses to recognize when it is being hooked or tampered with at runtime. In other words, it’s like an alarm system that alerts the app if unauthorized code is trying to latch onto it or modify its behavior. When an app has hook detection measures, it actively looks for signs that a hooking tool or framework is present, either in the device environment or within the app’s process.
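One concrete signal such a system can look for is the presence of hooking-framework libraries loaded into the app’s own process. The sketch below is a minimal, hypothetical illustration of this single check, not Talsec’s implementation; the marker list is far from exhaustive, and real detectors combine many more signals.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class HookDetector {
    // Names of libraries commonly loaded by hooking frameworks.
    // Illustrative only; real detectors use many more signals.
    static final List<String> HOOK_LIBRARY_MARKERS =
            List.of("frida", "xposed", "lsposed", "substrate");

    // Pure check (testable off-device): does a /proc/self/maps dump
    // mention any known hooking library?
    public static boolean containsHookArtifacts(String mapsContent) {
        for (String line : mapsContent.split("\n")) {
            String lower = line.toLowerCase();
            for (String marker : HOOK_LIBRARY_MARKERS) {
                if (lower.contains(marker)) return true;
            }
        }
        return false;
    }

    // Live check on Android/Linux: inspect the current process's memory map.
    public static boolean isProcessHooked() {
        try {
            String maps = new String(Files.readAllBytes(Paths.get("/proc/self/maps")));
            return containsHookArtifacts(maps);
        } catch (IOException e) {
            return false; // map unreadable; treat as no evidence of hooking
        }
    }
}
```

An app would typically run such checks periodically and combine them with other signals (open ports, suspicious threads) before reacting.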
Secret storage and retrieval: API keys, sensitive URLs, and configuration values are fetched at runtime, e.g. Talsec.getSecret("apiKey01")
Runtime Application Protection: protection at runtime against debugger attach and runtime manipulation (e.g., Frida hooking)
Dynamic Secret Provisioning: secrets are provisioned dynamically in a secure way — remotely pushed API keys and sensitive URLs, plus on-demand provisioning of large assets and media from the backend, built on AppiCrypt-strengthened HTTP communication and a remotely managed SecretVault
Compounding this shift is a persistent "engineering bias"—a tendency for developers to focus on solvable, predictive technical problems rather than the messy, unpredictable reality of human behavior. Despite sophisticated systems, statistics show that the majority of security breaches are still caused by human error.
Security vs. Safety: A Subjective Divide
One of the most critical distinctions raised is the difference between objective security and the subjective feeling of safety.
Objective Security: The technical, often mathematical, measures taken to protect a system.
Subjective Safety: An individual's personal perception and feeling of being secure.
Yakymchuk argues that for a security product to provide true value, it must serve as a precondition for this feeling of safety. When a system becomes too restrictive or surveillance-heavy in the name of security, it can lead to "overkill," causing users to abandon the service or find insecure workarounds, such as writing passwords on paper.
The Digital Social Contract
The balance between freedom and security is an age-old concept often defined by a "social contract". In the physical world, citizens may trade certain rights to a government in exchange for protection.
In the digital world, however, this contract is often fragmented and opaque. Users "sign" individual contracts with every service they use, frequently without reading the lengthy terms and conditions. Recent silent updates to terms regarding AI training on platforms like LinkedIn highlight the lack of transparency in how these digital contracts are managed.
Building for a "Safe" Future
For companies like Talsec, the goal is to move beyond being a mere "cost line in a budget" to becoming a "safe choice" for CTOs and developers. Achieving this requires a deeper understanding of what truly provides users with a sense of safety in a world of diverse digital "fortresses"—from the rigid ecosystem of the "iOS Kingdom" to the more varied "Android Union".
Ultimately, the cybersecurity industry must ask: are we solving the right problem? By centering the human experience and the subjective need for safety, developers and architects can begin to bridge the gap between technical resilience and user trust.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
- An advanced security SDK (Android & iOS) that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
- A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
- Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Talsec’s freeRASP solution is now natively available for Unity—a game-changer for developers targeting Android and iOS platforms. This early-release plugin empowers teams to effortlessly integrate robust runtime threat detection directly into their Unity-based games.
⚙️ Easy Integration, Powerful Protection
The official Unity plugin streamlines the setup process:
📦 Drag-and-drop installation via .unitypackage
🔐 Custom configuration for Android (via certificate hashes and package names) and iOS (using team identifiers and bundle IDs)
🔁 Real-time threat detection callbacks integrated into your Game.cs or entry-point logic
This makes it simple to respond to threats like:
Emulator use
VPN or system proxy tunneling
Rooting or jailbreaking
Debugging and hooking attempts
Tampering, screen recording, or app repackaging
With built-in detection callbacks, you can proactively respond to these threats by triggering in-game behaviors or alerts that protect your app integrity.
🚀 Why It Matters for Game Developers
While Unity makes multiplatform publishing easier than ever, it also exposes your game to a wide surface area of potential exploits. Mobile cheaters and malicious actors often use rooted devices, simulators, or altered app binaries to bypass normal game logic or manipulate in-game currencies.
freeRASP gives you the armor your game deserves, acting like a security co-pilot as your players navigate your game world.
💡 Ready to Fortify Your Game?
Visit the to get started with installation, configuration, and callback integration. There’s no need to compromise between performance and protection—freeRASP for Unity delivers both.
TechTalk: Predictive Apps Protection with Sergiy Yakymchuk (Talsec)
The Talsec Mobile App Security Conference in Prague was a two-day, invite-only event on fraud, malware, and API abuse in modern mobile apps, held at Chateau St. Havel on November 3–4, 2025, and hosted by Talsec, freeRASP, and partners. It brought together leading experts and practitioners to strengthen the mobile AppSec community, connect engineers with attackers and defenders, and share practical techniques for high‑stakes sectors like banking, fintech, and e‑government.
In a recent presentation, Sergiy Yakymchuk, CEO of Talsec, discussed the evolution of mobile application security, moving beyond simple detection toward a framework of predictive safety. He outlined three essential pillars that form the foundation of a "safe choice" for vendors and end users alike: collective defense, predictability, and user awareness.
1. Collective Defense: The "Forest Metaphor"
Safety is fundamentally intuitive; humans feel more secure when facing a common enemy together. Effective digital defense is like a forest where trees communicate threats via a fungal network. When one part of the forest is attacked by pests or disease, other trees receive messages and generate protective chemicals.
Talsec applies this principle through a community-driven model:
Vulnerability Sharing: Feedback from one user regarding a system vulnerability helps improve the platform for all.
Data-Driven Improvements: Using data from community versions of tools, such as freeRASP (Runtime Application Self-Protection), the system can identify false positives and conflicts with new OS updates across various devices.
Structured Responsibility: Despite being a collective effort, there must be clear boundaries and mechanisms to define who is responsible for data and specific actions in various scenarios.
2. Predictability through Real-Time Risk Scoring
The ability to predict potential threats is a precondition for safety. In the context of business applications, this means delivering features that can anticipate anomalies and calculate risk scores before a transaction is completed.
Technological solutions supporting this include:
AppiCrypt: An SDK-based technology that creates a "cryptographic snapshot" of a device's security state. This cryptogram is verified on the backend in real time to detect threats like debuggers or simulators.
Device State API: A newer API-driven approach that allows businesses to check a device's security state at any time, independent of whether the user is currently using the app.
Contextual Risk Scoring: Moving beyond simple integrity checks, Talsec is developing systems that consider contextual data, such as transaction amounts, recipient information, and location history to identify anomalies.
3. Awareness: Bridging the Gap with AI
The final pillar is awareness, ensuring that users and organizations have the information they need to defend themselves.
Global Benchmarking: Talsec provides global statistics on threats, such as the prevalence of rooted devices, cloned apps, and integrity problems across different countries, allowing CTOs and CISOs to make informed decisions.
AI-Enhanced Communication: Talsec is utilizing AI to translate technical signals (e.g., "device is in developer mode") into human-understandable language. This allows the platform to explain why a specific device state is risky and provide tailored advice based on the user's profile or culture.
An example use case involves blocking sensitive entry fields, such as credit card numbers, until a device security check is passed. If a high risk is detected—such as an SMS forwarder app that could steal an OTP—the user is informed through an AI-generated summary of exactly what they need to do to secure their device and proceed safely.
By combining these three pillars, organizations can move toward a more resilient and transparent security model that protects both the business and the end user.
Keynote: Cloudflare for AppSec with Anatol Nikiforov (Cloudflare)
Anatol Nikiforov of Cloudflare presented a keynote on the rapidly evolving landscape of Application Security (AppSec), focusing on how AI and the rise of residential proxies are creating sophisticated new challenges for cybersecurity providers.
The Exponential Speed of Change
Technological adoption is accelerating at an unprecedented pace. While mobile phones took 15 years to reach 100 million users and Facebook three and a half years, ChatGPT achieved the same user base in just two months. This rapid adoption is transforming cybersecurity, particularly in the domain of automated traffic and bots.
Identifying bots has become increasingly complex. Modern bots are far more sophisticated than earlier versions, which relied on IP addresses and contextual clues:
AI Amplification: Advanced bots leverage AI to learn and evade detection.
Evasion: Bots bypass visual challenges such as CAPTCHAs using computer vision and rapidly change IP addresses and behaviors.
Scale of Attack: Bot attacks are growing exponentially. The largest Distributed Denial-of-Service (DDoS) attack recorded last year reached 4 terabits per second; recently, a 29.7 terabit-per-second attack was mitigated, more than a seven-fold increase.
The Rise of Botnets and Residential Proxies
The 29.7 Tbps attack originated from a botnet called Aisuru, which evolved from the Mirai botnet. Aisuru has compromised over a million devices, primarily IoT devices such as routers and CCTV cameras.
Botnets have shifted their business model from offering "botnet as a service" to providing residential proxies.
Residential Proxy Overview
Residential proxy operators infect devices, such as users’ routers, and sell the devices’ network capacity on the darknet. This enables attackers to launch attacks from legitimate residential IP addresses that carry reputational trust online. Malicious activity typically uses a small fraction of the user’s bandwidth (e.g., 7–10%) to avoid detection.
Device infection often occurs through N-day attacks, exploiting known vulnerabilities for which patches exist but have not been applied.
The Scale of the Residential Proxy Business
Residential proxies are not exclusive to cybercriminals. Legitimate companies also operate residential proxy services, selling access to IP addresses for advertising or AI model training. Users often unknowingly permit up to 10% of their internet traffic to be used via agreements in app terms, such as free VPNs or ad blockers. Globally, approximately 250 million IP addresses participate in the residential proxy ecosystem.
Combating Evasive Bots with Personalized Security
Identifying residential proxies is challenging, even with advanced machine learning models, because malicious requests originate from legitimate IP addresses. Bots often rotate IPs per request, preventing detection based on repeated requests from a single IP.
Cloudflare developed ML version 8 (ML8) to differentiate residential proxies by analyzing individual requests rather than IP addresses alone. Key outcomes from ML8 implementation include:
Detection of 17 million new residential IP addresses per hour during the initial rollout.
Identification of 95% of evasive attacks previously difficult to detect.
A 20% increase in bot detections from cloud providers such as AWS and DigitalOcean, leveraging behavioral signals rather than IP alone.
Personalized security further enhances protection by tailoring machine learning models to specific enterprise traffic patterns. This approach addresses variations in what constitutes abuse for different organizations.
The personalized security process includes:
Dynamic Baseline: Establishing normal traffic patterns over time, accounting for seasonality or release spikes.
Identifying Anomalies: Detecting abnormal behavior specific to a website or application, such as methodical scraping of gaming data.
Automated Rules: Flagging anomalous bots and automatically generating rules to adjust bot scores for the specific customer.
Beta tests with five enterprise customers showed that personalized security detected 34% more abuses than traditional methods.
Web Bot Authentication: Whitelisting Legitimate Bots
Legitimate AI agents, or "agentic bots," require secure authorization for actions such as making transactions on behalf of users. Visa and Mastercard implemented Web Bot Authentication to authorize these bots using cryptographic verification.
Requirements for agentic bots include:
Registration: Bots register on the platform and provide cryptographic keys.
Signature and Nonce: Each request contains a signature and a nonce to prevent replay attacks.
Key ID, Timestamp, and Tag: Requests include a key ID, a timestamp, and an intent tag (e.g., purchase or browse).
This system operates without infrastructure changes for Visa or Mastercard. OpenAI has agreed to implement it, and Cloudflare provides an SDK for developers using Cloudflare Workers to build agentic AI applications compliant with these security protocols.
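The signature-plus-nonce pattern described above can be sketched as follows. This is a hedged illustration of the general idea, not the actual Web Bot Authentication wire format: the class name, field names, and message layout are invented for the example, and Ed25519 is chosen only because it is a common signature scheme available in modern JDKs.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.UUID;

public class AgenticBotAuth {
    // Illustrative request envelope: signature covers key ID, nonce,
    // timestamp, intent tag, and payload.
    public record SignedRequest(String keyId, String nonce, long timestamp,
                                String tag, String payload, byte[] signature) {}

    static byte[] message(String keyId, String nonce, long ts, String tag, String payload) {
        return (keyId + "|" + nonce + "|" + ts + "|" + tag + "|" + payload)
                .getBytes(StandardCharsets.UTF_8);
    }

    // Helper for the example: generate a registered bot key pair.
    public static KeyPair newKeyPair() {
        try {
            return KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // The bot signs each request with its registered private key; the
    // fresh nonce and timestamp are what prevent replay of a captured request.
    public static SignedRequest sign(String keyId, String tag, String payload, PrivateKey key) {
        try {
            String nonce = UUID.randomUUID().toString();
            long ts = System.currentTimeMillis() / 1000;
            Signature sig = Signature.getInstance("Ed25519");
            sig.initSign(key);
            sig.update(message(keyId, nonce, ts, tag, payload));
            return new SignedRequest(keyId, nonce, ts, tag, payload, sig.sign());
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // The platform verifies the request against the public key registered
    // for that key ID; any tampering invalidates the signature.
    public static boolean verify(SignedRequest req, PublicKey key) {
        try {
            Signature sig = Signature.getInstance("Ed25519");
            sig.initVerify(key);
            sig.update(message(req.keyId(), req.nonce(), req.timestamp(), req.tag(), req.payload()));
            return sig.verify(req.signature());
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A replay-protection cache of recently seen nonces on the verifying side would complete the scheme; it is omitted here for brevity.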
Thank you Anatol and team for sharing insights on combating modern AI-amplified bots and the evolving cybersecurity landscape. The detailed analysis of botnets, residential proxies, and the exponential scale of attacks provides valuable context for understanding today’s threats.
Deconstructing Obfuscation: Three Key Types
The concept of obfuscation can be broadly categorized into three distinct types, each targeting different aspects of the application's code:
A) Name Obfuscation for Classes, Methods, and Fields
This type of obfuscation focuses on renaming the classes, interfaces, methods, and fields within the application's code to meaningless and often short identifiers. Instead of descriptive names like UserManager, authenticateUser, or userPassword, these elements might be renamed to something like a, b, or c.
Key Concepts
Renaming: The core mechanism involves replacing meaningful names with arbitrary strings.
Reduced Readability: This significantly hinders an attacker's ability to understand the purpose and functionality of different code components simply by examining their names. It breaks the semantic link between the code and its intended behavior.
Limited Complexity: Name obfuscation is generally the least complex type of obfuscation to implement and has minimal impact on the application's performance or stability.
Example
Consider a class responsible for handling user sessions:
After class name obfuscation, this might become:
While the underlying logic remains the same, the renamed elements provide no clues to an attacker about the class's purpose or the functionality of its methods and fields.
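A hypothetical before/after sketch of such a rename (class and member names are invented for illustration):

```java
// Before obfuscation: names reveal intent.
class UserSessionManager {
    private String sessionToken;

    boolean authenticateUser(String username, String password) {
        // Simplified for illustration; not a real authentication scheme.
        sessionToken = username + ":" + password.hashCode();
        return sessionToken != null;
    }
}

// After name obfuscation: identical logic, meaningless names.
class a {
    private String b;

    boolean c(String d, String e) {
        b = d + ":" + e.hashCode();
        return b != null;
    }
}
```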
Talsec offers a feature that verifies this basic obfuscation technique has been applied and triggers a security threat signal if it was skipped at build time.
B) String Obfuscation
String obfuscation focuses on concealing string literals embedded within the application's code. These strings can often reveal sensitive information, such as API keys, certificates, URLs, error messages, or even business logic. By obfuscating these strings, you prevent attackers from easily extracting valuable insights or identifying critical parts of your application.
Key Concepts
Encoding and Encryption: String obfuscation typically involves encoding or encrypting the string literals within the application.
Runtime Decoding/Decryption: The original strings are reconstructed at runtime, only when they are actually needed by the application.
Increased Analysis Difficulty: Attackers cannot simply search for specific keywords within the decompiled code to uncover sensitive information. They need to understand the obfuscation algorithm and potentially reverse-engineer the decoding/decryption process.
Example
Consider the following code snippet containing an API key:
An attacker examining the decompiled code would see only seemingly random strings, and would have to identify the encoding and reverse it (here, Base64-decode it) to uncover the actual API key and URL. More sophisticated techniques involving encryption would further complicate this process. Talsec provides a feature to address this need with a high level of data protection.
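A minimal sketch of the idea (the key value is a placeholder, and Base64 is used purely for illustration; real-world string obfuscators use per-string encryption rather than plain encoding):

```java
import java.util.Base64;

public class Secrets {
    // Without obfuscation, an attacker can grep the binary for the literal:
    //   static final String API_KEY = "sk_live_placeholder123";  // placeholder value
    //
    // With obfuscation, only the encoded form is embedded in the binary...
    static final String ENCODED_API_KEY = "c2tfbGl2ZV9wbGFjZWhvbGRlcjEyMw==";

    // ...and the original string is reconstructed only at runtime,
    // when the application actually needs it.
    public static String apiKey() {
        return new String(Base64.getDecoder().decode(ENCODED_API_KEY));
    }
}
```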
C) Control-Flow Obfuscation
Control-flow obfuscation aims to make the application's control flow – the order in which instructions are executed – more complex and difficult to follow. This is achieved by introducing artificial complexity.
Key Concepts
Opaque Predicates: Inserting conditional statements whose outcome is always known at runtime but is difficult for an attacker to determine statically. This creates "dead code" paths that complicate analysis.
Bogus Code Insertion: Injecting code that has no functional impact on the application's behavior but serves to confuse and mislead attackers.
Branching and Jumps: Replacing straightforward sequential execution with a web of conditional and unconditional jumps, making it harder to trace the logical flow.
Consider a simple conditional check, such as verifying a user's age before granting access. Control-flow obfuscation might transform it into a more convoluted structure involving opaque predicates and unnecessary jumps, making the simple conditional logic much harder to follow.
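A small, hypothetical before/after sketch: the predicate (x * x) % 4 == 3 can never be true for an integer x (squares are 0 or 1 mod 4, and Java's remainder of a negative overflowed square is never positive 3), so the guarded branch is dead code that exists only to mislead static analysis.

```java
public class Eligibility {
    // Original, readable logic.
    public static boolean isEligiblePlain(int age) {
        return age >= 18;
    }

    // Obfuscated sketch: an opaque predicate guards a dead branch, and
    // the real comparison is reshaped across extra branching.
    public static boolean isEligibleObfuscated(int age) {
        int x = age * 31 + 7;
        if ((x * x) % 4 == 3) {   // opaque predicate: always false
            return age < 0;       // dead code, never executed
        }
        boolean result;
        if (age - 18 >= 0) {      // the real comparison, reshaped
            result = true;
        } else {
            result = false;
        }
        return result;
    }
}
```

Both functions agree on every input; only the shape of the control flow differs, which is exactly what complicates decompiler output.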
Warning: Code Packing and Encryption are Unsuitable for Modern Apps
Code packing and app binary encryption were once popular for protecting app binaries from reverse engineering, typically compressing executables with a runtime unpacking routine.
Today, these techniques are no longer commonly used and may be restricted by app stores. Apple requires disclosures for encryption use, while Google Play flags suspicious packing via Play Protect.
How To Detect Video Injection for KYC
For many KYC (Know Your Customer) vendors, video stream injection is the "final boss" of fraud. It’s the process of bypassing a smartphone’s physical camera sensor to feed pre-recorded or AI-generated deepfakes directly into the application's media pipeline.
If successful, an attacker can register thousands of fraudulent accounts using stolen identities without ever showing their real face.
The good news? Most common injection tools rely on a compromised system state that Talsec RASP for Android and iOS already detects.
How Is Video Injected
Attackers typically use three main vectors:
Hooking: Using LSPosed or VCAM modules to intercept Camera API calls and swap the live feed for a file like virtual.mp4.
Emulators: Running the app in BlueStacks or Nox and using OBS VirtualCam to map a PC video feed as the "phone camera".
Automation: Using the Appium framework to script the entire KYC process, often utilizing plugins that instrument the app to inject images.
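Of these vectors, emulator use is the simplest to illustrate: detection often starts from device build properties. The sketch below is a simplified, hypothetical heuristic (the marker list is invented for illustration; on Android the inputs would come from android.os.Build, and products like Talsec combine many more signals than this):

```java
import java.util.List;

public class EmulatorHeuristic {
    // Substrings that commonly appear in emulator build properties.
    // Illustrative list; neither exhaustive nor current.
    static final List<String> EMULATOR_MARKERS =
            List.of("generic", "goldfish", "ranchu", "vbox", "bluestacks");

    // Inputs are passed in explicitly so the check is testable off-device;
    // on Android they would be Build.FINGERPRINT, Build.MODEL, Build.HARDWARE.
    public static boolean looksLikeEmulator(String fingerprint, String model, String hardware) {
        String haystack = (fingerprint + " " + model + " " + hardware).toLowerCase();
        for (String marker : EMULATOR_MARKERS) {
            if (haystack.contains(marker)) return true;
        }
        return false;
    }
}
```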
The Solution: Talsec's Defensive Mapping
Because these tools require specific "illegal" environments to function, Talsec’s core features act as a multi-layered filter that stops the injection before the camera even opens.
Threat Vector / Talsec Relevant Feature / Why it Works:
LSPosed with VCAM Module / Root & Hook Detection: VCAM requires a rooted device (Magisk) and an active hooking framework (LSPosed/Frida) to function. Talsec can kill the session the moment it sees these artifacts.
Emulators (BlueStacks) + OBS / Emulator Detection: Injections via OBS happen at the virtualization layer. Talsec detects common emulators and can block the app entirely.
Appium Framework / Automation Detection: Appium leaves traces in the uiautomator service and often requires ADB/Developer Options to be enabled, both of which Talsec detects.
Repackaged Testing Builds / App Integrity Checks: Attackers sometimes re-sign the APK to disable security for automation. Talsec’s signature and binary integrity checks prevent these modified builds from running.
*This information can be securely evaluated on the customer backend endpoint if Talsec AppiCrypt is used as well for enhanced security
Developer Pro-Tip
To maximize your KYC security, ensure you are utilizing Talsec’s full suite rather than just one module. By closing the door on Root, Hooks, Emulators, and Automation, you effectively neutralize the majority of software-based video injection tools used in the wild today, without needing complex video-analysis processing pipelines.
freeRASP for Unreal Engine: Secure Your Revenue
Shield your Unreal Engine mobile game with freeRASP, a free and developer-friendly runtime application self-protection solution for Android and iOS.
Next-Gen Security for Your Unreal Engine Game
In today’s competitive mobile market, creating a stunning game with Unreal Engine is only half the battle. Protecting your creation is just as critical - it’s about safeguarding your revenue, preserving player trust, and ensuring a fair and competitive environment. For every developer, from solo indie to established AAA studio, runtime threats like app tampering, emulators, debuggers, and modified clients are significant dangers that can compromise your game’s integrity and impact your revenue:
Revenue Threats
Piracy: unauthorized distribution of paid apps for free
Theft of in-app purchases
Hacking or manipulation of in-game currency
That’s where freeRASP for Unreal Engine (Android + iOS) comes in.
Visit the to get started with installation.
The Missing Piece in Your Unreal Project
Talsec’s freeRASP security solution now integrates directly with Unreal Engine through a dedicated plugin, empowering developers who target Android and iOS. This powerful early-release plugin serves as the bridge to our robust, real-time threat detection technology, allowing you to connect it to your projects whether you work with C++ or exclusively in Blueprints.
The official Unreal Engine plugin is designed to fit your workflow:
Simple Setup: Get started quickly by adding the plugin to your project’s Plugins folder.
Centralized C++ Configuration: Set up all your security parameters for Android (package names, certificate hashes) and iOS (bundle IDs, team identifiers) directly in your game’s startup code for clean, version-controllable management.
React to Threats Your Way: Use Unreal’s native delegate system to trigger custom logic when a threat is found. Whether you need to flag a player, end a session, or simply log the event, you have complete control.
This makes it simple to react to critical threats such as:
Emulator usage
Rooted or jailbroken devices
Hooking frameworks and debugging attempts
This puts the power back in your hands, allowing you to safeguard your game’s integrity from the most common exploits.
Why it Matters for Game Developers
Unreal Engine empowers you to build breathtaking, high-fidelity worlds. But that very complexity and quality make your game a prime target for malicious actors who use rooted devices, simulators, or modified game clients to gain an unfair advantage, exploit game logic, or steal in-game assets.
freeRASP gives you the armor your game deserves, acting as a silent guardian that protects the intricate systems you’ve worked so hard to build.
What Are You Waiting For?
The first step to a more secure game is right here. Visit the to get started with installation, configuration, and implementing your security logic. Don’t wait for a threat to strike - protect your players and your project today!
Keynote: Discovering the Power of AI Pentesting with Pedro Conde (Ethiack)
Why AI Pentesting Now?
Pedro Conde, an AI Scientist at Ethiack specializing in autonomous ethical hacking, delivered a compelling presentation on the power of AI pentesting, outlining three key objectives: to demystify AI pentesting, to demonstrate the current capabilities of these systems, and to emphasize that AI systems are already very capable and "different from human beings".
Conde provided a historical context for the rise of AI pentesting, noting the progression from classical machine learning to deep learning, then to Large Language Models (LLMs), and finally to Agentic AI, which is the category AI pentesting systems fall into. Agentic AI systems often utilize LLMs as a base but possess the ability to interact with the environment, extending beyond simple reasoning, predictions, and generation. These fully autonomous ethical hacking systems, which Ethiack calls "hackbots," can perform a complete pen-testing session, including finding vulnerabilities, without human intervention.
This autonomy offers advantages such as continuous 24/7 testing, high scalability through parallelization, and the ability to dynamically adapt to targets.
How Hackbots Work Under the Hood
Conde detailed the four main building blocks of robust hackbot systems:
The 'brains': multiple interacting LLMs for central reasoning, planning, and decision-making.
The 'structure': the skeleton for the agents, coordinating them, managing memory, and ensuring efficiency.
The 'prompts': translating human objectives into agent behavior and ensuring goal alignment.
The 'tools': extending the agents' capabilities to interact with the environment, perform actions like running scripts, and validate outputs.
A major limitation of AI systems, especially in pentesting, is 'AI hallucinations,' particularly false positives. Ethiack combats this by using deterministic tools and a specialized 'verifier' agent. The verifier takes a step back to reflect on the hackbot's reasoning, challenges and rechecks conclusions, and filters out weak or flawed inferences, which significantly decreases the false positive rate and increases precision.
Additionally, to prevent destructive behavior, a three-layered guardrail system is used: a prompt-level guardrail shaping model behavior with clear instructions, a deterministic filter for rule-based checks on environmental interactions, and a third-layer LLM agent for contextual judgment on complex cases.
Hackian: Real‑World Demo
The presentation featured a demonstration of Ethiack's hackbot, "Hackian," which "absolutely demolished" a genetics research platform called Genequest during a Defcon challenge. Hackian achieved a full system compromise in under four hours, finding two critical vulnerabilities, including one that neither human pentesters nor the challenge organizers were aware of.
Hackian first bypassed front-end registration restrictions by hitting the register endpoint directly, mapped the microservices ecosystem, and then exploited a debug endpoint in the DNA analysis service that was vulnerable to command execution.
The second critical bug allowed Hackian to read arbitrary files on the system (like /etc/passwd) by sending file paths to the /analyze endpoint, which used the Clojure slurp function without validation. Conde concluded that the core message is not that AI systems are or will be better than humans, but that they are different and find different types of vulnerabilities, sometimes finding "quirks that humans may disregard". Therefore, organizations must test their assets with these systems to prevent "bad guys" from exploiting them.
Currently, Ethiack's hackbot is focused on web applications, though future development may include mobile applications.
Thank you Pedro, the Ethiack team, and Hackian for showcasing how agentic AI can transform penetration testing and uncover vulnerabilities that traditional approaches miss. Your work pushes the boundaries of what ethical hacking can achieve and highlights why defenders must start thinking in terms of AI-native offensive capabilities as well!
Root Detection Best Practices for Developers
To effectively implement root detection in your applications, consider the following best practices:
Multi-layered detection: Use several detection techniques together to reduce false positives. Combine file system checks, binary analysis, and behavioral monitoring.
Keep Detection Methods Current: The rooting landscape continually evolves; new rooting methods and hiding techniques emerge (e.g., the shift from SuperSU to Magisk, or Magisk’s DenyList replacing MagiskHide), so design your security process to accommodate updates to your root detection. Periodically review and update the root indicators you check for, add checks for novel root tools, and remove checks that are no longer relevant.
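To make the multi-layered idea concrete, the sketch below combines three independent layers: file-system checks for su binaries, a behavioral check via `which su`, and a build-tag check. The paths and property values are common indicators chosen for illustration; a production detector (such as a RASP SDK) uses many more, regularly updated signals.

```java
import java.io.File;
import java.io.IOException;
import java.util.List;
import java.util.Scanner;

public class RootChecks {
    // Layer 1: file-system markers of common rooting tools (illustrative list).
    static final List<String> SU_PATHS = List.of(
            "/system/bin/su", "/system/xbin/su", "/sbin/su",
            "/system/app/Superuser.apk", "/data/adb/magisk");

    public static boolean hasSuBinary() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) return true;
        }
        return false;
    }

    // Layer 2: behavioral check: does `which su` resolve to anything?
    public static boolean suOnPath() {
        try {
            Process p = new ProcessBuilder("which", "su").start();
            try (Scanner s = new Scanner(p.getInputStream()).useDelimiter("\\A")) {
                return s.hasNext() && !s.next().isBlank();
            }
        } catch (IOException e) {
            return false;
        }
    }

    // Layer 3: build-tag check; on Android, Build.TAGS containing
    // "test-keys" suggests a custom or insecurely signed ROM.
    public static boolean hasTestKeys(String buildTags) {
        return buildTags != null && buildTags.contains("test-keys");
    }

    // Combine the layers: any single positive marks the device as suspect.
    public static boolean isLikelyRooted(String buildTags) {
        return hasSuBinary() || suOnPath() || hasTestKeys(buildTags);
    }
}
```

Note that the combined verdict is only meaningful on-device (on a desktop Linux machine, `which su` naturally resolves); the pure build-tag layer can be exercised anywhere.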
Keynote: Fingerprinting, Device Intel & Context with Martin Makarský (Fingerprint)
Martin Makarský, Director of Engineering at Fingerprint, delivered a keynote shifting the focus from simply collecting data to the critical challenge of interpreting that data in the context of anti-fraud efforts. He emphasized that the real difficulty in anti-fraud is "deciding what it means". The core message is that sophisticated device identification and intelligence must be coupled with human-defined business logic to effectively fight fraud without driving away legitimate users.
Year in Talsec RASP SDK: Highlights from 2025
This year, we confirmed our position as #1 RASP with top root detection. We embraced Kotlin Multiplatform and gaming engines like Unity and Unreal Engine, while finally delivering the long-awaited check completion API. Security was bolstered with new spoofing detections (WiFi, time, location) and highly requested screen leakage protection.
🔐 Root Detection
These rooting tools gave us a hard time, but overcoming them moved us to the first league of RASP root detectors. More in progress.
How to Detect Root on Flutter
Need to secure your Flutter app against rooted devices? Start here.
If your Flutter app runs on a rooted Android phone, attackers can tamper with it, inject malicious code, or bypass security checks. Root detection helps you protect sensitive data and maintain app integrity.
What is rooting?
Rooting removes Android’s built-in restrictions and grants privileged (root) access to the device. With root access, users (or attackers) can:
An exclusive preview of the technology that will define tomorrow's mobile security.
Revolutionizing Mobile Security: AI-Powered Device Risk Summary
For banking applications, fintech platforms, and any app where sensitive operations occur, preventing mobile fraud is critical. The challenge is to implement robust security without creating friction for the user. Imagine preventing fraudulent transfers or account takeovers with a seamless, user-empowering flow. This powerful, dynamic approach is the future of mobile security, moving beyond static defense to an interactive shield that helps, not just hinders.
What if you could not only detect a critical threat on a user's device but also guide them to fix it and complete their action securely, all within moments? At Talsec, we're thrilled to unveil a groundbreaking new capability that does just that. Let's walk you through a real-world scenario to demonstrate the power of our advanced security, culminating in our new AI Device Risk Summary.
Jailbreaking in iOS – while interesting from a device owner’s perspective – poses a serious challenge for mobile app developers concerned about security. A jailbroken device can undermine an app’s protections, opening the door to data breaches, fraud, and intellectual property theft. As we’ve explored, understanding what jailbreaking is and how it’s achieved is the first step. From there, implementing robust iOS jailbreak detection mechanisms is critical for any app that handles sensitive data or transactions.
By using a combination of the techniques discussed – from simple file checks to advanced RASP tools – developers can detect jailbroken devices with a high degree of confidence. The goal isn’t to win the cat-and-mouse game once and for all, but to make your app a less attractive target. Effective jailbreak detection and response (such as disabling certain features or refusing to run on jailbroken devices) significantly strengthens your app’s security posture. It ensures that the app’s own defenses (encryption, authentication, etc.) haven’t been rendered ineffective by a compromised OS. In essence, jailbreak detection acts as a guardian: if the platform is untrustworthy, the app can take precautions or shut down to prevent further damage.
In practice, a clear mobile app security policy should include jailbreak/root detection alongside other measures like secure communication, code obfuscation, and intrusion detection. Many industries (finance, healthcare, enterprise) now consider jailbreak detection a must-have, and users are often educated that for safety, certain apps won’t run on modified devices. By staying updated on the latest jailbreak developments and using the right tools (for example, integrating services like freeRASP or upgrading to enterprise solutions like RASP+ for more comprehensive protection), developers can keep up in this ever-evolving security battle.
To conclude, effective jailbreak detection strengthens mobile app security by ensuring your application only runs in trusted environments. It protects your app from running under conditions where it could be exploited or misused. For developers, investing time in jailbreak detection and response is well worth it — it’s about safeguarding your users and your business from the risks that come with jailbroken devices. With the knowledge from this guide, you can implement a layered jailbreak protection strategy that makes your app resilient against one of the most prevalent iOS security threats. Secure apps mean safer data and happier users, and that ultimately benefits everyone in the mobile ecosystem.
How Does Jailbreaking Impact Mobile App Security?
When an iOS device is jailbroken, the security model of the OS is fundamentally altered. This has several implications for mobile app security:
Untrusted Operating Environment
Apps on a jailbroken phone run in an environment where system integrity can’t be guaranteed. Malicious tweaks or processes could be running with root privileges alongside your app. As a result, your app cannot assume that critical security barriers (like the app sandbox or entitlements) are intact. As noted by security researchers, the presence of a jailbreak means the OS security can no longer be adequately trusted by applications.
Elevated Risk of Data Breaches
Jailbreaking removes many of the iOS restrictions that protect user data. For example, an attacker with physical or remote access could read files from your app’s sandbox or Keychain that would normally be protected. Apple warns that jailbreaking eliminates layers of security designed to protect personal information. This could lead to data theft, where hackers steal sensitive information from a jailbroken device.
Ease of App Tampering and Reverse Engineering
In a jailbroken device, a user or attacker can hook into the app’s process or modify it at runtime. Tools like Frida, Cycript, or tweaks installed via Cydia/Substrate can intercept function calls or modify an app’s behavior on the fly. This means features like anti-cheat mechanisms, license checks, or cryptographic routines in your app could be bypassed or altered. The barrier to reverse-engineer the app’s code is also lower, since jailbreak users have easier access to the app’s binary and memory. Mobile app security is undermined when attackers can inspect and modify the app freely in a jailbroken environment.
Potential for Malware Injection
Since jailbreaking allows installation of apps from outside Apple’s ecosystem, a jailbroken device may inadvertently run unvetted, malicious software. Such malware could target other apps on the device (including yours) by injecting code or logging keystrokes/API calls. For instance, spyware could attach to a banking app on a jailbroken phone and capture login credentials. This jailbreak-enabled malware is a real threat, and it’s one reason many enterprise or banking apps refuse to run on jailbroken devices as a security precaution.
Delayed iOS Updates and Known Vulnerabilities
Jailbreak enthusiasts often hold off on updating iOS to maintain their jailbreak, since each iOS update may patch the exploit they rely on. This means jailbroken devices are frequently running outdated versions of iOS with unpatched security flaws. From a developer’s perspective, not only is the device compromised by the jailbreak itself, but it may also be vulnerable to known iOS exploits that Apple has already fixed in newer releases. In a corporate environment or any context where device compliance matters, a jailbroken (and likely outdated) device poses a serious risk.
In summary, jailbreaking undermines the security assumptions that iOS apps rely on. Mobile app security defenses like encryption, code signing, and sandboxing can be subverted. This is why many developers implement jailbroken device detection in their apps and may restrict functionality or block usage if a jailbreak is detected. Next, let’s look at how jailbreaking is done and which tools are popular, to better understand what we’re up against.
Jailbreak Detection
Understand what jailbreak detection is, why jailbroken iOS devices put sensitive apps at risk, and how developers can detect compromised devices and strengthen mobile app security.
Jailbreak detection on iOS is a critical security measure for apps that handle sensitive data or enforce strict compliance requirements. Jailbreaking removes Apple's built-in security restrictions, allowing users to gain root access and modify system files. While this can be beneficial for customization and advanced usage, it also opens the door to security threats such as malware, unauthorized modifications, and bypassing in-app protections. Developers implement jailbreak detection to identify compromised devices and take appropriate actions, such as restricting access or triggering security alerts, to safeguard user data and prevent fraud.
However, detecting a jailbreak is increasingly challenging due to sophisticated evasion techniques. Tools like Liberty Lite and Shadow allow users to hide jailbreak status from security checks, making traditional detection methods less effective. To combat this, modern approaches rely on a combination of runtime integrity checks, file system analysis, and behavior-based monitoring. While no detection method is foolproof, a layered security strategy helps increase resilience against tampering. Ultimately, jailbreak detection is just one piece of a comprehensive mobile security framework that includes app hardening, runtime protections, and continuous monitoring to mitigate risks effectively.
Why should developers and organizations care about jailbreaking?
A jailbroken device can introduce significant security vulnerabilities. Attackers or malicious tools could exploit the elevated access to read or modify sensitive data in your app. Jailbreaking essentially lowers the iOS defenses, making it easier for malware, spyware, or unauthorized scripts to run. For example, banking and payment apps risk exposure of private keys or customer data if the device is compromised. Even if the user’s intent is benign, a jailbroken environment creates uncertainty. Many mobile app security standards recommend blocking or at least detecting jailbroken devices to protect both the user and the service. In the next sections, we will explore how jailbreaking impacts app security, what techniques are used to jailbreak iPhones, and how developers can implement iOS jailbreak detection and jailbreak protection in their apps.
How does an app “detect” hooking?
There are a few approaches:
• The app can check its own integrity and environment at runtime. If something doesn’t look as expected (for example, a critical function’s code has been altered in memory, or an unexpected library is loaded into the app’s process), the app might suspect a hook.
• It can also look for known footprints of hooking frameworks. Many hooking tools leave telltale signs (specific file names, process names, or injected code patterns) that can be recognized. For instance, if a well-known hooking tool is attached, the app might notice unusual debug connections or the presence of classes and methods that only exist when a framework like Xposed or Frida is in use.
• Hook detection often goes hand-in-hand with root detection or jailbreak detection. Since hooking typically requires elevated privileges, an app that finds a device is rooted/jailbroken will treat it as a higher-risk environment and may assume a hooking attack is possible. Some apps refuse to run in such cases or operate in a limited mode.
In essence, hook detection is any check or safeguard that allows an app to sense “I’m being watched or controlled by someone else’s code right now.” Once detected, the app can then respond (for example, by shutting down, disabling sensitive features, or alerting the user).
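As a sketch of the footprint-scanning idea, assuming a Linux/Android environment where /proc/self/maps lists every library mapped into the current process (the marker names below are illustrative, not an exhaustive list):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class HookDetector {
    // Library-name fragments that well-known hooking frameworks load
    // into a target process. Illustrative, not exhaustive.
    static final List<String> MARKERS = List.of("frida", "xposed", "substrate");

    // Pure helper: scan the text of a process memory map for the markers.
    static List<String> findInjectedLibraries(String mapsContent) {
        List<String> found = new ArrayList<>();
        for (String line : mapsContent.split("\n")) {
            for (String marker : MARKERS) {
                if (line.toLowerCase().contains(marker) && !found.contains(marker)) {
                    found.add(marker);
                }
            }
        }
        return found;
    }

    // On Android/Linux, /proc/self/maps lists every library mapped into
    // our own process; an injected hooking agent shows up there.
    static boolean hookFrameworkPresent() {
        try {
            Path maps = Path.of("/proc/self/maps");
            return Files.exists(maps)
                    && !findInjectedLibraries(Files.readString(maps)).isEmpty();
        } catch (IOException e) {
            return false; // unreadable map is treated as "no signal" here
        }
    }
}
```

A real detector would combine this with the integrity and environment checks above, since a hooking framework can also tamper with the file read itself.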
Ad stripping: removal of ads that generate developer revenue
// Example: handling freeRASP security threat callbacks in Unreal Engine
void AFreeRASPPlayerController::HandleSecurityThreat(ThreatType Threat)
{
    UE_LOG(LogTemp, Warning, TEXT("Security threat detected: %d"), static_cast<int32>(Threat));
    switch (Threat)
    {
    case ThreatType::OnPrivilegedAccess:
        UE_LOG(LogTemp, Warning, TEXT("Privileged access threat detected"));
        break;
    case ThreatType::OnAppIntegrity:
        UE_LOG(LogTemp, Warning, TEXT("App integrity threat detected"));
        break;
    case ThreatType::OnDebug:
        UE_LOG(LogTemp, Warning, TEXT("Debug threat detected"));
        break;
    case ThreatType::OnSimulator:
        UE_LOG(LogTemp, Warning, TEXT("Simulator threat detected"));
        break;
    case ThreatType::OnUnofficialStore:
        UE_LOG(LogTemp, Warning, TEXT("Unofficial store threat detected"));
        break;
    case ThreatType::OnHookDetected:
        UE_LOG(LogTemp, Warning, TEXT("Hook threat detected"));
        break;
    case ThreatType::OnObfuscationIssues:
        UE_LOG(LogTemp, Warning, TEXT("Obfuscation issues threat detected"));
        break;
    case ThreatType::OnScreenshot:
        UE_LOG(LogTemp, Warning, TEXT("Screenshot threat detected"));
        break;
    case ThreatType::OnScreenRecording:
        UE_LOG(LogTemp, Warning, TEXT("Screen recording threat detected"));
        break;
    case ThreatType::OnDevMode: // enum value name assumed for the dev-mode threat
        UE_LOG(LogTemp, Warning, TEXT("Dev mode threat detected"));
        break;
    case ThreatType::OnADBEnabled:
        UE_LOG(LogTemp, Warning, TEXT("ADB enabled threat detected"));
        break;
    case ThreatType::OnSystemVPN:
        UE_LOG(LogTemp, Warning, TEXT("System VPN threat detected"));
        break;
    // ... handle any remaining ThreatType values
    default:
        break;
    }
}
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
- An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Generic Applicability: This technique is indispensable and straightforward, as it is a compiler feature that comes at no additional cost yet yields a significant security advantage.
Exception Handling Abuse: Using exception handling mechanisms in non-standard ways to alter the control flow.
State Machine Transformation: Converting linear code sections into complex state machines, obscuring the original logic.
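A toy, hand-written illustration of the state-machine transformation (real obfuscators automate this and generate far more states):

```java
public class StateMachineObfuscation {
    // Linear version: three sequential steps computing ((x + 3) * 2) - 1.
    static int linear(int x) {
        int a = x + 3;
        int b = a * 2;
        return b - 1;
    }

    // State-machine version: the same steps driven by a dispatch loop, so
    // the original straight-line control flow is no longer visible. The
    // states are deliberately numbered out of execution order.
    static int stateMachine(int x) {
        int state = 0;
        int acc = x;
        while (true) {
            switch (state) {
                case 0: acc += 3; state = 2; break;
                case 2: acc *= 2; state = 1; break;
                case 1: acc -= 1; state = 3; break;
                case 3: return acc;
            }
        }
    }
}
```

Both functions compute the same result, but a decompiler sees a loop over an opaque state variable instead of three obvious sequential statements.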
Their decline is largely due to widespread misuse by malware and incompatibility with modern app distribution policies.
Automation Detection
Appium leaves traces in the uiautomator service and often requires ADB/Developer Options to be enabled, both of which Talsec detects.
Repackaged Testing Builds
App Integrity Checks
Attackers sometimes re-sign the APK to disable security for automation. Talsec’s signature and binary integrity checks prevent these modified builds from running.
LSPosed with VCAM Module
Root & Hook Detection
VCAM requires a rooted device (Magisk) and an active hooking framework (LSPosed/Frida) to function. Talsec can kill the session the moment it sees these artifacts.
Emulators (BlueStacks) (+ OBS)
Emulator Detection
Injections via OBS happen at the virtualization layer. Talsec detects common emulators and can block the app entirely.
Integration with app logic: For critical apps (like financial services), run root detection continuously while the app is open, so the device’s root status is verified throughout the session rather than only at launch.
Avoid Hardcoding and Obvious Logic: If your detection logic is too rigid or all in one place, attackers can figure it out by decompiling your APK. Don’t hardcode file names or root indicators in plaintext if you can avoid it — attackers might search the APK for strings like “/system/xbin/su” and simply modify your code to skip that check. Instead, consider computing values at runtime (e.g., assemble file paths or property names dynamically) so they’re not plainly visible in code. Leverage code obfuscation tools (like ProGuard/R8) to rename classes and methods related to security checks.
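A small sketch of the runtime-assembly idea (helper names are ours; a commercial obfuscator does this systematically and less predictably):

```java
public class HiddenIndicators {
    // Build "/system/xbin/su" from character arrays at runtime so the
    // full literal never sits in the constant pool for `strings` to find.
    static String assembleSuPath() {
        char[][] parts = {
                {'s', 'y', 's', 't', 'e', 'm'},
                {'x', 'b', 'i', 'n'},
                {'s', 'u'},
        };
        StringBuilder sb = new StringBuilder();
        for (char[] part : parts) {
            sb.append('/').append(part);
        }
        return sb.toString();
    }

    // Alternatively, keep indicator strings XOR-encoded and decode on use.
    // The key is arbitrary; this hides strings, it does not encrypt them.
    static String decode(byte[] encoded, byte key) {
        byte[] out = new byte[encoded.length];
        for (int i = 0; i < encoded.length; i++) {
            out[i] = (byte) (encoded[i] ^ key);
        }
        return new String(out, java.nio.charset.StandardCharsets.US_ASCII);
    }
}
```

Neither trick stops a determined reverse engineer, but it removes the easy win of grepping the APK for well-known root indicator strings.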
Pros and Cons of Popular Root Detector Solutions (free and paid)
Choose the root detection solution that aligns with your goals. Free tools like RootBeer, freeRASP, or Play Integrity provide basic protection — but premium offerings like Talsec RASP+ bring robust features and peace of mind.
(free, open-source, in-app, used by 5000+ apps)
Pros:
- Open-source library with simple integration
- Checks for common root indicators
Cons:
- Easily bypassed by tools like UnRootBeer or custom kernels
- Relies on predefined threat lists, missing newer root methods
- Prone to false positives

(free, reliable, in-app, used by 6000+ apps)
Pros:
- Actively maintained with frequent updates
- Detects root/jailbreak indicators and common hiding tools (Magisk/Shamiko)
- Lightweight integration
Cons:
- Less resilient to bypass compared to paid tiers (binary not app-bound)
- Adds 4 MB to app size
- Sends threat data to Talsec-managed servers by default
Who Decides? Fingerprinting, Device Intelligence, and Context in Fighting Fraud
Effective anti-fraud systems depend not only on collecting data but on interpreting that data correctly. The central challenge lies in deciding what collected signals actually mean. Advanced device identification and intelligence must work alongside clearly defined business logic to combat fraud effectively without alienating legitimate users.
Key Concepts in Anti-Fraud Systems
Several foundational technologies shape modern anti-fraud strategies:
Fingerprinting: Fingerprinting focuses on identifying environments rather than tracking individuals. It combines ordinary system signals that are insignificant on their own but together uniquely describe an environment and form a stable signature. This enables recognition of the same browser even after cookies are cleared. Modern fingerprinting systems apply machine learning to hundreds of signals, assigning higher weight to stable attributes such as GPU or installed fonts and lower weight to volatile signals like time zone, producing a confidence-based similarity score.
Device or Browser Identifier: Most anti-fraud systems rely on a stable identifier derived from fingerprinting. This identifier helps distinguish returning trusted users from first-time visitors or potentially risky anomalies, reducing unnecessary friction for known users.
Device Intelligence: Device intelligence does not identify who a user is, but rather what type of environment they are using. It transforms raw signals into actionable context, revealing intent-related indicators such as VPN usage inconsistent with local time zones or sessions running in incognito mode.
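A sketch of the confidence-based weighting described above (signal names and weights are invented for illustration; production systems use hundreds of signals with learned weights):

```java
import java.util.Map;

public class FingerprintSimilarity {
    // Stable attributes (GPU, fonts) get higher weight, volatile ones
    // (time zone) lower. The names and weights are illustrative only.
    static final Map<String, Double> WEIGHTS = Map.of(
            "gpu", 0.40,
            "fonts", 0.35,
            "screen", 0.15,
            "timezone", 0.10);

    // Confidence-style score: weighted fraction of matching signals,
    // so a changed time zone barely moves the score while a changed
    // GPU strongly suggests a different environment.
    static double similarity(Map<String, String> a, Map<String, String> b) {
        double total = 0, matched = 0;
        for (Map.Entry<String, Double> w : WEIGHTS.entrySet()) {
            total += w.getValue();
            String va = a.get(w.getKey());
            if (va != null && va.equals(b.get(w.getKey()))) {
                matched += w.getValue();
            }
        }
        return matched / total;
    }
}
```

The score can then feed a threshold decision ("same environment with high confidence") rather than a brittle exact-match identifier.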
The Scale of the Fraud Problem
Fraud has reached a scale that makes advanced identification and intelligence unavoidable:
Online payment fraud is projected to cause approximately $350 billion in global losses between 2023 and 2027.
Card fraud alone accounts for roughly $50 billion in losses each year.
A single aggregated breach database contains 16 billion compromised login credentials—nearly twice the global population.
83% of organizations report experiencing at least one account takeover attack.
The Friction–Trust Tradeoff
A common reaction to rising fraud is to increase friction through additional challenges such as two-factor authentication. However, excessive friction leads to user fatigue and abandonment. Device intelligence enables a more balanced approach by allowing systems to trust users with confidence.
This approach relies on adaptive security:
When device, environment, and behavior align with known trusted patterns—such as consistent device usage, location, and activity—no interruption is necessary.
When signals change unexpectedly, such as a new device, operating system, or identifier, systems can introduce proportional friction. This may include step-up authentication, temporary restrictions on high-risk actions, or escalation for manual review.
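This adaptive policy can be sketched as a simple decision function (thresholds and rules are illustrative; in a real system they encode your own business logic and risk tolerance):

```java
public class AdaptiveFriction {
    enum Friction { NONE, STEP_UP_AUTH, RESTRICT_HIGH_RISK, MANUAL_REVIEW }

    // Proportional friction: trusted patterns pass untouched, and
    // unexpected changes trigger escalating responses.
    static Friction decide(boolean knownDevice, boolean knownLocation,
                           boolean highRiskAction, double anomalyScore) {
        if (knownDevice && knownLocation && anomalyScore < 0.2) {
            return Friction.NONE;               // everything matches: no interruption
        }
        if (anomalyScore >= 0.8) {
            return Friction.MANUAL_REVIEW;      // drastic change: escalate to a human
        }
        if (highRiskAction) {
            return Friction.RESTRICT_HIGH_RISK; // temporarily limit transfers, payouts
        }
        return Friction.STEP_UP_AUTH;           // moderate anomaly: ask for 2FA
    }
}
```

The point is that the algorithm only enforces the policy; a human chose the thresholds and what each friction level means for the business.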
Who Makes the Decision?
Balancing security and user experience raises a critical question: should algorithms make trust decisions automatically, or should humans define the rules?
Fully automated, out-of-the-box decisions (such as blocking all VPN traffic or rooted devices) fail because fraud is inherently contextual. A VPN may indicate abuse for one business and privacy-conscious behavior for another. A rooted device may represent a serious risk in mobile banking but a legitimate testing environment for quality assurance.
Universal models cannot account for a company’s specific users, business model, or risk tolerance. Effective systems should inform decisions rather than dictate them. Humans must define acceptable risk and friction, while algorithms enforce those decisions at scale.
Architecture for Adaptive Anti-Fraud
Robust anti-fraud systems operate across multiple layers:
Device Layer: Ensures runtime integrity by verifying that the application or browser environment has not been tampered with.
Edge Layer: Enables rapid responses through CDNs or firewalls, allowing immediate blocking or challenging of suspicious activity without code changes, deployments, or infrastructure updates.
Back-End Layer: Applies contextual interpretation and enforces decisions based on business logic and risk models.
This layered approach enables rapid reaction to emerging fraud patterns. For example, a rule deployed at the edge can instantly block sign-up attempts originating from incognito sessions, reducing response time from hours to seconds.
While systems can score risk in real time, defining trust, acceptable risk, and user friction remains a human responsibility.
Thank you Martin and Fingerprint for sharing clear and practical insights into how fingerprinting, device intelligence, and contextual decision-making shape effective anti-fraud strategies.
This article delves into the concept of obfuscation, explores its different types, and articulates Talsec's philosophy on its application. We believe in a balanced and pragmatic approach, prioritizing security without compromising performance.
We have introduced new Starter plans that make it easier for teams to move from testing to commercial launch with predictable pricing and built‑in SLAs. The RASP+ Starter plan offers advanced in‑app protection with a Bronze SLA for apps up to 10K downloads, while the Full App Safety Suite Starter combines RASP+, Hardening, AppiCrypt, and Anti‑Malware into a single package for early‑stage production deployments.
Together with the existing freeRASP tier for exploration and low‑medium value apps, these new plans give companies a clear upgrade path from freeRASP to full‑scale protection: https://www.talsec.app/#plans
Thank You for an Incredible 2025!
We want to extend our heartfelt thanks to the professional and freeRASP community for helping us make a difference in mobile security. To our fans, our colleagues who work tirelessly to support everyone, and our adopters and supporters - your dedication inspires us every day. We don’t claim to have all the answers, but by partnering with industry experts and sharing knowledge, we continue to grow together. A strong, united community drives us forward and fuels our commitment to giving back to the broader mobile security ecosystem. Here’s to an even more secure and innovative 2026 - may it bring success, collaboration, and growth for us all!
Modify your app’s code or memory.
Inject malicious libraries using tools like Magisk or Xposed.
Bypass key protections such as SSL pinning.
It’s like removing the lock from your front door — anyone can walk in, change things, or steal information.
How common is rooting?
About 0.03% of Android devices are rooted. That may sound small, but at global scale it still means millions of devices. If your app handles sensitive data, you can’t ignore this risk.
Attackers use advanced tools like Magisk and Shamiko to hide root access. Basic DIY techniques involve:
Detection of suspicious binaries
Detection of suspicious processes
Checks for elevated permissions
Simple checks like these may catch older roots, but they quickly become outdated, and custom detection logic is time-consuming to build and hard to maintain.
While building your own solution offers control, it’s not recommended due to the time, effort, and expertise required to keep up. A better option is to use an actively maintained SDK that evolves with new attack methods.
DIY Coding Guide
You can implement a simple root detection check yourself like this:
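For instance, a bare-bones DIY check might look like the following sketch (the su paths are illustrative, and as noted above, such checks are easily bypassed by modern root-hiding tools):

```java
import java.io.File;
import java.util.List;

public class SimpleRootCheck {
    // Common locations of the su binary. The list is illustrative and
    // quickly goes stale, which is exactly the weakness of DIY checks.
    static final List<String> SU_PATHS = List.of(
            "/system/bin/su", "/system/xbin/su", "/sbin/su", "/su/bin/su");

    static boolean suBinaryPresent(List<String> paths) {
        return paths.stream().anyMatch(p -> new File(p).exists());
    }

    // Behavioral variant: try to execute su and see whether it succeeds.
    // Root-hiding tools such as Magisk's DenyList defeat both checks.
    static boolean suExecutable() {
        try {
            Process p = new ProcessBuilder("su", "-c", "id").start();
            return p.waitFor() == 0;
        } catch (Exception e) {
            return false; // su not found: expected on a non-rooted device
        }
    }

    static boolean isLikelyRooted() {
        return suBinaryPresent(SU_PATHS) || suExecutable();
    }
}
```

Treat this as a starting point only; the sections below explain why a maintained SDK is the more practical route.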
Use freeRASP (free library by Talsec)
With freeRASP, the root detection utilizes hundreds of advanced checks, offering robust detection even with hiding methods applied.
Strong detection (including Magisk 29+, Hide My Applist and Shamiko).
Used by 6000+ apps; #1 Mobile RASP SDK by popularity ()
Integration example
Add the dependency to your project and focus on implementing the following callback:
Key Takeaway
Rooted devices grant attackers privileged access, allowing them to tamper with apps, inject malicious code, or bypass critical protections like SSL pinning. Detection doesn’t have to be DIY or error-prone—simple checks for su binaries or elevated permissions are easily bypassed. Tools like freeRASP provide reliable, continuously updated detection with hundreds of advanced checks, letting you respond proactively to root threats and maintain app integrity.
👉 If you want root detection plus jailbreak, Frida, emulator, debugging, screenshot, and tampering protection in one free package, start with .
The Scenario
Imagine a user is excited to buy a new pair of shoes from your e-commerce mobile application. They have found the perfect pair, added them to the cart, and are ready to check out. However, there's a hidden problem: their device is infected with malware.
Step 1: The Initial Purchase Attempt
The user proceeds to the payment page, ready to enter their card details. Instantly, Talsec's in-app security kicks in as an automatic background security step-up.
Step 2: Intelligent Security Check and Risk Assessment
Here's where the magic begins. While the input fields for the user's sensitive card information are temporarily disabled, Talsec performs a comprehensive device security scan in the background. The result? A high-risk score is returned, confirming that critical threats are present on the device, making it unsafe to proceed with an EMV payment transaction.
Step 3: AI-Powered Risk Summary and Crystal-Clear Guidance
This is the game-changer. Rather than leaving the user confused and likely to abandon their cart, your app now presents them with Talsec's AI Device Risk Summary. This user-friendly interface clearly explains the problem.
Threat Report: A concise report informs the user that a specific threat has been found. In this case, it’s dangerous "SMS Forwarder" malware – a type of spyware that can intercept one-time passwords and other sensitive information sent via text message.
Remediation Steps: The summary provides simple, actionable guidance, instructing the user on how to locate and uninstall the malicious application from their device.
Step 4: From Threat to Trust
The user follows the straightforward instructions and successfully removes the SMS Forwarder malware. They are now confident that their device is clean and their information is safe.
Step 5: A Secure Second Attempt and a Successful Transaction
The user returns to your app and attempts the purchase again. This time, the Talsec security check runs and delivers a vastly improved, low-risk score. The system recognizes that the threat has been neutralized.
The sensitive card detail fields are now enabled. The user confidently enters their payment information, completes the EMV transaction, and their purchase is successful.
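The gating logic in this scenario can be sketched as follows (the score scale, threshold, and class names are illustrative assumptions, not Talsec's actual API):

```java
import java.util.List;

public class PaymentGate {
    // Sensitive card fields are enabled only when a fresh device scan
    // comes back under the risk threshold with no active threats.
    static final int MAX_ACCEPTABLE_RISK = 30;

    static boolean paymentFieldsEnabled(int riskScore, List<String> threats) {
        return riskScore <= MAX_ACCEPTABLE_RISK && threats.isEmpty();
    }
}
```

In the story above, the first scan (high score, "SMS Forwarder" present) keeps the fields disabled; after remediation, a clean low-score scan enables them.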
From Friction to Empowerment
This entire process turns a potentially disastrous security incident into a positive user experience. You have not just prevented fraud; you have empowered your user to secure their own device, building trust and loyalty in your brand.
Would you like to protect your app with our new AI Device Risk Summary and transform your users' security experience?
Contact us to get more information about this awesome new feature! Visit https://talsec.app and request a demo.
Keynote: Raising the Bar with Software Protection with Béatrice Creusillet (Quarkslab)
The EV Charger Hacking Case Study
Béatrice Creusillet, the R&D lead for the product division at Quarkslab, delivered a keynote on the critical need for software protection, illustrating her points with a real-world hacking story involving an electric vehicle (EV) charger. While absolute protection is impossible, layering defense mechanisms like obfuscation and integrity checks drastically raises the cost and expertise required for successful attacks.
Overview of the Attack
The presentation began with a story of three Quarkslab engineers participating in the Pwn2Own bug bounty competition. The target was a "Hotel Maxi Charger AC wallbox," an electric vehicle supply equipment (EVSE) or charger used for residential and commercial purposes.
The device is highly connected and supports USB, Ethernet, Bluetooth, Wi-Fi, and NFC communication. It is managed through a companion Android application that interfaces with both the charger and the vendor’s cloud infrastructure. The presentation examines how limited software protections significantly reduced the cost and complexity of the attack.
The attack was conducted in four distinct phases and required approximately 33 person-days of effort. Both the Android application and the device firmware implemented only minimal protection mechanisms, consisting primarily of application packing and light encryption.
Firmware Retrieval
The engineers could not extract the firmware directly from the charger, so they retrieved it via the Android companion application. The application was packed, requiring several days of dynamic analysis to unpack and retrieve the full application code. They then used static analysis to find the URL and tokens needed to download the encrypted firmware from the cloud.
Decryption
The firmware was only "lightly encrypted". It took a cryptanalyst three days to decrypt it, relying on expertise and "some lucky guesses".
Analysis and Vulnerability Finding
The engineers discovered the firmware was unprotected; it was not obfuscated and lacked common mitigations like ASLR and stack protection. Although the binary was stripped (no symbols), they identified the operating system as FreeRTOS and located the Bluetooth and USB stacks. This analysis led to the discovery of three vulnerabilities and took 20 person-days.
Exploitation
Because there were "no protections," the team easily developed two exploitation chains leveraging Bluetooth and USB. They even made the chains persistent across future firmware updates by implanting them in the bootloader.
While the attack ultimately failed at Pwn2Own because they based their work on the European version of the firmware, not the US version used in the competition, the vulnerabilities were reported and fixed by the vendor.
The successful exploit could have allowed free charging, damaged the vehicle or battery, or provided access to the home/company network, nearby Bluetooth devices, or the cloud-based vendor backend.
Impact of Layered Protection
The presentation uses the attack timeline to demonstrate how even basic software protection mechanisms significantly increase the time, cost, and expertise required to compromise a system. The absence of layered defenses allows attackers to progress rapidly from firmware acquisition to full exploitation.
(The original slide included a comparison table of protection level, estimated attack time in person-days, and resulting difficulty; the table is not reproduced here.)
Key Lessons Learned
The primary lesson from this case study is the critical importance of protecting IoT devices and their companion applications. Companion applications frequently represent the weakest link, as they expose device logic, credentials, and cloud interfaces, and often provide a direct path into private networks and backend systems.
Core Concepts in Software Protection
The objective of application protection is to preserve application behavior, ensure operational safety, and protect company revenue. This requires safeguarding sensitive assets such as credentials, cryptographic keys, configuration parameters, and proprietary algorithms.
Software protection is built on two fundamental properties:
Integrity, which ensures that the application has not been modified or tampered with. Common techniques include code integrity checks and runtime application self-protection mechanisms.
Confidentiality, which aims to conceal application logic and sensitive data through methods such as code obfuscation and white-box cryptography.
Obfuscation increases the difficulty of reverse engineering by hiding program structure, transforming control flow, and concealing constants and strings. An additional benefit of obfuscation is diversification. By protecting different builds or instances differently, an exploit developed for one binary may not apply to another, buying valuable time for remediation.
No technique alone is enough. It is important to combine and layer both software and hardware protection mechanisms. Obfuscation, runtime protection, integrity verification, and encryption must be used together.
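As a toy illustration of the integrity property described above (not any vendor's actual mechanism), a known-good digest recorded at build time can be compared against the running artifact. All names and structure here are our own sketch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Toy integrity check: compare an artifact's SHA-256 digest with a
// known-good value recorded at build time. Real RASP products layer
// many such checks and obfuscate them; this is only a sketch.
public class IntegrityCheck {

    public static String sha256Hex(byte[] data) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            return HexFormat.of().formatHex(sha256.digest(data));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 must be available", e);
        }
    }

    public static boolean verifyIntegrity(Path artifact, String expectedDigest) {
        try {
            return sha256Hex(Files.readAllBytes(artifact)).equals(expectedDigest);
        } catch (IOException e) {
            return false; // a missing or unreadable artifact fails the check
        }
    }
}
```

In a layered design, a check like this would itself be protected by obfuscation and duplicated at several places, so removing one instance does not defeat the mechanism.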
Thank you, Béatrice, for showcasing how thoughtful research and practical insight can bridge the gap between theory and real-world security challenges. Your work highlights the importance of clear thinking, strong fundamentals, and curiosity-driven exploration in advancing modern cybersecurity.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Keynote: Red Teaming in Practice with Adam Žilla (Haxoris)
True Story of a Real-World Attack Simulation
Adam Žilla, an ethical hacker, red teamer, and cybersecurity enthusiast, shared a true story of a red team operation targeting a large organization with thousands of employees that invested heavily in security, processes, and its people. The organization wanted to know how far a motivated and well-organized group of attackers could get if they were specifically targeted. The operation was a real-world attack simulation without limitations or restricted scenarios, designed to uncover technical weaknesses, test the human factor, and examine the physical perimeter.
The Red Team Operation Phases
The red team conducts the operation in a series of structured phases. While the specific techniques vary depending on the environment, the overall methodology reflects common real-world attack progression.
Open-Source Intelligence Collection
The engagement begins with open-source intelligence collection. The team gathers publicly available information related to the organization, including IP address ranges, domains and subdomains, employee locations, and the internal email address format. This information supports later technical and social engineering activities.
Virtual Perimeter Testing
The team assesses the organization’s external attack surface by scanning public-facing infrastructure, including exposed IP ranges, open ports, and running services. The assessment focuses on identifying known vulnerabilities, misconfigurations, exploitable CVEs, or outdated software.
The external perimeter proves to be well hardened. No publicly exploitable vulnerabilities are identified, and all attempts to compromise web applications and VPN gateways are unsuccessful. These findings indicate a high level of maturity in perimeter security management and patching practices.
Wireless Network Assessment
The assessment continues on-site with an evaluation of the wireless network. The organization uses WPA2 Enterprise with client certificate authentication, which prevents handshake capture and offline password cracking. All access points are fully patched, effectively mitigating rogue access point and deauthentication attacks.
The team identifies no viable wireless attack paths.
Physical Security Assessment and Rogue Device Deployment
After unsuccessful virtual and wireless attacks, the team shifts focus to physical security. Using previously collected OSINT, the team identifies a legitimate fire safety company under contract with the organization and constructs a credible false identity based on this relationship.
The team prepares branded work attire and supporting documentation to reinforce the pretext. Upon arrival, reception personnel verify the cover story and grant escorted access to the building. Once supervision lapses, the team remains unsupervised in a meeting room.
An active RJ45 network socket is discovered behind a television. The team deploys a rogue Raspberry Pi device equipped with LTE connectivity, establishing persistent remote access to the internal network. The deployment remains undetected by the organization.
Internal Phishing and Credential Acquisition
To progress further, the team requires valid domain credentials. From the implanted device, the team launches an internal phishing campaign. A legitimate internal portal is cloned and hosted on a visually similar domain that uses a subtle typographical variation.
The campaign successfully captures valid domain credentials. After verifying the credentials, the team dismantles the phishing infrastructure to reduce the likelihood of detection.
Active Directory Mapping and Domain Controller Takeover
With internal access and valid credentials, the team enumerates the Active Directory environment. Enumeration is performed manually using LDAP queries designed to resemble normal directory traffic, minimizing the risk of triggering alerts from firewalls or IDS and IPS systems.
The team identifies user and group structures and confirms the presence of an internal certificate authority.
Exploiting the Certificate Authority
Based on prior experience, the team recognizes the certificate authority as a high-value target. Using Certipy, the team enumerates the certificate authority and identifies the ESC8 vulnerability (NTLM relay against the CA's HTTP enrollment endpoints).
To exploit this weakness, the team configures ntlmrelayx as a listener and uses NetExec with a coercion module to trigger authentication from the domain controller. This process yields authentication material that allows the team to request a valid certificate issued in the name of the domain controller.
Domain Compromise
Possession of a valid domain controller certificate effectively grants full control over the Active Directory domain. With this level of access, the team can request ticket-granting tickets or perform DCSync operations to retrieve credential hashes for all domain accounts and computers.
This stage represents complete domain compromise.
Key Recommendations from the Operation
Adam Žilla offered the following key recommendations based on this red team operation:
1. Perimeter Security is Not Enough
Even the best perimeter security, like a firewall, is insufficient if an attacker can get inside the network. The speaker noted that all efforts to protect only the perimeter may be in vain. The success of an attacker is often a matter of motivation, and a well-organized group with no limitations in budget or time will eventually get inside, whether through phishing, social engineering, or a zero-day exploit in applications or a firewall.
2. The operation serves as proof that real attacks do not follow the rules of a standard penetration test, and attackers will simply get in, regardless of legal compliance. The surprising lack of security awareness inside the network, illustrated by the flat network found after bypassing the hardened external perimeter, highlights a common problem: the old habit of believing a firewall alone provides complete safety.
Thank you, Adam, for the insightful talk and valuable recommendations from the red team operation. Your expertise in cybersecurity and practical advice on enhancing defenses have been invaluable.
How to Detect Root on React Native
Need to secure your React Native app against rooted devices? Start here.
If you are deploying a React Native app on Android, you will inevitably encounter a formidable challenge: the rooted device. Root access represents a significant security vulnerability that demands a robust response. Ignoring this threat is not an option, particularly if your application handles sensitive data or performs critical business functions.
What is Root?
Rooting removes Android’s built-in restrictions and grants privileged (root) access to the device. With root access, users (or attackers) can:
Modify your app’s code or memory.
Inject malicious libraries using tools like Magisk or Xposed.
Bypass key protections such as SSL pinning.
It’s like removing the lock from your front door — anyone can walk in, change things, or steal information.
How common is rooting?
About 0.03% of Android devices are rooted. That may sound small, but with roughly three billion active Android devices it still amounts to around a million devices. If your app handles sensitive data, you can’t ignore this risk.
How to Detect a Rooted Device?
There are simple checks like:
Detection of suspicious binaries
Detection of suspicious processes
Check for elevated permissions
These may catch older roots, but they quickly become outdated. Building your own detection logic is time-consuming and hard to maintain.
While building your own solution offers control, it’s not recommended due to the time, effort, and expertise required to keep up. A better option is to use an actively maintained SDK that evolves with new attack methods.
DIY Coding Guide
You can implement simple root detection yourself like this:
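A minimal sketch of such a check, written here as native-side Java for the Android half of a React Native app (the path list and method names are illustrative, and this catches only unhidden, older roots):

```java
import java.io.File;
import java.util.List;

// Hypothetical native-side helper for a React Native Android module:
// checks well-known filesystem locations of the `su` binary.
// The path list and class/method names are ours, not freeRASP's API.
public class SuBinaryCheck {

    static final List<String> COMMON_SU_PATHS = List.of(
            "/system/bin/su",
            "/system/xbin/su",
            "/sbin/su",
            "/system/su",
            "/data/local/bin/su",
            "/data/local/xbin/su");

    // Returns true if any of the given paths exists on the filesystem.
    public static boolean isSuBinaryPresent(List<String> paths) {
        return paths.stream().anyMatch(p -> new File(p).exists());
    }

    public static boolean isSuBinaryPresent() {
        return isSuBinaryPresent(COMMON_SU_PATHS);
    }
}
```

A result like this would then be exposed to the JavaScript side through a React Native native module. Root-hiding tools such as Magisk defeat this check easily, which is exactly why the maintained-SDK route below is preferable.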
Use freeRASP (free library by Talsec)
With freeRASP, the root detection utilizes hundreds of advanced checks, offering robust detection even with hiding methods applied.
Strong detection (including Magisk 29+, Hide My Applist and Shamiko).
Add freeRASP to your project and focus on implementing the following callback:
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
Rooted devices grant attackers privileged access, allowing them to tamper with apps, inject malicious code, or bypass critical protections like SSL pinning. Detection doesn’t have to be DIY or error-prone—simple checks for su binaries or elevated permissions are easily bypassed. Tools like freeRASP provide reliable, continuously updated detection with hundreds of advanced checks, letting you respond proactively to root threats and maintain app integrity.
👉 If you want root detection plus jailbreak, Frida, emulator, debugging, screenshot, and tampering protection in one free package, start with freeRASP.
Achieving Cloudflare Outage Resilience using AppiCryptWeb
Cryptographic Runtime Attestation for Web Applications.
When the November 18, 2025, Cloudflare outage took down parts of the internet, many teams realized that a single vendor controlled both their content delivery and critical API security. If you are looking for a Cloudflare alternative or a way to harden your stack after a “Cloudflare down” incident, the strategic move is to decouple your CDN from API and bot protection, rather than simply swapping one edge provider for another.
The Problem: Cloudflare Single Point of Failure
Cloudflare’s post-mortem shows the outage was triggered by a bot-management configuration file that suddenly doubled in size, exceeded limits, and caused core proxy software to fail for traffic that depended on that module. Because this bot-management logic is embedded in the same global proxy that delivers websites and APIs, a single latent bug propagated to edge servers worldwide and resulted in HTTP 5xx errors for many customers at once.
High-profile services like X, ChatGPT, Shopify, and several transit and financial platforms were reported as partially or fully unavailable during the incident, even though many of their origin servers were fine. For anyone running revenue-critical APIs, this is a painful reminder that tightly coupling content delivery with API security effectively turns one vendor into a systemic single point of failure.
The Strategy: Decoupling API Security
Instead of asking only “What is the best Cloudflare alternative?”, a better question after this outage is “How do we make API security and bot mitigation portable across CDNs and gateways?” The idea is to treat the CDN purely as a transport and caching layer, while API security and fraud prevention are enforced by your own logic and keys, which you can run on any edge platform.
This is where AppiCryptWeb comes in.
The Solution: AppiCryptWeb
AppiCryptWeb provides a cryptographic, request-level trust signal that is independent of any particular CDN’s internal bot rules, and can be evaluated the same way on NGINX, Cloudflare Workers, AWS API Gateway, or other gateways. That means you can build a multi-CDN or multi-WAF architecture without losing a consistent security posture when traffic fails over away from Cloudflare during an outage.
What AppiCryptWeb Actually Does
AppiCryptWeb is a WebAssembly-based, in-browser security agent that attaches a signed, encrypted cryptogram to every protected API request. Each cryptogram encodes:
The integrity of the browser runtime
A privacy-preserving device identity
A caller-provided nonce
On the backend or at the edge, a lightweight validator decrypts the cryptogram using your keys, checks signatures and freshness, verifies the nonce binding, and returns a detailed risk assessment that your gateway or application can use to allow, challenge, or block the request. Because this verification logic lives in your infrastructure and uses your cryptographic material, it is not subject to Cloudflare’s internal bot-management pipelines or configuration rollouts.

Request a demo to see cryptograms in action.
Why this is a Cloudflare alternative at the API layer
AppiCryptWeb does not try to replace Cloudflare’s global network or DDoS layer, but it does act as a strong Cloudflare alternative for API and bot protection by changing where trust decisions are made. Several aspects are particularly relevant:
Per-request cryptographic proof
Every API call carries its own cryptogram, providing request-level assurance about runtime integrity, device identity, and threat state, rather than relying only on IP reputation or heuristic scoring at the edge.
Replay resistance via nonce binding
Each cryptogram is tied to a unique nonce and timestamp, providing API-level protection against replay and cross-request forgery attacks.
Privacy-preserving browser identity
AppiCryptWeb provides a stable browser fingerprint that does not rely on cookies or invasive tracking, helping distinguish genuine users from scripted bots in a privacy-respecting way.
Fraud-focused signals instead of generic heuristics
The cryptogram explicitly encodes signals relevant to fraud prevention and business logic abuse, giving your backend concrete data points to protect high-value flows.
Works across multi-CDN and multi-WAF setups
Because the validation step is portable, you can run the same enforcement logic on Cloudflare Workers, NGINX, AWS API Gateway, or other providers without changing how cryptograms are generated.
Customer-owned keys and policy
Signatures are verified on your backend or edge using a key ID (KID) that you control, shifting the blast radius of configuration mistakes away from a global outage.
Example architecture after the outage
If the goal is “Cloudflare outage resilience” plus better API security, a practical pattern looks like this:
1
Users hit https://app.example.com behind Cloudflare, Akamai, or any other CDN.
2
The app includes the AppiCryptWeb agent and uses it to attach a cryptogram to all sensitive API calls, bound to a nonce derived from the request body.
3
This decouples “who gives us packets” (CDN) from “who decides whether the browser is genuine and allowed to touch this API” (your cryptographic runtime-attestation layer).
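To sketch the nonce-binding idea from step 2 (using our own illustrative names, not AppiCryptWeb's actual API), the validator can recompute the nonce from the exact request body and compare it with the one carried inside the cryptogram:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Illustrative nonce binding: derive the nonce from the exact request
// body, so a cryptogram captured for one request cannot be replayed
// with a different payload. Class and method names are ours.
public class NonceBinding {

    // SHA-256 of the request body, hex-encoded.
    public static String bodyNonce(byte[] body) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            return HexFormat.of().formatHex(sha256.digest(body));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 must be available", e);
        }
    }

    // True only if the cryptogram's nonce matches this exact body.
    public static boolean nonceMatches(byte[] body, String cryptogramNonce) {
        return bodyNonce(body).equals(cryptogramNonce);
    }
}
```

A real deployment would combine this comparison with signature and timestamp checks before forwarding the request; the point of the sketch is only that the binding is recomputable on any gateway you control.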
How you would integrate AppiCryptWeb
Integration is intentionally minimal:
Frontend
Load a small file, initialize the agent once per app load, and call a function to fetch a cryptogram before sending sensitive API requests.
Backend/Edge
Add a single validation step that decrypts the cryptogram, checks nonce and timestamps, interprets threat flags, and then either forwards the request or responds with 401/403.
This pattern maps cleanly onto familiar infrastructure like NGINX with an auth sub-request, a Cloudflare Worker, or an AWS Lambda authorizer for API Gateway.
Key Takeaway
AppiCryptWeb is designed for organizations that run high-value web APIs and care about protection against bots, scraping, fraud, and business-logic abuse, while also wanting to reduce concentration risk around a single edge vendor.
The key conceptual shift is to:
Keep using Cloudflare (or another CDN) for global anycast, caching, TLS termination, and network-layer DDoS.
Move API trust and authorization into a cryptographic runtime-attestation layer that you own and can run anywhere.
The product page from Talsec goes deeper into the security model, deployment options, and comparison with API keys and CAPTCHAs.
Written by Efim Goncharuk — Talsec AppiCryptWeb Architect
How to Detect Hooking (Frida) using Kotlin
Stop runtime attacks before they hijack your Android app.
Hooking frameworks like Frida and Xposed are increasingly popular among attackers trying to manipulate Android apps. From bypassing in-app purchases to stealing sensitive data, hooking is a serious risk.
What is Hooking?
Hooking is the process of intercepting and modifying function calls at runtime. Attackers use frameworks such as Frida, Xposed, or LSPosed to inject custom code into your app. Popular attacker toolkits include:
frida-server – runs on the device and enables full dynamic instrumentation
Objection – built on Frida, commonly used for bypassing SSL pinning and root detection
This allows attackers to:
Bypass license checks or payments
Steal API keys and user credentials
Alter logic to gain unfair advantages in apps (e.g., games, banking apps)
Check out freeRASP and RASP+ for industry-leading hook detection.
How to Detect Hooking?
Detecting hooking is tricky because frameworks evolve fast and attackers hide their tracks. DIY detection (like searching for suspicious processes or libraries) often fails against advanced obfuscation. That’s why there are many solutions that already provide a high level of detection:
freeRASP (by Talsec) – battle-tested detection of Frida, Xposed, Magisk, and more
These solutions keep pace with new bypass techniques, saving you the burden of chasing attackers.
DIY Coding Guide
You can implement simple Frida-server detection yourself. Frida often uses ports like 27042 and 27043.
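A naive sketch of that port check, shown here in Java for brevity (the default ports are an assumption that holds only for unmodified frida-server; a renamed binary or custom port bypasses it entirely):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

// Naive DIY check: an unmodified frida-server listens on TCP 27042
// (and 27043) by default. Treat this as a first hurdle only.
public class FridaPortCheck {

    // Attempts a TCP connection; true means something is listening.
    public static boolean isPortOpen(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // connection refused or timed out
        }
    }

    public static boolean likelyFridaPresent() {
        return List.of(27042, 27043).stream()
                .anyMatch(p -> isPortOpen("127.0.0.1", p, 200));
    }
}
```

Attackers routinely start frida-server on an arbitrary port or inject the Gadget library directly, which is why layered, maintained detection like the SDK below is the safer route.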
Use freeRASP (free library by Talsec)
With freeRASP, the hook detection utilizes hundreds of advanced checks, offering robust detection even with bypass scripts applied.
Robust Kotlin SDK that detects Frida, Xposed, Magisk, root, tampering, and more
Actively updated, trusted by 6000+ apps worldwide
Simple integration with callbacks for security events
Integration Example
Add freeRASP to your project and focus on implementing the following callback:
Key Takeaway
Hooking is one of the most dangerous runtime threats to Android apps. Attackers armed with Frida, objection, or frida-trace can change your app’s behavior in seconds—but you can fight back. With freeRASP by Talsec, Kotlin developers get a reliable, lightweight, and continuously updated way to detect and stop hooking attacks before they cause damage.
How to Detect Hooking (Frida) using Swift
Protect your iOS app from runtime manipulation with Frida detection.
iOS apps are prime targets for hooking frameworks like Frida. Attackers use them to bypass protections, steal data, or alter app logic. Fortunately, SDKs like freeRASP by Talsec give Swift developers a simple way to detect and stop these attacks.
What is Hooking?
Hooking is when attackers intercept and modify method calls at runtime. On iOS, this is often achieved through:
Frida-server – enabling dynamic instrumentation on jailbroken devices
Objection – built on Frida, frequently used to bypass SSL pinning or jailbreak detection
frida-trace – helps attackers log and manipulate API calls
With these tools, attackers can:
Bypass payment checks or subscriptions
Steal credentials, tokens, or API keys
Inject malicious logic into sensitive apps (banking, healthcare, messaging)
Imagine someone secretly attaching a device to your phone line—every call you make could be recorded, redirected, or modified in real time. That’s how hooking works in your app.
Check out freeRASP and RASP+ for industry-leading hook detection.
How to Detect Hooking?
Detection on iOS is complex. Frida developers continuously update their frameworks to evade naive checks. Simple DIY solutions like searching for frida-server processes or suspicious ports often fail. That’s why expert SDKs are the safer choice:
These SDKs evolve alongside attacker techniques, giving you peace of mind.
freeRASP (by Talsec) – free SDK to detect jailbreak, debugger, tampering, and hook attempts
Comes with checks for root, debugger, hooking (Frida, Xposed), emulators, and more
Trusted by 6000+ apps worldwide
Swift Example:
Key Takeaway
On iOS, attackers equipped with Frida, objection, or frida-trace can hijack your app’s logic at runtime. DIY detection is fragile—serious apps need serious protection. With freeRASP by Talsec, Swift developers get a lightweight, continuously updated SDK to block hooking and keep users safe.
Introduction
Discover curated AppSec articles, guides, and research on mobile app and API security, covering rooting, hooking, Flutter security, RASP, AppiCrypt, and practical threat detection techniques.
Featured AppSec Collections
Mobile and API Threat Detection & Defense (Rooting, Hooking, Reverse Engineering)
How to Prevent Magisk Root Hiding and Security Bypass
In the world of mobile app security, the cat-and-mouse game between developers and malicious actors is relentless. One of the most significant threats is the rooted device, which gives users privileged control over the operating system. This access can be used to bypass security controls, tamper with app data, and reverse-engineer your application.
The most sophisticated tool in this arena is Magisk, a "systemless" rooting utility. One of its most potent features is Magisk Hide, which is specifically designed to conceal the rooted status of a device from detection.
This article explains why simple root detection methods fail against Magisk and presents a robust, professional solution to secure your application.
How to Detect VPN using Swift
Struggling to protect your app from hidden network traffic? Here’s how to fight back.
VPNs are widely used for privacy, but they can also be exploited to bypass geo-restrictions, manipulate in-app content, or hide fraudulent activity. Detecting VPN usage in your iOS app is challenging, but there are solutions which make it practical and reliable.
What is VPN?
A VPN (Virtual Private Network) encrypts traffic and routes it through remote servers. While this protects privacy, it can also help attackers:
How to Detect App Tampering & Repackaging using Kotlin
Don’t let attackers clone and modify your Android app; fight back with runtime protection.
App tampering and repackaging are silent killers of mobile apps. Attackers can modify your APK, inject malicious code, and redistribute it as if it were yours. Luckily, there are solutions which make detecting tampering in Kotlin-based apps simple and reliable.
What is App Tampering & Repackaging?
App tampering occurs when attackers alter your APK’s code, assets, or configuration without authorization. Once modified, they “repackage” the app into a new APK and distribute it, often spreading malware or tricking users into installing a counterfeit version.
Introducing the Talsec Portal: A New Way to Monitor Your App — Try It Now!
Benchmark your app’s security against global standards, understand your current posture, and uncover live threats—all in one place.
Talsec freeRASP and RASP+ have been a game-changer for many of you, offering robust threat detection and protection. Advanced security insights, however, have long been one of your most requested features. Until now, security telemetry was a reactive process, relying on periodic PDF reports in your mailboxes that provided a delayed snapshot of your app's security vitals. Encouraged by our strong community, we recognized the limitations of this approach and decided it was time for a change.
We are thrilled to introduce Talsec Portal, a comprehensive platform designed to provide interactive charts, readable security intelligence (such as data on rooted devices, fraudulent app clones, and unofficial store installations), and detailed security data for all your applications in one centralized place. The portal allows you to interact with the data yourselves, benchmark your app against global statistics, and use these insights to decide which protections to apply based on your specific threat landscape.
How Does Root Detection Work?
Root detection employs multiple methodologies, often in combination, to improve reliability. Below, we break down the key techniques:
1. Static Analysis
Static analysis involves checking the device’s filesystem and configuration for known indicators of root access without executing code that requires root. These checks look for static artifacts left behind by rooting. Key static analysis methods include:
public class UserSessionManager {
    private String loggedInUsername;
    private boolean isLoggedIn;

    public boolean authenticateUser(String username, String password) {
        // Authentication logic
    }

    public String getLoggedInUsername() {
        return loggedInUsername;
    }
}
After identifier obfuscation, the same class might look like:
public class a {
    private String b;
    private boolean c;

    public boolean d(String e, String f) {
        // Authentication logic
    }

    public String g() {
        return b;
    }
}
String apiKey = "YOUR_SUPER_SECRET_API_KEY";
String apiUrl = "https://api.example.com/data";
After string obfuscation, this might look like:
Java
String apiKey = new String(Base64.getDecoder().decode("WU9VUl9TVVBFUl9TRUNSRVRfQVBJX0tFWQ=="));
String apiUrl = new String(Base64.getDecoder().decode("aHR0cHM6Ly9hcGkuZXhhbXBsZS5jb20vZGF0YQ=="));
import 'dart:io';

Future<bool> detectSuBinary() async {
  // Common paths where the 'su' binary may exist on rooted devices
  final suPaths = [
    '/system/bin/su',
    '/system/xbin/su',
    '/sbin/su',
    '/system/su',
    '/system/bin/.ext/su',
    '/system/usr/we-need-root/su',
    '/system/app/Superuser.apk',
  ];
  for (var path in suPaths) {
    try {
      final file = File(path);
      if (await file.exists()) {
        print("Potential root detected: su binary found at $path");
        return true;
      }
    } catch (_) {
      // Ignore errors for inaccessible paths
    }
  }
  return false;
}
Rooting typically installs certain files not found on stock devices. For example, the presence of a superuser (su) binary (often in paths like /system/bin/su or /system/xbin/su) is a strong indicator of root.
Identifying modifications in system partitions
Rooting usually requires altering the system partition or boot image. Static checks therefore inspect system properties and configuration for unusual values.
Detecting installed applications used for rooting
Many users install management apps after rooting to control superuser access. Static analysis can check the list of installed packages for names of known root apps.
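As an illustrative sketch, this package check reduces to a lookup against known root-manager package names. The names below are real, well-known examples; on Android the installed list would come from PackageManager, which is mocked here with a plain list:

```java
import java.util.List;
import java.util.Set;

public class RootPackageCheck {
    // Package names of well-known root-management apps (illustrative subset).
    private static final Set<String> ROOT_PACKAGES = Set.of(
        "com.topjohnwu.magisk",      // Magisk
        "eu.chainfire.supersu",      // SuperSU
        "com.koushikdutta.superuser" // Superuser
    );

    // Returns true if any installed package matches a known root manager.
    // On Android, build installedPackages from PackageManager.getInstalledPackages().
    public static boolean containsRootPackage(List<String> installedPackages) {
        for (String pkg : installedPackages) {
            if (ROOT_PACKAGES.contains(pkg)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(containsRootPackage(
            List.of("com.android.chrome", "com.topjohnwu.magisk")));
    }
}
```

Keep in mind this is exactly the kind of static check that Magisk's hiding features can defeat, so it should only ever be one signal among several.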
Static analysis is quick and straightforward, but by itself it can be bypassed (attackers might remove or hide these indicators). Therefore, apps often complement it with dynamic and behavioral checks.
2. Dynamic Analysis
Dynamic analysis techniques involve observing the device’s behavior at runtime and performing tests that can reveal elevated privileges. Instead of just looking for files, the app actively probes the system for root-only capabilities or anomalies. Key dynamic checks include:
Monitoring runtime behavior for signs of elevated privileges
One common approach is to attempt operations that should fail on an unrooted device but would succeed with root. For example, the app might try to execute a shell command that requires root access (such as invoking the su binary). On a non-rooted device, this either won’t execute or will prompt a failure, whereas on a rooted device the command may execute and return a root shell.
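A minimal sketch of this probe, assuming plain Runtime.exec and the convention that a root shell's `id` output begins with `uid=0(`; the class and method names are illustrative, not any SDK's API:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class SuExecCheck {
    // Pure helper: the `id` output of a root shell starts with "uid=0(".
    public static boolean outputIndicatesRoot(String idOutput) {
        return idOutput != null && idOutput.trim().startsWith("uid=0(");
    }

    // Try to run `su -c id`. On a non-rooted device this throws, hangs on a
    // prompt, or returns a non-root uid; on a rooted device it may succeed.
    public static boolean canGetRootShell() {
        try {
            Process p = Runtime.getRuntime().exec(new String[] {"su", "-c", "id"});
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                return outputIndicatesRoot(r.readLine());
            }
        } catch (Exception e) {
            return false; // su binary absent or not executable
        }
    }

    public static void main(String[] args) {
        System.out.println("Root shell available: " + canGetRootShell());
    }
}
```

Separating the output-parsing logic from the process execution keeps the detection rule itself unit-testable without a rooted device.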
Intercepting or invoking API calls that reveal system modifications
Some root detection libraries inspect system APIs for abnormal responses that indicate tampering.
Checking process and memory modifications
More advanced dynamic analysis monitors the app’s own process and the system processes for signs of tampering. Root access often comes hand-in-hand with tools that can inject code or manipulate memory.
Dynamic analysis adds another layer of defense, because even if an attacker hides files, the act of using root often leaves some trace in behavior or system state. However, sophisticated root hiding tools aim to also neutralize these checks, leading to the need for behavioral analysis.
3. Behavioral Analysis
Behavioral analysis refers to monitoring the device or app for patterns and actions that are unusual in a non-rooted environment. Instead of specific file or API checks, this involves a broader observation of how the device and apps operate, which can indirectly signal that root access is present or being concealed. This approach is more heuristic and looks at the context of the device’s operation:
Monitoring system-wide behavior for root-only activity
Some security solutions keep an eye on system-wide behavior that would only occur on a rooted device, especially one using root-hiding measures. For example, on a secure device certain directories and settings are off-limits — if the app notices those being accessed or changed, it's suspicious.
Analysing app permission escalations beyond normal user privileges
Apps on a rooted device can sometimes do things that should normally require special permissions or not be possible at all. A detection system might track if any app (or the OS itself) has granted itself abilities beyond the standard Android permission model.
No protection against advanced hiders like Shamiko
Provides detailed threat logging for analytics
Play Integrity (free, Google Play ecosystem, backend-dependent)
Pros:
- Determines whether the user installed or paid for your app or game on Google Play
- Determines whether your app is running on a genuine Android device powered by Google Play services
- Automatic security updates
Cons:
- Dependent on an external web service with rate limits (10k requests/day)
- Commonly known bypass techniques
- Limited to the Google Play ecosystem, missing non-Play Store threats
How to Achieve Root-Like Control Without Rooting: Shizuku's Perils & Talsec's Root Detection
How to Prevent Magisk Root Hiding and Security Bypass
freeRASP for Kotlin Multiplatform Guide
freeRASP for Unity: Android Integration Guide
freeRASP for Unreal Engine: Secure Your Revenue
Introducing Multi-Instancing Detection for freeRASP
How to Detect a Weak Wi-Fi: Guide to In-App Network Security Checks
A timestamp
Threat flags like tampering, bot automation, or suspicious execution state.
The first edge hop (NGINX, API Gateway, or Cloudflare Worker) acts purely as an AppiCrypt authorizer: decrypt cryptogram, check policy, compare nonce, and if overallAssessment is OK, forward to the upstream, otherwise reply with 401/403.
Because the authorizer is your code and uses your keys, you can run the same logic in another region, another CDN, or even directly on a dedicated WAF if the current edge provider is having a bad day.
Many sites were unreachable even though the origin servers were up and running.
Architectural Overview of AppiCryptWeb
AppiCrypt Checks Overview in our demo.
Challenges in Hook Detection
Detecting hooks is far from simple. It’s often described as a cat-and-mouse game between app defenders and attackers. Here are some key challenges in implementing effective hook detection:
Evasion by Attackers: As developers add new detection techniques, attackers find ways to evade them. For example, if an app scans for the string “Frida” in process names or memory, an attacker might use a modified version of Frida that changes those identifiers (renaming processes, using custom payloads without the word “Frida”). In fact, security researchers have noted that attackers frequently modify tools like Frida to evade detection by apps. This means an app that only checks for the stock version of a tool might miss a tweaked version. The cat-and-mouse dynamic is continuous: Defenders introduce new checks (scanning memory, enumerating libraries, tracking suspicious threads) and attackers respond with obfuscation, custom hooks, and runtime manipulation. It’s a constantly shifting battle, requiring app developers to stay updated on the latest attack techniques.
False Positives and Compatibility: The wide variety of Android devices, OS versions, and even custom ROMs means that some detection methods can mistakenly flag benign situations as hostile. A check that works on one device might misidentify legit behavior on another. For instance, certain pre-installed system apps or debugging services on custom Android ROMs might look like hooking tools to a simple scanner. On iOS, jailbreak detection code might occasionally misfire due to some obscure system configuration. This is a challenge: if an app is too paranoid, it might lock out innocent users (false positives), hurting user experience. Tuning the detection to be accurate without hampering legitimate usage is tricky.
Performance Overhead: Thorough hook detection can be resource-intensive. Continuously verifying memory integrity or scanning for anomalies can slow down an app and drain battery. Users expect apps to be fast and smooth – heavy-handed security checks that make the app lag will frustrate users. For example, performing deep memory scans repeatedly could make an app stutter. Developers must balance security with performance, perhaps checking only at strategic times (like app startup or before sensitive operations) rather than constantly. Still, the more lightweight the detection, the less it might catch; the more heavy-duty, the more it could impact performance. Striking the right balance is an ongoing challenge.
Attacker Interference with Detection: Ironically, an advanced attacker who knows an app has hook detection might try to hook the detection code itself. This is a kind of meta-attack: use hooking to disable or manipulate the very mechanisms meant to catch hooking. For example, if an app function is responsible for checking integrity, an attacker could hook that function and force it to always report “all clear.” This is particularly a risk if the app’s detection code is not well protected (which is why code obfuscation and other techniques are recommended, as we’ll discuss in best practices). Essentially, if the attacker gets even a small foothold, they may target the detection to blind the app. Building detection that’s hard to bypass even if partially subverted is a complex task.
In summary, while hook detection is essential, it’s not a one-and-done deal. Developers must be vigilant and adaptive. They have to anticipate that attackers are actively finding ways around whatever defenses they put in place. Despite these challenges, there are known best practices that can significantly strengthen an app’s resilience to hooking, which we’ll explore next.
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
- An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
Human factors and the human mind are as critical as technology and systems, as they can be exploited. For attackers, this is often the easiest route, as it requires no technological knowledge or firewall bypass; they simply use social engineering tactics like a phone call or a phishing email.
Focus Beyond the Perimeter
Companies should not only focus on the perimeter or endpoint detection and response (EDR) applications but also strengthen physical security and review visitor processes. The speaker suggests implementing processes, such as a centralized calendar, so someone knows if a specialist like a fire safety technician is scheduled to arrive, to prevent unexpected visits. Furthermore, regularly auditing all vectors (like firewalls, outdated Nginx servers, and source code), patching vulnerabilities, refactoring code, and raising cybersecurity awareness among employees is a good practice.
Technical articles focused on advanced strategies to detect and defend against mobile threats, including rooting, hooking, reverse engineering, and API abuse.
Talsec RASP+, AppiCrypt and freeRASP Guides and Features
This collection highlights cutting-edge tools and resources from Talsec designed to secure mobile apps through runtime application self-protection (RASP), API integrity checks, and anti-abuse measures.
At Talsec, we’re proud to lead the way as the #1 Flutter Security SDK, and our commitment to this growing framework runs deep. This curated collection showcases our ongoing efforts to protect Flutter apps.
Articles by our team members and guest experts (become one of them) that explore practical mobile security and threat defense topics for the developer community.
The Flaw in Simple Root Detection
Many developers start by implementing basic root detection checks. As detailed in previous Talsec articles, these common methods include:
Checking for the su binary
This is the traditional superuser binary.
Looking for known root packages
Searching for apps like eu.chainfire.supersu or com.topjohnwu.magisk.
Checking build properties
Looking for "test-keys" in the build tags, which often indicates a custom or non-production ROM.
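The build-tags check from the list above reduces to a one-line predicate. On Android the input would be android.os.Build.TAGS; the class name here is illustrative:

```java
public class BuildTagsCheck {
    // "test-keys" in the build tags usually indicates a custom or
    // non-production ROM; stock devices are signed with "release-keys".
    // On Android, pass android.os.Build.TAGS as the argument.
    public static boolean isTestKeysBuild(String buildTags) {
        return buildTags != null && buildTags.contains("test-keys");
    }

    public static void main(String[] args) {
        System.out.println(isTestKeysBuild("release-keys")); // stock device
        System.out.println(isTestKeysBuild("test-keys"));    // custom ROM
    }
}
```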
Here's the catch: Magisk is designed to defeat every single one of these checks.
Magisk operates "systemlessly," meaning it doesn't modify the core /system partition. Instead, it hooks into the boot process. When an app tries to check for su or other root indicators, Magisk intercepts the request and returns a "false negative," reporting that the device is not rooted. This makes simple, self-implemented checks dangerously unreliable and gives a false sense of security.
A Robust Solution: Talsec freeRASP
To effectively counter modern threats like Magisk, you need a specialized, multi-layered security solution. This is where Talsec's freeRASP and RASP+ SDKs come in.
Talsec RASP (Runtime Application Self-Protection) is a security SDK designed to protect mobile applications at runtime. It goes far beyond simple file checks and uses a variety of obfuscated and advanced techniques to detect threats, even those actively trying to hide.
Key features of freeRASP include:
Advanced Root Detection
Capable of identifying sophisticated root-hiding frameworks like Magisk.
Emulator Detection
Detects if the app is running in an emulator or simulator, a common environment for attackers.
By integrating Talsec SDK, you offload the complex work of runtime security to a dedicated team of experts, allowing you to focus on your app's features.
How to Implement freeRASP in Your Android App
Integrating freeRASP is straightforward. Follow these steps based on the official documentation.
Step 1: Add the Repository
In your settings.gradle(.kts) file, add the Talsec Artifactory repository:
Kotlin
Step 2: Add the freeRASP Dependency
In your app-level build.gradle(.kts) file, add the freerasp dependency:
Kotlin
Step 3: Initialize Talsec
The best place to initialize Talsec is in your custom Application class.
Kotlin
With this simple integration, your application is now actively monitored for rooting, debugging, and other runtime threats.
For High-Stakes Applications: RASP+
freeRASP provides an excellent layer of foundational security for any application. However, applications in high-risk sectors like finance, healthcare, and e-commerce often face more advanced attacks, including:
Dynamic Instrumentation: Using tools like Frida or Xposed to hook into your app's code at runtime.
Reverse Engineering: Advanced static and dynamic analysis to steal algorithms or sensitive keys.
App Repackaging: Modifying your app and republishing it with malicious code.
For these threats, Talsec offers RASP+. This is an enterprise-grade solution that provides real-time threat intelligence, advanced protection against instrumentation, and dedicated professional support to harden your app's security posture.
Conclusion
Relying on simple su checks is no longer a viable security strategy against tools as sophisticated as Magisk. To truly protect your users and your data, you must adopt a modern RASP solution.
Start today with freeRASP to get immediate, robust protection against common runtime threats.
When your security needs grow, or if you are in a high-stakes industry, RASP+ offers the comprehensive protection your business requires.
To understand the full range of features and determine the right level of protection for your application, we highly recommend visiting Talsec's Plans Comparison page.
Bypass geo-restrictions (e.g., accessing services from unsupported countries)
Hide malicious activity like bot traffic or credential stuffing
Exfiltrate sensitive data undetected
Attackers often use common VPN apps (NordVPN, ExpressVPN, ProtonVPN) or system-level tunnels to disguise their actions. From a security perspective, detecting VPN usage is like knowing if a user is “wearing a mask”.
Note that VPN usage does not automatically imply a threat.
How to Detect VPN Usage?
Detecting VPNs isn’t trivial—many providers change IPs, use stealth protocols, or blend with normal traffic. DIY solutions (like hardcoding VPN IP ranges) are unreliable and outdated quickly.
Instead, use expert SDKs that:
Actively monitor for VPN interfaces and tunnels
Stay updated against new evasion techniques
Provide callbacks so your app can respond instantly
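As a DIY illustration of the first point, one simple signal is an active tunnel network interface (tun*/ppp*). The original article targets Swift, but the principle is platform-independent; below is a minimal Java/JVM sketch, where the interface-name prefixes are heuristic assumptions:

```java
import java.net.NetworkInterface;
import java.util.Collections;

public class VpnInterfaceCheck {
    // Typical interface-name prefixes used by VPN tunnels (heuristic).
    public static boolean isTunnelName(String name) {
        return name != null
            && (name.startsWith("tun") || name.startsWith("ppp") || name.startsWith("pptp"));
    }

    // Returns true if any interface that is currently up looks like a tunnel.
    public static boolean vpnInterfacePresent() {
        try {
            for (NetworkInterface nif :
                    Collections.list(NetworkInterface.getNetworkInterfaces())) {
                if (nif.isUp() && isTunnelName(nif.getName())) {
                    return true;
                }
            }
        } catch (Exception ignored) {
            // Treat enumeration failure as "unknown", not as VPN present.
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("VPN tunnel interface present: " + vpnInterfacePresent());
    }
}
```

This illustrates why DIY checks age badly: stealth protocols and provider-specific interface names will slip past a fixed prefix list, which is exactly the maintenance burden the SDKs above take off your hands.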
Popular Libraries for VPN Detection
freeRASP (by Talsec)
The most robust, developer-friendly and free choice for iOS.
Aside from VPN detection, it also contains additional security checks
Enterprise-grade checks
Might be expensive for small apps
Integration Example:
Comparison Table

| Feature          | freeRASP | Malwarelytics |
|------------------|----------|---------------|
| Works Offline    | Yes      | Yes           |
| Easy Integration | Yes      | Yes           |
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
VPN usage can bypass app restrictions and pose security risks, but detection doesn’t have to be DIY or error-prone. Tools like freeRASP provide reliable, continuously updated detection, letting you respond proactively to potential threats.
👉 If you want VPN detection plus root, Frida, emulator, and tampering protection in one free package, start with freeRASP by Talsec.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
- An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Real-world examples include:
Fake banking apps stealing credentials.
Modified games with cheat engines or hidden malware.
Apps stripped of ads, in-app/subscription purchases, or security checks.
Think of it like someone copying your book, rewriting a few chapters, and publishing it under your name. Only this time, it’s malicious software.
Statistics
Our data shows that around 0.08% of devices have breached app integrity.
Global Threat Rate for Tampering (source: my.talsec.app)
More current global data can be found at the Talsec portal.
How to Detect App Tampering?
Detecting tampering isn’t just about checking the APK’s checksum once — attackers can bypass simple checks. Detection must be ongoing, multi-layered, and resistant to bypasses.
Manual or DIY solutions (like hardcoding hash checks) quickly become outdated. Instead, developers rely on expert-maintained SDKs that:
Verify APK integrity at runtime.
Detect manifest modifications and signature mismatches.
Prevent repackaged versions from running.
DIY Coding Guide
You can implement a simple integrity check yourself, like this:
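A minimal sketch of such a check pins a SHA-256 digest of the app's signing certificate and compares it at runtime. On Android the certificate bytes would come from PackageManager's signing info; the class, method names, and Base64 digest format below are illustrative choices, not a specific Talsec API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class IntegrityCheck {
    // Base64-encoded SHA-256 digest of arbitrary bytes (on Android: the bytes
    // of your APK signing certificate, obtained via PackageManager).
    public static String sha256Base64(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return Base64.getEncoder().encodeToString(md.digest(data));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Compare the runtime digest against the value pinned at build time.
    // A mismatch means the app was re-signed, i.e. repackaged.
    public static boolean matchesExpected(byte[] certBytes, String expectedDigest) {
        return sha256Base64(certBytes).equals(expectedDigest);
    }

    public static void main(String[] args) {
        byte[] fakeCert = "example-cert".getBytes(StandardCharsets.UTF_8);
        System.out.println(sha256Base64(fakeCert));
    }
}
```

Remember the caveat from above: a hardcoded check like this is itself a patch target for attackers, which is why runtime-protection SDKs re-verify continuously and obfuscate the check.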
Use freeRASP (free library by Talsec)
Talsec provides a universal solution that covers many of your app security needs:
Comes with 14 extra detections like root/jailbreak detection, Frida and hooking, emulators, debugging, screenshots, etc.
Used by 6,000+ apps; the #1 Mobile RASP SDK by popularity.
Integration is straightforward and callback-based, allowing for simple and readable implementation of protection.
Integration Example:
Key Takeaway
App tampering and repackaging let attackers modify your APK, strip protections, or spread counterfeit versions that steal data or revenue. Detection doesn’t have to be DIY or error-prone—simple checksum checks are easily bypassed. Tools like freeRASP provide reliable, continuously updated runtime protection with strong tamper detection and 14+ extra checks, letting you respond proactively to integrity breaches.
👉 If you want tampering detection plus root, jailbreak, Frida, emulator, debugging, screenshot, and malware protection in one free package, start with freeRASP.
The Talsec Portal is a dynamic web platform that transforms raw security data from all your applications into actionable intelligence, empowering you to move from passive observation to active defense. It’s not just a dashboard; it’s a command center for your mobile app security that equips your team with several powerful capabilities.
Real-Time Threat Monitoring
Gain immediate and continuous insight into the security events affecting your application. Our intuitive dashboard allows you to track the count, type, and frequency of threats over time, including critical intentional attacks like tampering, reverse-engineering (debugging and hooks), and emulator usage. You can identify anomalies as they happen and respond before they escalate into major incidents.
Deep-Dive Analytics and Incident Investigation
Go beyond high-level numbers. The Talsec Portal allows you to investigate specific security events with granular detail. Filter data by app version, operating system, device type, and geographic location. Understand exactly which parts of your user base are being targeted and how, enabling a precise and effective response.
Global Benchmarking
Context is everything. How does your app's security posture compare to the global average? The Talsec Portal allows you to benchmark your app against anonymized global statistics. This helps you understand whether the threats you're seeing are unique to your app or part of a wider trend, allowing you to allocate resources and prioritize your security efforts more effectively.
Information and Education Hub
The security landscape is constantly evolving. The Talsec Portal includes a dedicated section where you can access the latest articles, best practices, and technical documentation from our security experts. Stay informed about emerging threats and learn how to implement the most robust defense strategies, turning security into a continuous learning process for your team.
The Future of App Security is Proactive
The Talsec Portal marks a definitive shift from periodic, passive reporting to continuous, active defense. It provides the tools and insights you need to protect your applications, your data, and your users effectively. Stop waiting for last week's news and start securing your app in real-time. Empower your team with the visibility and control they need to stay one step ahead of the attackers.
Bot traffic now accounts for nearly half of all internet requests. The default answer - CAPTCHAs - is failing. Here’s a fundamentally different approach.
Want to see it in action? Try the live demo yourself. Open it in a normal browser, then try it with Playwright or Puppeteer and see what happens.
The CAPTCHA problem
CAPTCHAs were designed around a simple assumption: tasks that are easy for humans and hard for machines. That assumption no longer holds.
AI solves them. GPT-4V, Gemini, and purpose-built CAPTCHA-solving models break image and audio challenges with accuracy rates above 90%. Services like 2Captcha and Anti-Captcha offer API-based solving at fractions of a cent per challenge.
Click-farms bypass them. Cheap human labor in CAPTCHA farms solves challenges in bulk. The “human verification” literally uses humans — just not your users.
Users hate them. Every CAPTCHA is friction. Studies consistently show that CAPTCHAs reduce conversion rates, hurt accessibility, and frustrate legitimate users. You’re punishing real customers to slow down attackers who have already found workarounds.
The fundamental issue: CAPTCHAs challenge the user. But the user isn’t the problem - the runtime environment is.
A different question
Instead of asking “Are you human?” you can ask a better question:
“Did this request come from a real, untampered browser?”
This is the idea behind runtime attestation. Rather than interrupting the user with a puzzle, you silently inspect the environment where the request originates. A real browser running on a real device has properties that are structurally difficult to fake - especially when the inspection happens inside an isolated execution context that the attacker cannot easily observe or manipulate.
What runtime attestation looks like in practice
AppiCryptWeb takes this approach. You add a lightweight SDK to your web app - a JavaScript + WebAssembly bundle that runs inside your page, invisible to the user. On every request, it evaluates two layers of signals:
Environment signals
Properties of the runtime that automation tools leave behind. Headless browsers, WebDriver-based frameworks, and scripted environments expose dozens of telltale signs: missing APIs, inconsistent browser objects, absent plugins, zero-size viewports. Individually, each signal is easy to spoof. Combined and evaluated inside an isolated WebAssembly context - where the attacker can’t observe or patch the checks - they become much harder to defeat.
Behavioral signals
How the user actually interacts with the page. Real humans produce messy, variable input: imprecise mouse paths, irregular typing rhythms, natural scroll patterns. Automation tools produce synthetic events with uniform timing and mechanical precision. The SDK analyzes mouse movement, keystrokes, scrolling, and clicks over time, building a behavioral profile that can’t be faked by dispatching a few synthetic events.
The result is a cryptographically signed, encrypted token - a cryptogram - attached to each API request. Your backend validates it. If the token is missing, invalid, or indicates a bot - the request is rejected. No puzzles, no friction, nothing visible to the user.
Why this is hard to bypass
The typical arms race with bot detection goes like this: you add a check, the attacker patches it, you add another check, and so on. Runtime attestation changes the dynamic in a few ways:
Checks run inside WebAssembly. Unlike JavaScript-based detection, the logic runs in compiled WASM modules. An attacker can’t set breakpoints, can’t monkey-patch the functions, and can’t inspect what’s being evaluated. By the time any result touches JavaScript, it’s already encrypted.
Multiple signal layers. Even if a stealth plugin patches the obvious environment signals (like navigator.webdriver), behavioral analysis provides a second, independent layer. Fooling both simultaneously is a much harder problem than fooling either one alone.
Cryptographic binding. Each token is bound to a specific request body and timestamp. Tokens can’t be replayed, can’t be reused across requests, and can’t be forged without the signing key embedded in the WASM. An attacker who intercepts a valid token still can’t use it for a different request.
No cryptogram, no access. A curl request, a Python script, or any call that doesn’t come from a browser running the SDK simply won’t have a token. The request is rejected at the edge before it reaches your application.
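To make the cryptographic binding described above concrete, here is a minimal, hypothetical sketch in Java of what a validator for a request-bound token could look like. This is not AppiCryptWeb's actual cryptogram format or API (both are proprietary); the HMAC construction, token layout, and all names below are illustrative assumptions only.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

class TokenBindingSketch {
    // Illustrative token layout: base64(HMAC-SHA256(key, timestamp + "." + body)) + "." + timestamp.
    // In a real system the signing key would never be exposed to the caller like this.
    static String issue(byte[] key, String requestBody, long timestampMillis) {
        byte[] sig = mac(key, timestampMillis + "." + requestBody);
        return Base64.getEncoder().encodeToString(sig) + "." + timestampMillis;
    }

    // A token is valid only for the exact body it was issued for,
    // and only within the freshness window (which blocks replays).
    static boolean validate(byte[] key, String token, String requestBody,
                            long nowMillis, long maxAgeMillis) {
        try {
            int dot = token.lastIndexOf('.');
            long ts = Long.parseLong(token.substring(dot + 1));
            if (nowMillis - ts > maxAgeMillis) return false; // stale => possible replay
            byte[] expected = mac(key, ts + "." + requestBody);
            byte[] given = Base64.getDecoder().decode(token.substring(0, dot));
            return MessageDigest.isEqual(expected, given); // constant-time comparison
        } catch (Exception e) {
            return false; // malformed token
        }
    }

    private static byte[] mac(byte[] key, String data) {
        try {
            Mac m = Mac.getInstance("HmacSHA256");
            m.init(new SecretKeySpec(key, "HmacSHA256"));
            return m.doFinal(data.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Note the use of `MessageDigest.isEqual` rather than `equals`: a naive byte-by-byte comparison that exits early can leak timing information to an attacker probing forged tokens.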
The stealth plugin problem
A common objection:
“What about puppeteer-extra-plugin-stealth? It patches all the known bot signals.”
Stealth plugins are good at making headless browsers look like real browsers to JavaScript-based detection. They override navigator.webdriver, fake the plugin array, spoof window.chrome, and more. Against a checklist of environment signals evaluated in JavaScript, they work.
Against runtime attestation, they face two problems:
The checks they can’t see.
When the detection logic is inside Wasm and the results are encrypted, the stealth plugin doesn’t know which checks exist, what they evaluate, or what the results are. It’s patching a surface it can only guess at.
Behavior can’t be faked with property overrides.
Stealth plugins don’t generate realistic mouse movement, natural typing rhythms, or human-like scroll patterns. They make the environment look right while the behavior remains mechanical. Behavioral analysis catches what stealth plugins don’t address.
What about AI agents?
A newer challenge: AI agents that browse with real Chromium instances — tools like Anthropic’s Computer Use, browser-use, and OpenAI Operator. These aren’t headless scripts; they control actual browser windows.
They still have to produce input events, and those events still have automation characteristics. Whether AppiCryptWeb catches them consistently is an active area of testing. Early results are promising, but we won’t claim coverage we haven’t validated. If this is your concern, we can run detection tests against your specific threat model.
What this doesn’t do
Transparency matters:
Not a WAF. It doesn’t inspect request payloads for injection attacks. Use it alongside your existing security stack.
Not a rate limiter. It tells you whether a request came from a real browser, not how many requests to allow.
Not infallible. A sufficiently resourced attacker will always find new angles. The goal is to make automated abuse structurally expensive — not theoretically impossible.
Getting started
Add the SDK to your frontend
It’s a lightweight JavaScript + WebAssembly bundle. Two function calls: one to initialize, one per request to get a token.
Validate on the backend
Use the provided validator library or a ready-made edge adapter for Nginx, Cloudflare Workers, AWS Lambda, Azure, or GCP.
Decide your policy
Reject bots outright, flag for review, or apply step-up verification. The cryptogram gives you the signal; what you do with it is up to you.
No UI changes. No user friction. No CAPTCHAs.
AppiCryptWeb is built by . If you want to try it against your own automation tests, check out the .
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
How to Detect Jailbreak on Capacitor
Protect your Capacitor app from compromised iOS environments with smart detection.
Imagine you built a high-security facility, but one of your users decided to remove all the doors and disable the alarm system because they wanted "full control" over the building. That is essentially what a Jailbreak does to an iOS device.
What is Jailbreak?
Jailbreaking is the process of unlocking an iOS device to remove Apple's built-in restrictions. Much like rooting on Android, it gives users full administrative (root) access. This allows for the installation of apps outside the App Store and deep customization of system settings. Popular tools used to achieve this include checkra1n, unc0ver, palera1n, or Dopamine.
A jailbroken environment is a critical security risk. It removes the OS sandbox, allowing malicious actors (or even just buggy tweaks) to access your app's private data, Keychain items, and internal logic.
On a jailbroken device, attackers can:
Inject malicious code into your app.
Steal sensitive user data (tokens, stored credentials).
Disable or bypass security controls inside the app.
How to Detect Jailbreak?
You can either implement your own jailbreak detection logic or use a dedicated, specialized security SDK.
Building your own solution gives you full control over what you check and how you integrate it into your app. However, modern mobile environments are complex, and attackers increasingly use advanced hooking and masking techniques that can make straightforward checks less reliable.
Security SDKs address this by combining multiple detection signals, maintaining broader coverage, and continuously adapting to new techniques. As a result, many teams choose a specialized SDK to reduce maintenance effort and ensure more consistent, robust detection across a wide range of scenarios.
DIY Coding Guide
The most common "DIY" way to detect a jailbreak is to look for specific files and directories known to be created by jailbreak tools (Cydia, Unc0ver, Checkra1n).
Prerequisites: You will need a library to access the file system. In Capacitor, @capacitor/filesystem is the standard choice.
You can create a utility function that iterates through a list of "suspicious" paths. If any of them exist, the device is likely jailbroken.
freeRASP (free library by Talsec)
With freeRASP, the jailbreak detection utilizes hundreds of advanced checks, offering robust detection even with hiding methods applied.
Strong detection of modern jailbreaks.
Active maintenance and frequent updates.
Offline operation with minimal performance overhead.
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
A jailbroken device is a compromised device. For apps holding sensitive user data, ignoring this risk is dangerous.
DIY is cat-and-mouse: Checking for files like /Applications/Cydia.app is easily bypassed by "Hide Jailbreak" tweaks.
Use specialized tools: Libraries like freeRASP use multi-layered checks (permissions, protocol handlers, system calls) to detect jailbreaks even when they are hidden.
React Proactively
Don't wait for a data breach; detect the compromised environment immediately on app launch. If you want Jailbreak detection plus many more protections in one free package, start with freeRASP.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
How Secure Are Flutter Apps?
Security, Compliance, and Enterprise Readiness
A great mobile application isn’t just fast and user-friendly — it must also be secure. In sectors like fintech, healthcare, and government, customers and regulators place a high level of trust in your app. Losing that trust through a security breach or compliance failure can result in lost customers, regulatory fines, and long-term reputational damage. Talsec's data shows that around 4% of devices globally are insecure, across all platforms and frameworks.
Additionally, approximately 12% of devices (as of May 5, 2025) running Android 10 (released in 2019) or older versions lack critical security updates, leaving them vulnerable to a range of recently discovered remote code execution (RCE), escalation of privilege (EoP), and information disclosure (ID) vulnerabilities.
Major enterprises such as NuBank, BMW, and Alibaba run critical apps in Flutter while meeting strict security standards. With the right architecture and controls, Flutter apps can be just as secure and compliant as native apps.
How Secure Are Flutter Apps?
Short answer: neither native (Swift/Kotlin) nor Flutter is inherently more or less secure; both can be hardened or left exposed. The security outcome depends on architecture, developer choices, and maintenance.
Runtime and Attack Surface
Every app begins with a runtime. Native apps run directly on iOS and Android runtimes, tapping into platform-level security features like Keychain, Keystore, and Secure Enclave. This tight integration often means fewer layers to worry about.
Flutter apps also produce native binaries through ahead-of-time (AOT) compilation, but they bring along the Flutter engine and framework. This extra layer, plus other features like platform-channel bridges, broadens the attack surface and requires careful handling.
Reverse Engineering and Code Protection
Once compiled, both native and Flutter apps can be analyzed. Native binaries can be reverse-engineered, though techniques like symbol stripping, R8/ProGuard (Android), and linker options help reduce exposure. Kotlin/Java bytecode is especially easy to decompile if not obfuscated.
Flutter binaries work a bit differently. Its AOT-compiled binaries don’t resemble standard Java bytecode, which makes casual decompilation harder. Still, they’re not immune. Flutter provides obfuscation and split debug info options — essential tools to raise the barrier against reverse engineers.
Platform APIs and Hardware-Backed Security
Modern mobile security leans heavily on hardware. With native development, features like Secure Enclave, hardware-backed Keystore, or attestation APIs are available directly, with little friction.
Flutter apps can access the same features, but usually through plugins or custom platform-channel code. That introduces an extra dependency — either you trust the plugin’s quality or you must maintain the bridge yourself.
Threat Data
And what does the data say about Flutter security? Looking at the occurrence of incidents, we see roughly the same results as presented at the beginning.
Common Vulnerabilities and How to Prevent Them
Generally, we can divide mobile threats into static and dynamic threats.
Static Threats
These arise during the development phase and can usually be identified and resolved with secure coding practices and modern tooling.
Missing Obfuscation
Code obfuscation helps protect code by making it harder to read for humans, using various techniques such as call flow flattening, variable renaming, dummy code insertion, etc. This helps to protect against more advanced attacks by hiding business logic, so an attacker cannot use knowledge of the codebase for a more sophisticated attack.
Currently, Flutter's built-in obfuscator is not as robust as those available for C/C++ or Java/Kotlin, making string values and business logic more susceptible to discovery.
Hardcoded Secrets
Embedding API keys, tokens, or other credentials directly in source code is a major cause of leaks – either by accidental commits or through decompilation. Flutter's built-in obfuscation does not hide string values, so secrets are easily discovered even in “obfuscation” protected builds.
Talsec created Secret Vault to solve this problem: it dynamically provisions secrets, removes the need for hardcoded credentials, and safeguards sensitive information (like API keys, encryption keys, and tokens). Secret Vault actively protects these assets from leakage, reverse engineering, and automated extraction attempts, ensuring end-to-end data security.
Runtime Threats
These threats are most often encountered when an app is deployed and running on user devices.
Privileged Access
A rooted or jailbroken device has elevated privileges over a normal device. These elevated rights allow bypassing the built-in security measures of the system, and they are often misused by attackers, who can run any script or look inside any process without the app's knowledge.
Dynamic Instrumentation and Hooking
Tools like Frida can manipulate an application while it's running on a device (also called hooking) and can get access to sensitive data or hijack a request by tapping into a function that is being executed by the device.
Application Repackaging and Tampering
Malicious actors may modify your app (e.g., injecting ads, unlocking premium features, or adding malware) and redistribute the altered version. Protecting against repackaging involves runtime integrity checks, signature verification, and anti-tampering techniques.
How to Prevent Them
There are multiple techniques you can use to improve the security of a Flutter application:
Static Analysis: Utilize SAST tools like MobSF or Guardsquare's AppSweep to find vulnerabilities in source and compiled code before release.
Dynamic Checks: Use RASP and real-time security monitoring to detect tampering, root status, and suspicious runtime behavior on live devices.
Dependency Scanning: Continuously scan for known vulnerabilities in third-party packages and maintain an updated Software Bill of Materials (SBOM).
Advanced Protections: RASP, API Security, and Anti-Malware
Advanced mobile security goes beyond basic application hardening. This section delves into cutting-edge protection mechanisms that operate at the runtime and API level, providing a multi-layered defense against sophisticated threats. We will explore Runtime Application Self-Protection (RASP), API security measures like certificate pinning and device attestation, and anti-malware techniques, all of which are crucial for safeguarding modern mobile applications and their users.
Runtime Application Self-Protection (RASP)
RASP is a security technology that embeds directly into the application runtime. Unlike perimeter defenses such as firewalls or intrusion detection systems, RASP allows the app itself to monitor and defend against malicious behavior while it is running.
Solutions such as Talsec’s RASP+ provide you with advanced detection of frameworks like Frida, Magisk, or Dopamine, as well as environment checks for developer mode or VPN usage.
API Protection
Even if a mobile app is well protected, attackers often target the APIs it connects to. Mobile APIs carry sensitive data, making them high-value targets. You can protect your APIs using:
Certificate Pinning
Ensures that the app communicates only with trusted servers. Typically, you can find static pinning, but some companies, like Talsec, provide dynamic pinning. Dynamic pinning allows you to change certificates remotely, rather than having certificates hardcoded in the app and having to release a new version of the app to update them.
Device or App Attestation
Attestation is a test that checks whether the app and/or device on which the app is running is genuine and has not been tampered with. Attestation like this can then generate cryptographic proof, which is presented to the server. The server then knows if the app is healthy and, therefore, allows or denies requests. Check out Talsec's solution.
Anti-Malware Measures
Apps may run on devices already compromised with malware. Modern mobile malware abuses permissions, accessibility services, and overlays to steal credentials or intercept OTPs. Anti-malware measures focus on detecting hostile environments and blocking execution in unsafe conditions.
Talsec's Multi-Layered App and API Protection Model
L0 - Detect Attacks: Check app security state with freeRASP & Talsec Portal insights
L1 - Protect App: Pass pentests, combat reverse engineering, and comply with regulations with RASP+ and AppHardening (Secret Vault, Dynamic TLS Pinning)
L2 - Protect Transactions: Combat API abuse, bots, web-scraping and MiTM with AppiCrypt
Market Insights, Regulations, and Standards
Enterprises adopt mobile security SDKs not only to protect their applications but also to stay compliant with global regulations and industry frameworks. Standards like PSD2, DORA, MAS, and HIPAA are not abstract rules — they directly shape how secure your app needs to be.
PSD2 (EU payments): Requires Strong Customer Authentication and fraud protection. Apps must verify device integrity and secure transactions.
DORA (EU resilience): Focuses on operational resilience. Apps need runtime protection and recovery readiness.
MAS TRM (Singapore): Mandates encryption, monitoring, and defense against tampering for financial systems.
Talsec’s RASP and API security solutions are designed with these regulations in mind. Key benefits include:
Simple integration — works even for small development teams.
Resilience to reverse engineering and bypass attempts — protects against tools like Magisk and Dopamine.
Wide threat coverage — rooting, hooking, tampering, malware, and more.
In practice, compliance is not only about passing audits or avoiding fines. It’s about building user trust, protecting brand reputation, and ensuring that your app can withstand evolving threats.
Keynote: Community-Driven Security as Collective Defense with Tomáš Soukal (Talsec)
The Talsec Mobile App Security Conference in Prague was a two-day, invite-only event on fraud, malware, and API abuse in modern mobile apps, held at Chateau St. Havel on November 3–4, 2025, and hosted by Talsec, freeRASP, and partners. It brought together leading experts and practitioners to strengthen the mobile AppSec community, connect engineers with attackers and defenders, and share practical techniques for high‑stakes sectors like banking, fintech, and e‑government.
Tomáš Soukal (Talsec) delivered a keynote addressing how Talsec operates as a "community-based company" where security is "community-driven". Talsec currently protects thousands of applications running on almost two billion devices. The core of this approach is recognizing that the company's community includes both adopters (developers and users) and adversaries (attackers and penetration testers).
The Value of the Two Communities
Both communities, the adopters and the adversaries, play critical roles in improving the Talsec SDK.
Adopters
Developers and users who adopt the SDK provide essential feedback by asking questions, opening GitHub issues, submitting tickets, and testing the software. This community effectively acts as a global QA team, running the SDK on unusual devices and under extreme scenarios. Their input generates hundreds of issues and discussions across platforms, offering a scale of testing impossible to achieve internally.
Adversaries
Attackers contribute by performing penetration tests, identifying vulnerabilities in the freeRASP product, breaking into the SDK, and publishing write-ups on platforms like Frida CodeShare. These public exploits serve as a valuable learning resource, enabling Talsec to rapidly develop fixes and strengthen the product. For example, the RASP 17 release incorporated dozens of updates addressing community-reported bugs, requested enhancements, and bypasses discovered by adversaries.
Key Security Challenges and the Community Solution
Talsec faces significant security challenges due to device diversity and platform fragmentation.
Fragmentation
Numerous Android and iOS versions complicate efforts to keep up with evolving threats, hacking methods, and new tools.
Compatibility
The SDK must remain compatible with modern build tools across native Android, iOS, Flutter, React Native, Capacitor, Cordova, and gaming platforms such as Unity and Unreal Engine.
Edge Cases
The SDK must handle uncommon scenarios, including pre-rooted devices, TV boxes, payment kiosks, and Raspberry Pi devices running Android. Budget devices often exhibit non-standard behavior, resulting in issues such as key store race conditions or media DRM differences.
Community contributions are essential for broad coverage and actionable feedback. Fixes and enhancements applied for one developer benefit all users, creating a “one-to-many-to-one” feedback loop that supports the development of one of the most widely used RASP solutions.
Feature Testing and Giving Back
Talsec uses a feature testing system within the freeRASP product, operating in ignore mode so that application verdicts remain unaffected. This system allows testing of new ideas in the field, including free malware detection intelligence, with public participation.
The company also contributes back to the community through:
Knowledge Sharing: Publishing articles and sharing mobile security expertise through an authorship program.
Structured Contributions: Supporting projects such as the OWASP Mobile Application Security (MAS) standard, including a recent article on rooting.
Talsec Portal: Providing a platform where users can view data, statistics, trends, and the global state of mobile security, enabling the community to benefit from accumulated knowledge.
Technical Insights on RASP Functionality
Key technical aspects of RASP include:
Attackers and Privileges: RASP operates at runtime, monitoring the application while it runs. On devices with elevated privileges, such as rooted devices, RASP detects remnants of rooting frameworks, leftover files, and other artifacts.
App Review Process: While Google Play’s review process is partially automated, RASP checks for emulators without affecting the SDK’s high success rate (99.999…%). The SDK does not require or store dangerous permissions.
Free vs. Commercial RASP: freeRASP is designed to provide maximum security within technological limitations. RASP+ offers enhanced protection, including high-level bypass mitigation for UI callbacks, which requires additional build process modifications.
Talsec primarily serves clients in fintech, banking, and health tech, with additional clients in e-government, gaming, and industrial sectors.
Android Device State & Security Snippets
Below is a quick-reference guide for the most commonly requested Android device property and security checks.
1. Active USB Connection
If you want to know whether a USB device (like a flash drive or keyboard) is plugged into the Android device (Android as Host), or whether the Android device is plugged into a specific piece of hardware designed for it (Android as Accessory), you use the official UsbManager API.
2. Work Profile Detection
Checks if the application executing the code is running inside a managed Work Profile.
Note: Checking if the device has a work profile from a personal profile generally requires elevated permissions, so checking the current profile's state is the standard approach.
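A sketch of this check might look like the following. The Android API calls (UserManager.isManagedProfile() on API 30+, or iterating DevicePolicyManager.getActiveAdmins() with isProfileOwnerApp() on older versions) are shown in comments; the decision logic itself is kept as a plain, dependency-injected helper so it stays self-contained.

```java
import java.util.List;
import java.util.function.Predicate;

class WorkProfileCheck {
    // On API 30+ the direct check is:
    //   context.getSystemService(UserManager.class).isManagedProfile();
    // On older versions, iterate the active device admins and test each package:
    //   List<ComponentName> admins = dpm.getActiveAdmins();
    //   dpm.isProfileOwnerApp(admin.getPackageName());
    //
    // The current profile is a managed Work Profile iff any active admin is a profile owner.
    static boolean isManagedProfile(List<String> activeAdminPackages,
                                    Predicate<String> isProfileOwnerApp) {
        return activeAdminPackages != null
                && activeAdminPackages.stream().anyMatch(isProfileOwnerApp);
    }
}
```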
3. Device Encryption Status
Checks if the device storage is currently encrypted. Modern Android devices (Android 10+) are encrypted by default out-of-the-box.
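As a sketch, the check can be built on DevicePolicyManager.getStorageEncryptionStatus(). The device call is shown in a comment; the status constants are mirrored locally with their documented AOSP values so the decision logic is self-contained and testable off-device.

```java
class EncryptionStatusCheck {
    // Mirrors android.app.admin.DevicePolicyManager ENCRYPTION_STATUS_* values (per AOSP docs).
    static final int ENCRYPTION_STATUS_UNSUPPORTED = 0;
    static final int ENCRYPTION_STATUS_INACTIVE = 1;
    static final int ENCRYPTION_STATUS_ACTIVATING = 2;
    static final int ENCRYPTION_STATUS_ACTIVE = 3;
    static final int ENCRYPTION_STATUS_ACTIVE_DEFAULT_KEY = 4; // encrypted, but with a default key
    static final int ENCRYPTION_STATUS_ACTIVE_PER_USER = 5;    // file-based, per-user encryption

    // On a device you would obtain the status like this:
    //   DevicePolicyManager dpm =
    //       (DevicePolicyManager) context.getSystemService(Context.DEVICE_POLICY_SERVICE);
    //   int status = dpm.getStorageEncryptionStatus();
    static boolean isStorageEncrypted(int status) {
        return status == ENCRYPTION_STATUS_ACTIVE
                || status == ENCRYPTION_STATUS_ACTIVE_DEFAULT_KEY
                || status == ENCRYPTION_STATUS_ACTIVE_PER_USER;
    }
}
```

Note that ACTIVE_DEFAULT_KEY means the storage is encrypted but not yet protected by the user's credentials; whether to treat that as "encrypted" depends on your threat model.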
4. Unknown Sources Enabled
Checks if the user has allowed the installation of apps from outside the Google Play Store. The method changed significantly in Android 8.0 (API 26) from a global setting to a per-app permission.
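The version split described above can be sketched as a small helper. The actual Android calls (canRequestPackageInstalls() on API 26+, the legacy Settings.Secure value before that) are shown in comments; the helper only encodes the decision.

```java
class UnknownSourcesCheck {
    // Android 8.0+ (API 26): "unknown sources" is a per-app permission. On a device:
    //   context.getPackageManager().canRequestPackageInstalls();
    //
    // Pre-8.0: one global setting, 1 = enabled. On a device:
    //   Settings.Secure.getInt(context.getContentResolver(),
    //       Settings.Secure.INSTALL_NON_MARKET_APPS, 0);
    static boolean unknownSourcesEnabled(int sdkInt,
                                         boolean canRequestPackageInstalls,
                                         int legacySettingValue) {
        return sdkInt >= 26 ? canRequestPackageInstalls : legacySettingValue == 1;
    }
}
```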
5. Bootloader Unlock Status
The simplest way to check this without setting up complex cryptographic attestation is by reading system properties via the command line.
Warning: This method is simple but can be spoofed by tools like Magisk on rooted devices. For strict enterprise security, you must use hardware-backed KeyStore Attestation (SafetyNet / Play Integrity API).
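A minimal sketch of the command-line approach: read the relevant system properties with getprop and interpret them. Property names vary by OEM, so treat ro.boot.flash.locked ("1" = locked) and ro.boot.verifiedbootstate ("green" = locked verified-boot chain) as common-but-not-universal assumptions; an empty value means the check is inconclusive.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

class BootloaderCheck {
    // Interprets the two most common properties. Empty/unknown values are treated
    // as "not proven unlocked" - remember this whole method is spoofable (see warning above).
    static boolean looksUnlocked(String flashLocked, String verifiedBootState) {
        if ("0".equals(flashLocked)) return true; // explicitly reported unlocked
        if (verifiedBootState != null && !verifiedBootState.isEmpty()
                && !"green".equalsIgnoreCase(verifiedBootState)) {
            return true; // orange/yellow/red verified-boot states imply a modified chain
        }
        return false;
    }

    // Reads a system property by shelling out to getprop (works on Android only).
    static String getProp(String name) {
        try {
            Process p = new ProcessBuilder("getprop", name).start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine();
                return line == null ? "" : line.trim();
            }
        } catch (Exception e) {
            return "";
        }
    }
}
```

Usage on a device would be `BootloaderCheck.looksUnlocked(BootloaderCheck.getProp("ro.boot.flash.locked"), BootloaderCheck.getProp("ro.boot.verifiedbootstate"))`.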
6. Last Security Patch Date
Retrieves the date of the last installed Android security patch. The value is returned as a string in YYYY-MM-DD format.
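Since the patch level is just a date string, a useful pattern is to parse it and compare its age against a policy threshold. On a device the string comes from Build.VERSION.SECURITY_PATCH (available since API 23); here it is passed in as a parameter so the logic stays testable, and the 90-day threshold below is an arbitrary example.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

class SecurityPatchCheck {
    // On a device: String patch = Build.VERSION.SECURITY_PATCH;  e.g. "2025-05-05"
    static long patchAgeDays(String securityPatch, LocalDate today) {
        return ChronoUnit.DAYS.between(LocalDate.parse(securityPatch), today);
    }

    // Example policy: flag devices whose last patch is older than maxDays.
    static boolean isPatchOutdated(String securityPatch, LocalDate today, long maxDays) {
        try {
            return patchAgeDays(securityPatch, today) > maxDays;
        } catch (Exception e) {
            return true; // unparsable or missing value: treat as outdated
        }
    }
}
```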
How to Detect Jailbreak using Swift
Need to secure your app against jailbroken devices? Start here.
Jailbreaking may open new doors for iPhone users, but for app developers it opens dangerous backdoors for attackers. A jailbroken device removes Apple’s security boundaries, leaving both the device and your app vulnerable to data theft, tampering, and malicious hooks. Luckily, with modern security solutions you can reliably detect and respond to jailbroken devices.
What is Jailbreak?
Jailbreaking is the process of removing iOS restrictions, granting users root access to the system (similar to rooting on Android). This lets them install unauthorized apps, tweak system settings, or bypass App Store policies.
How to Achieve Root-Like Control Without Rooting: Shizuku's Perils & Talsec's Root Detection
Explore Shizuku's root-like power for Android. Uncover this mobile security risk and learn how Talsec's RASP provides essential mobile app protection with robust root detection to safeguard your app.
In the world of Android, 'root' has always been the magic word for ultimate control. But what if you could wield that power without ever rooting your device? Meet Shizuku. This innovative tool opens a door to a realm of privileged commands, allowing apps to perform powerful actions once reserved for the superuser. But convenience often comes with a hidden cost. While it enables incredible features, it also creates new, subtle attack surfaces.
In this article, we will explore how this powerful tool can be abused to exploit the user and their installed applications, as well as how Talsec's RASP steps in to stop these kinds of threats.
Future-Proofing for the Data-Driven Ecosystem: Securing Your Application and Data APIs
Focus on businesses leveraging ad-supported and affiliate business models.
If your business runs on data and advertising, an invisible enemy is draining your profits right now. Without advanced API and app protection, you are paying to support bots that plunder your cloud resources while app clones and scrapers devalue the data you sell to advertisers. Every fake click and fraudulent request cuts directly into your margins, distorting your metrics and undermining the platform integrity required for scaling your business.
Here is a clear look at how a comprehensive security service like Talsec's can directly benefit your app and data APIs, ensuring a fair, secure, and profitable ecosystem. Talsec offers more than just "app shielding." We provide the commercial-grade security infrastructure needed to clean up your advertising ecosystem, enable advanced fintech integrations, and ensure your data remains the gold standard for partners worldwide.
TechTalk: Best Practices for Keeping Your App Safe with Majid Hajian (Microsoft)
In the modern technological era, mobile application security is no longer a static goal but a continuous organizational effort. As Majid Hajian, a Solution Engineer at Microsoft, emphasizes, the rapid evolution of threat landscapes—marked by a 29% increase in mobile attacks in the first half of 2025 and a staggering 2,000% surge in AI-driven mobile threats—demands a fundamental shift in how we build and defend applications. This new paradigm moves away from traditional "castle and moat" perimeter defenses toward a model of constant vigilance and automation throughout the entire software development lifecycle (SDLC).
import RNFS from 'react-native-fs';
const detectSuBinary = async () => {
// Common paths where the 'su' binary may exist on rooted devices
const suPaths = [
'/system/bin/su',
'/system/xbin/su',
'/sbin/su',
'/system/su',
'/system/bin/.ext/su',
'/system/usr/we-need-root/su',
'/system/app/Superuser.apk',
];
for (const path of suPaths) {
try {
// RNFS.exists returns a promise that resolves to a boolean
const exists = await RNFS.exists(path);
if (exists) {
console.log(`Potential root detected: su binary found at ${path}`);
return true;
}
} catch (error) {
// Ignore errors for inaccessible paths or permissions issues
}
}
return false;
};
export default detectSuBinary;
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import java.net.InetSocketAddress
import java.net.Socket
suspend fun detectFridaPorts(): Boolean = withContext(Dispatchers.IO) {
val portsToCheck = listOf(27042, 27043)
for (port in portsToCheck) {
try {
Socket().use { socket ->
socket.connect(InetSocketAddress("127.0.0.1", port), 200)
// If we reach this line, the connection was successful
println("Frida-like service detected on port $port")
return@withContext true
}
} catch (e: Exception) {
// Port not open or connection timed out; ignore and continue
}
}
return@withContext false
}
// build.gradle.kts (Module :app)
dependencies {
// ... other dependencies
implementation("app.talsec.android:freerasp:X.X.X") // Replace with the latest version
}
// MyApplication.kt
import android.app.Application
import app.talsec.android.Talsec
import app.talsec.android.TalsecConfig
import app.talsec.android.TalsecCallback

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()

        // 1. Define the configuration
        val config = TalsecConfig(
            // Set your expected package name and signing hash
            expectedPackageName = "com.your.app.package",
            expectedSigningHashes = listOf("your_signing_hash_base64"),
            // 2. Define the callbacks
            callback = TalsecCallback(
                // 3. Define actions for detected threats
                onRootDetected = {
                    // WARNING: Rooted device detected
                    // Implement your response (e.g., terminate app, alert user)
                },
                onDebuggerDetected = {
                    // WARNING: Debugger detected
                },
                onEmulatorDetected = {
                    // WARNING: Emulator detected
                },
                onTamperDetected = {
                    // WARNING: App integrity compromised
                },
                onUntrustedInstallationDetected = {
                    // WARNING: App installed from an unofficial source
                }
            )
        )

        // 4. Start Talsec
        Talsec.start(this, config)
    }
}
import TalsecRuntime

let config = TalsecConfig(
    appBundleIds: ["YOUR_APP_BUNDLE_ID"],
    appTeamId: "YOUR TEAM ID",
    watcherMailAddress: "WATCHER EMAIL ADDRESS",
    isProd: true
)

extension SecurityThreatCenter: SecurityThreatHandler {
    public func threatDetected(_ securityThreat: TalsecRuntime.SecurityThreat) {
        print("Found incident: \(securityThreat.rawValue)")
    }
}
public enum SecurityThreat: String, Codable, CaseIterable, Equatable {
    // ... other cases ...
    case systemVPN // VPN detected
}
fun isAppTampered(context: Context): Boolean {
    return try {
        // Check whether the APK signature matches the expected one
        val packageManager = context.packageManager
        val packageName = context.packageName
        val signatures = packageManager.getPackageInfo(
            packageName,
            android.content.pm.PackageManager.GET_SIGNATURES
        ).signatures

        // Compare with the expected signature hash (hardcode your app's signature)
        val expectedSignature = "YOUR_EXPECTED_SIGNATURE_HASH" // Replace with your app's actual signature
        val actualSignature = android.util.Base64.encodeToString(
            signatures[0].toByteArray(),
            android.util.Base64.NO_WRAP
        )
        actualSignature != expectedSignature
    } catch (e: Throwable) {
        true // If we can't verify, assume tampered
    }
}
Talsec.start(applicationContext)

override fun onTamperDetected() {
    Log.w("freeRASP", "App tamper detected!")
    // Optionally block sensitive actions or warn the user
}
val usbManager = context.getSystemService(Context.USB_SERVICE) as UsbManager

// 1. Android as Host: checks if external USB devices are plugged INTO the phone
val isUsbDeviceConnected = usbManager.deviceList.isNotEmpty()

// 2. Android as Accessory: checks if the phone is plugged into a USB accessory (like a car dock)
val isUsbAccessoryConnected = usbManager.accessoryList?.isNotEmpty() == true
Simple Root Detection: Implementation and verification
How to Detect Root
val userManager = context.getSystemService(Context.USER_SERVICE) as UserManager

// Returns true if the app is currently running inside a managed (work) profile
val isRunningInWorkProfile = userManager.isManagedProfile

val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE) as DevicePolicyManager
val status = dpm.storageEncryptionStatus
val isEncrypted = status == DevicePolicyManager.ENCRYPTION_STATUS_ACTIVE ||
    status == DevicePolicyManager.ENCRYPTION_STATUS_ACTIVE_PER_USER
fun canInstallFromUnknownSources(packageInfo: PackageInfo): Boolean {
    // The REQUEST_INSTALL_PACKAGES app-op only exists on Android 8.0 (API 26) and above
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.O) {
        return false
    }
    val uid = packageInfo.applicationInfo?.uid ?: return false
    // appOpsManager is obtained beforehand via
    // context.getSystemService(Context.APP_OPS_SERVICE) as? AppOpsManager
    val mode = appOpsManager?.checkOpNoThrow(
        AppOpsManager.OPSTR_REQUEST_INSTALL_PACKAGES,
        uid,
        packageInfo.packageName
    )
    return mode == AppOpsManager.MODE_ALLOWED
}
fun isBootloaderUnlocked(): Boolean {
    return try {
        val process = Runtime.getRuntime().exec("getprop ro.boot.flash.locked")
        val reader = java.io.BufferedReader(java.io.InputStreamReader(process.inputStream))
        val lockedState = reader.readLine()
        // "0" typically means unlocked, "1" means locked
        lockedState == "0"
    } catch (e: Exception) {
        false // Default to false if the property can't be read
    }
}
val securityPatchDate = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    Build.VERSION.SECURITY_PATCH
} else {
    "N/A" // Security patch dates were not standardized before Android 6.0 (API 23)
}
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
- An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Insecure Storage
Relying on SharedPreferences or other plain-text storage solutions exposes sensitive information to attackers who gain access to the device.
Debuggable Application
Debug builds include extra diagnostic capabilities useful during development but dangerous in production. If debugging features remain enabled when you release your app, attackers can access internal app state, trace code execution, or manipulate behavior.
TLS Pinning Bypass
TLS pinning ensures your app only communicates with trusted servers by verifying server certificates. This is often used as an API protection technique. Hackers can bypass this protection, making the app vulnerable to MITM attacks.
Malware
Even if your app is secure, malware installed on the user’s device can steal information or interfere with your app’s operation.
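To mitigate the insecure storage risk above, sensitive values can be kept in platform-encrypted storage instead of plain SharedPreferences. Below is a minimal sketch using Jetpack Security's EncryptedSharedPreferences; it assumes the androidx.security:security-crypto dependency, and the file and key names are purely illustrative:

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Sketch: store a token encrypted at rest instead of in plain-text prefs.
fun saveTokenSecurely(context: Context, token: String) {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    val prefs = EncryptedSharedPreferences.create(
        context,
        "secure_prefs", // illustrative file name
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )
    prefs.edit().putString("auth_token", token).apply()
}
```

Both keys and values are encrypted on disk, so an attacker who pulls the preferences file from the device sees only ciphertext.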
Penetration Testing: Conduct regular security audits using both manual assessments and automated DAST tools, including simulated attacks (red teaming) when possible.
L3 - Protect Users: Combat social engineering, phishing, malware with Device Risk Scoring and Malware Detection
HIPAA (U.S. healthcare): Requires strict protection of health data, including secure storage and encrypted communication.
Platform independence — not tied to Google Play Services, making it suitable for global use.
Actionable threat intelligence — reports and monitoring that support compliance documentation.
Data sovereignty — all security data remains under your organization’s control.
AppiCrypt: AppiCrypt ensures that the Talsec SDK initializes and runs fully. Running in the same process as the application, the SDK generates an encrypted payload with device and run state information. Clients attach this payload to HTTP headers, and the backend verifies integrity using a lightweight script. Clients retain full control of their encryption keys.
For attackers, it’s like getting the master key to the device. With jailbreak tools like checkra1n or Dopamine they can:
Inject malicious code into your app
Steal sensitive user data
Disable or bypass security controls
Run debuggers and hooking frameworks like Frida
If your app runs on a jailbroken device, its integrity is at serious risk.
How to Detect Jailbreak?
Detecting jailbreak isn’t as simple as checking for “Cydia” anymore. Attackers constantly adapt, and DIY detection methods become outdated fast.
In recent years, a number of expert-maintained SDKs have appeared that evolve alongside jailbreak techniques:
freeRASP (by Talsec)
iOS Security Suite
These tools give you continuous protection without the need to reinvent the wheel.
Check out freeRASP for industry-leading jailbreak detection.
Popular Libraries for Jailbreak Detection
freeRASP (free library by Talsec)
The most robust, developer-friendly and free choice for iOS.
Comes with additional detections like app integrity, runtime manipulation (hooking with Frida), emulators, debugging, screenshots, etc.
Integration Example:
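The following is a minimal sketch of what this integration looks like, mirroring the TalsecRuntime configuration shown earlier in this document (the placeholders must be replaced with your app's real values, and the response to a detected jailbreak is up to you):

```swift
import TalsecRuntime

// Sketch: configure freeRASP and react to jailbreak incidents.
let config = TalsecConfig(
    appBundleIds: ["YOUR_APP_BUNDLE_ID"],
    appTeamId: "YOUR TEAM ID",
    watcherMailAddress: "WATCHER EMAIL ADDRESS",
    isProd: true
)

extension SecurityThreatCenter: SecurityThreatHandler {
    public func threatDetected(_ securityThreat: TalsecRuntime.SecurityThreat) {
        if securityThreat == .jailbreak {
            // Respond here: warn the user or disable sensitive features
        }
    }
}
```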
iOS Security Suite
A lightweight, open-source, and community-maintained option for iOS jailbreak detection and app security.
Detects jailbreak indicators including file system changes, suspicious apps, symbolic links, and more
Includes additional checks (debugger, emulator)
Actively updated by the open-source community
Integration Example:
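A minimal sketch of typical IOSSecuritySuite usage follows; these are the library's documented entry points, and how you respond to a positive result is your own design decision:

```swift
import IOSSecuritySuite

// Simple boolean jailbreak check
if IOSSecuritySuite.amIJailbroken() {
    // Respond: warn the user or limit functionality
}

// Detailed check that also reports which indicator fired
let (jailbroken, failMessage) = IOSSecuritySuite.amIJailbrokenWithFailMessage()
if jailbroken {
    print("Jailbreak indicators: \(failMessage)")
}
```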
Comparison Table

Feature                        | freeRASP | iOS Security Suite
Accurate Jailbreak Detection   | High     | Medium
Works Offline                  | Yes      | Yes
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
Jailbroken devices aren’t just risky—they’re hostile territory for your app. By integrating jailbreak detection with tools like freeRASP, you can protect your users, safeguard sensitive data, and stay ahead of attackers.
👉 Don’t gamble on DIY scripts—secure your Swift app today with freeRASP by Talsec.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
- An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
(Android & iOS) & - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
- Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Shizuku is an open-source Android application that enables other apps to execute privileged commands and methods without the device being rooted. This means users no longer have to worry about the risk of "bricking" their device during the rooting process, or about navigating complex tools like Magisk and its various modules. With Shizuku, you can instantly grant an app the elevated privileges it needs.
Do you want to remove a vendor application you never use, prevent an app from draining your battery in the background, or modify a pre-installed system app? Shizuku makes all of this possible. It unlocks a level of power that allows you to customize your device in ways previously impossible without root, granting you nearly all the benefits of admin privileges without the hassle.
Demonstration of attack using Shizuku
Most users perceive Shizuku as a beneficial legitimate tool, a gateway to enhancing their Android experience and unlocking functionalities normally restricted by the system. It's often seen as a legitimate way to customize and optimize their devices and applications.
However, this perceived helpfulness can be a dangerous blind spot. I will demonstrate a critical vulnerability: an overlay attack on a banking application that steals a user's credentials. Through an attacker app leveraging the Shizuku API, I will show how it's possible to silently obtain all necessary Android permissions, bypass user interaction for permission grants, steal user credentials, and exfiltrate them to an external API. This vividly illustrates how Shizuku, despite its legitimate uses, can be weaponized to severely compromise user and application security if an attacker gains control.
We will then explore how Talsec's RASP steps in to provide robust, real-time protection against such advanced threats, highlighting its capabilities in detecting and mitigating these subtle yet potent attacks.
MainActivity
This is the main entry point of our SecureBank application that I made for this demo.
LoginActivity
Upon clicking Go To Login users are directed to this screen, which prompts for a username and password.
But how can a user be certain they are interacting with the legitimate application's login interface, and not a deceptive overlay from a malicious app?
This is the malicious app that is installed on the victim user's device:
The malicious app's first move is to leverage the Shizuku API. This powerful interface allows the attacker to execute privileged commands and, crucially, silently grant itself necessary Android permissions without any user prompts or interaction. This is a significant bypass of Android's security model.
Once the permissions are acquired, the malicious app initiates a background service. This service operates invisibly to the user, lying in wait for the opportune moment to strike.
Now let's try to open our SecureBank login page.
This screen closely resembles the login page of our SecureBank app, but it's actually an overlay created by the malicious AttackerAppJava. I've added a banner indicating AttackerAppJava to highlight that this is not the legitimate app - a real attacker would of course skip this step.
Unaware of the deception, the user proceeds to enter their sensitive username and password into what they believe is their banking application, SecureBank.
The moment the user clicks the "Login" button on the fake screen, the attacker app executes its payload:
The malicious app writes the captured username and password to a file named secure_bank_creds.txt on the device's external storage. Crucially, this happens without any explicit user prompt for storage access, thanks to the silent permission acquisition facilitated by Shizuku.
Leveraging the INTERNET permission, the attacker app immediately sends these stolen credentials to an external API controlled by the attacker. This ensures the credentials are off the device and in the attacker's possession, even if the local file is later discovered or deleted.
How does it work and why is it so dangerous?
Shizuku requires Developer Mode to be enabled on the Android device, along with Wi-Fi debugging, if you intend to connect via ADB remotely. It doesn’t obtain root access directly, but instead leverages the ADB debug bridge to execute commands on the device — even remotely through Wi-Fi debugging.
The APK uses the native library libadb.so to establish a connection to the ADB bridge, either over Wi-Fi or through a physical connection to a computer.
While it doesn’t allow execution of root-level commands, it can still perform any command that ADB typically permits on a non-rooted device.
Shizuku employs a binder service to set up and maintain communication with the ADB bridge. Through this service, it also keeps track of which apps on the device are requesting access to the Shizuku API.
By utilizing the privileged commands mentioned earlier, Shizuku establishes a JDWP (Java Debug Wire Protocol) connection. Rather than attaching the debugger to a specific app’s debug code, it redirects this connection to its own binder interface, thereby inheriting the ADB and JDWP privileges of the device owner.
The IShizukuService daemon runs in the background once the required permissions are granted. It enables Shizuku to execute privileged commands, establish inter-process communication (IPC), and manage communication channels accordingly.
This component is responsible for running the command shell for the Shizuku app. It uses AIDL (Android Interface Definition Language) interfaces to define callbacks and manage the execution of privileged commands.
Moreover, Shizuku offers many of the capabilities of an ADB connection without requiring direct access to ADB itself. As many are aware, the ADB shell can perform actions not normally permitted on standard Android devices, such as silently granting permissions, modifying system settings, or accessing protected directories.
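For illustration, here are the kinds of commands an ADB shell (and therefore Shizuku) can run that ordinary apps cannot; the package name is hypothetical, and these commands exist only on an Android device, typically reached via `adb shell`:

```
# Silently grant a runtime permission to an arbitrary app
pm grant com.example.attacker android.permission.READ_EXTERNAL_STORAGE

# Allow an app to draw overlays without any user prompt
appops set com.example.attacker SYSTEM_ALERT_WINDOW allow

# Change protected system settings
settings put global adb_enabled 1
```

The overlay grant in particular is exactly what the attack demonstrated above relies on: once SYSTEM_ALERT_WINDOW is allowed, the fake login screen can be drawn over the legitimate app.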
However, this powerful functionality also makes Shizuku potentially dangerous if misused. If an attacker succeeds in installing a malicious app on your device, perhaps via a phishing link, they can exploit Shizuku to carry out harmful activities without encountering typical permission restrictions.
Demonstration of Talsec protecting the victim APK
Talsec's RASP protects mobile devices from these severe threats and shields your app from the attacker's malicious intentions.
Let's see what happens inside the SecureBank application when it is protected with Talsec and the attacker tries to exploit it
1) RASP detected Dev Mode
2) RASP detected ADB enabled
Talsec's RASP detects the malicious environment and lets the app know that it is not safe to run there. It warns the user about the two conditions Shizuku needs to run on the device: Developer Mode and Debugging Mode.
Without these modes enabled, Shizuku is nothing to be afraid of.
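For reference, both of these preconditions can also be checked manually from your own code using standard, public Android settings keys. This is only a sketch; a RASP performs considerably deeper and harder-to-bypass checks:

```kotlin
import android.content.Context
import android.provider.Settings

// Sketch: check the two Shizuku preconditions via public settings keys.
fun isDeveloperModeEnabled(context: Context): Boolean =
    Settings.Global.getInt(
        context.contentResolver,
        Settings.Global.DEVELOPMENT_SETTINGS_ENABLED, 0
    ) == 1

fun isAdbEnabled(context: Context): Boolean =
    Settings.Global.getInt(
        context.contentResolver,
        Settings.Global.ADB_ENABLED, 0
    ) == 1
```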
Integrate Talsec RASP into your application to keep both your application and your users safe. Try freeRASP to learn your security state, or use the two-month free trial of premium RASP+ to protect your app with maximum coverage; check the plan comparison here: https://www.talsec.app/.
Security is more than just a defense—it's a key enabler for financial and data strategies.
Data-Driven Ecosystem Integrity: An advanced security solution ensures the integrity of the data collected for affiliate and advertising services. This means providing partners with high-quality, trustworthy data—data not coming from device farms or fraudulent sources—which builds confidence and maximizes ad revenue. We aren't just blocking hackers; we are cleaning up data so you can charge advertisers more while lowering your AWS bill.
Open Doors to New Services: Platforms with a large fanbase are perfectly set up to explore advanced services, such as fintech. Security can help future-proof thin app clients, opening a reliable path for new revenue drivers and features that allow you to monetize fan attention more easily and test new strategies globally.
Eliminate Predatory Activities: Security measures can cut off costly predatory activities. Stop resource-draining attacks where bots trigger expensive API calls (e.g., LLM tokens), ensuring your budget fuels real users, not scrapers.
The Comprehensive Security Shield
The core of a complete security offering is comprehensive, real-time protection that covers every angle of your platform.
1. Unmatched Bot and DDoS Defense
Strengthened DDoS Defense: By combining Web and Mobile coverage with Talsec (Android and iOS), you ensure protection across all user platforms. This comprehensive approach is essential for maintaining service availability and stability, and efficiently cuts off unnecessary requests before they reach your backend systems.
Control Your Data: Security helps you get your data under control by ensuring high-quality data collection. This includes the real-time detection of risky activities like VPN usage, device emulators (simulators), and unofficial store usage.
2. Maintaining Brand Trust and Fair Play
Prevent App Cloning and Copycats: Detect and block malware, fraud, and emerging threats in real time. Prevent fake app mods and copycats that could harm your brand's reputation.
Ensure Fair Game: Security helps maintain a fair game for every user and partner, minimizing Data Skew by preventing automatic click fraud and database attacks through client apps and dynamic request forgery that can distort performance data and compromise user information.
User Risk Profiling: Gain a deeper understanding of your user base with sophisticated user risk profiling.
3. Secure App Operations and Global Compliance
Secure Secrets Management: Utilizing a Secret Vault allows you to securely ship sensitive secrets and configs.
Global Security Posture: Observe the security posture of devices in different regions of the world, giving you a global view of risk. Get premium access to global app security threat intelligence, shared across the community, allowing you to stay ahead of the curve.
Compliance Made Simple: Security partners can help you ensure compliance and align with strict regulations like OWASP MAS, GDPR, MDR, and NIS 2.
Talsec can reliably secure your app and data APIs, enabling your team to focus on innovation and growth without having security as an afterthought.
One of the primary strategies for securing modern mobile applications is the adoption of a Zero Trust architecture. This approach operates on the principle of "always verify," assuming that no device, user, or network is inherently safe. In the mobile context, this translates to runtime protections like Runtime Application Self-Protection (RASP), which can detect real-time threats such as jailbreaking or debugger attachments. It also requires continuous identity verification, ensuring that every server request is validated rather than relying on long-lived sessions. Furthermore, data protection must be absolute; all information should be encrypted and stored in platform-trusted secure storage rather than plain text.
To manage the complexities of the "invisible supply chain," where approximately 80% of an application's code is composed of external dependencies, organizations must implement a Software Bill of Materials (SBOM). An SBOM acts as an automated "ingredient list" for software, detailing every component, version, and vendor used. By analyzing these reports on every build, development teams can instantly identify and reject code containing compromised or outdated dependencies, ensuring both security and regulatory compliance.
Shifting Left and Defending with AI
A critical component of modern security is the concept of "shifting left," which means integrating security checks as early as possible in the development process. Implementing DevSecOps ensures that security is a shared responsibility across every phase of the SDLC. For example, using pre-commit hooks can automatically strip out secrets or personal data before code is ever committed to a repository. Finding and remediating vulnerabilities during development is significantly less costly than addressing them after an application has reached production.
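As a concrete illustration of such a pre-commit hook, here is a minimal sketch; the secret-matching pattern is deliberately simple and would need tuning for real use:

```shell
#!/bin/sh
# .git/hooks/pre-commit — abort the commit if staged changes look like they contain secrets
if git diff --cached -U0 | grep -qE '(api[_-]?key|secret|password)[[:space:]]*[:=]'; then
  echo "Possible secret detected in staged changes; commit aborted." >&2
  exit 1
fi
exit 0
```

Dedicated scanners such as gitleaks or detect-secrets do this job far more thoroughly, but even a crude hook like this catches the most careless mistakes before they reach the repository.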
As attackers increasingly use AI for sophisticated maneuvers like deepfakes and voice cloning, defenders must adopt Defensive AI to fight back. This involves using AI-driven tools to analyze traffic patterns for suspicious activity and implementing advanced liveness detection. Traditional biometrics, such as blinking an eye, can now be deepfaked; therefore, modern apps may need to monitor human behavioral gestures (such as how a user uniquely holds their device) to ensure identity.
Cultivating a Security-First Culture
Ultimately, robust technology must be supported by a strong organizational culture. Security is not the task of a single team but the responsibility of every individual in the company. Organizations should foster a "no-blame" environment where security issues can be reported and addressed proactively without fear of retribution. Furthermore, companies should track meaningful metrics, such as "time to remediation," rather than vanity metrics like lines of code, to ensure that vulnerabilities are addressed with the necessary urgency.
Building this foundation can be managed through a structured 30-60-90 day plan, starting with establishing baseline security foundations and gradually moving toward fully automated security pipelines. Security is an ongoing journey, not a final destination, requiring constant adaptation to stay ahead of an ever-changing threat landscape.
Thank you Majid Hajian for your insightful presentation on best practices for app security. Your discussion on shifting the security mindset towards continuous verification and the importance of a "security above all" culture was especially impactful. We appreciate you sharing your expertise and strategies like Zero Trust and DevSecOps with the community.
AppiCrypt Against Time Spoofing: From Free Trial Abuse to License Fraud and Audit Log Corruption
Key Takeaways
⏰ Relying on device system time exposes your app to serious threats: free trial abuse, license fraud, unfair gaming advantages, certificate misuse, transaction and audit log corruption, and shopping app scams.
⚠️ Attackers can easily manipulate standard time APIs (like gettimeofday() or System.currentTimeMillis()), putting any time-based feature at risk.
🛡️ AppiCrypt solves these vulnerabilities by delivering secure, cryptographically trusted time from your backend—preventing tampering, stopping license and trial abuse, and ensuring that transaction records, audit logs, and business processes remain trustworthy.
✅ For robust protection against time spoofing and business-impacting exploits, integrating AppiCrypt is the smart move!
Time spoofing (time traveling), where an attacker manipulates a device's clock to bypass time-based restrictions, is a critical vulnerability for any modern application. Trusting the device's system time allows users to indefinitely extend free trials or gain unfair advantages in games and even scam shopping apps. This opens your app to serious security flaws, such as:
Free trial abuse: Attackers manipulate device time to indefinitely extend free trials.
License fraud: Manipulating device time can make expired licenses appear valid.
Corrupted audit logs: Attackers can alter timelines in logs, masking fraudulent activity.
AppiCrypt solves this problem by acting as a secure channel to deliver trusted time to a client device from your backend. By generating and validating a cryptogram for every network request, AppiCrypt ensures that the time data received is authentic and from an untampered application, making your app immune to time-based attacks and securing critical features against manipulation.
How Do Attackers Target the App?
The vulnerability arises from an application's implicit trust in timestamps provided by standard system calls and APIs—a trust that AppiCrypt replaces with cryptographic verification. APIs such as gettimeofday() on POSIX systems or System.currentTimeMillis() in the Java Virtual Machine directly query the device’s modifiable system clock. This allows adversaries to manipulate the clock’s value so that any subsequent call to these time-related functions yields a deceptive timestamp, compromising any feature that depends on it.
By falsifying the unverified, client-side time source, attackers exploit the application’s reliance on these system calls. This creates an easy avenue for abuse, as the app’s dependence on gettimeofday(), System.currentTimeMillis(), or similar methods lets attackers undermine safeguards that are based on trusted time.
They commonly use three primary vectors to do this:
Manual System Manipulation
This is the most straightforward method. An attacker simply navigates to the device's settings and manually sets the clock to a past or future date. While this is effective against any app with no time security, it's not very stealthy, as it alters the clock for the entire system.
Application-Level Hooking and Instrumentation
This advanced technique is used in compromised environments to change the time for your app alone, making the attack highly precise and difficult to detect.
Some common tools for this are:
Frida: Using this powerful toolkit, an attacker can attach to the running application, hook the specific function the application's code calls to get the time—such as System.currentTimeMillis() in a Java/Android app or Date() in a Swift/iOS app—and replace its logic to return a fraudulent timestamp.
Xposed Framework: On Android, this type of framework allows an attacker to install a specialized module that targets the app specifically. The module can be built to intercept any time-related system call the app makes and provide a fake time on demand, completely bypassing the app's intended logic without altering the system clock.
Network-Level Deception (NTP Spoofing)
This method targets how the user's device automatically keeps its time accurate by attacking the Network Time Protocol. This isn't an attack on the app's code directly, but on the environment where it runs—specifically, the user's local network.
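Conceptually, both hooking approaches swap out the implementation of the time function the app calls while it runs. The effect can be sketched in plain JavaScript by monkey-patching Date.now; this is an illustration of the hooking idea only, not actual Frida or Xposed code:

```javascript
// Illustration of the hooking concept: replace the time source so every
// subsequent call returns an attacker-chosen, "time-traveled" value.
const realNow = Date.now.bind(Date);
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// The "hook": from now on, the app sees a clock 30 days in the past
Date.now = () => realNow() - THIRTY_DAYS_MS;

const reportedTime = Date.now(); // what the app now believes
Date.now = realNow;              // restore the real clock

console.log(realNow() - reportedTime >= THIRTY_DAYS_MS); // true
```

Any check in the app that compares timestamps from the patched function is now reasoning about a clock the attacker controls, without the system clock ever changing.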
The Business and Security Risks to the App
Trusting the client-side clock is a critical vulnerability because time is a foundational component for many features. The consequences of time spoofing are severe and wide-ranging:
Corrupting Transaction Logs and Audit Trails
Accurate timestamps are critical for transaction security and sound audit trails. If an attacker gains access to a user's device (or operates through a device farm), they can change the clock before acting. This manipulation makes it nearly impossible for any security team to reconstruct what happened during a security incident, as the logs will contain misleading timestamps.
Bypassing Time-Based Restrictions
This is the most common exploit. Attackers can indefinitely extend 30-day free trials, bypass cool-down periods in games, or repeatedly claim once-a-day rewards.
1. Extending a 30-Day Free Trial:
Normal Scenario: You install an app on August 10th. The app stores this start date. By September 10th, 31 days have passed, and the app's calculation correctly locks you out.
The Exploit: It's September 8th, and your trial is about to expire. You simply go to your phone's settings and manually set the date back to August 15th. When you reopen the app, it asks the OS for the current time, and the OS lies, replying "It's August 15th". The app calculates August 15 - August 10 = 5 days. As far as the app is concerned, you are only on day 5 of your trial. You can repeat this "time travel" trick indefinitely, giving you a perpetual trial period.
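The exploit works because the check is pure arithmetic over an untrusted clock. A minimal sketch of such a naive trial check, where `deviceNowMs` stands in for whatever the OS reports (and the user controls):

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Naive client-side trial check: trusts the device clock completely
function trialDaysUsed(trialStartMs, deviceNowMs) {
  return Math.floor((deviceNowMs - trialStartMs) / DAY_MS);
}

const start = Date.UTC(2025, 7, 10); // trial started August 10

// Honest clock on September 10: 31 days used, trial correctly expired
console.log(trialDaysUsed(start, Date.UTC(2025, 8, 10))); // 31

// Clock rolled back to August 15: the app believes only 5 days have passed
console.log(trialDaysUsed(start, Date.UTC(2025, 7, 15))); // 5
```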
2. Bypassing Cool-Down Periods in Games
Normal Scenario: In a game, you use a powerful ability or collect a resource. A timer appears: "Available again in 24 hours". The game has anchored the time you clicked the button.
The Exploit: Instead of waiting, the player quits the game, goes to their device settings, and advances the clock by 24 hours. They relaunch the game. The game's code asks, "What time is it?" and the OS happily reports a time that is one day in the future. The game calculates that more than 24 hours have passed and immediately makes the ability or resource available. This allows players to gain an unfair advantage by instantly skipping mandatory waiting times.
3. Exploiting Business Logic
Any time-based rule can be broken. A happy hour discount becomes available 24/7, a one-day promotional offer never expires, and voting systems can be manipulated.
Compromising Expired Certificates
Modern apps often rely on TLS/SSL certificates to communicate securely with the backend. An attacker can set the user's clock back to make an expired, compromised certificate appear valid. This could trick the app into sending sensitive user data to a malicious server in a Man-in-the-Middle attack.
Breaking TOTP Authentication Mechanisms
If the app uses Time-based One-Time Passwords (e.g., Google Authenticator, Authy, or Bitwarden Authenticator), it is highly dependent on synchronized time. If a user's clock is significantly out of sync, the generated codes will be invalid and the user will be locked out. For the developer and management team, this means a poor user experience and support tickets, creating a denial-of-service problem.
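TOTP's dependence on the clock is visible in how its moving counter is derived (RFC 6238, default 30-second step): the code is computed from floor(unixTime / 30). A skew larger than one step yields a different counter, and therefore a different 6-digit code:

```javascript
// TOTP derives its moving factor from the clock (RFC 6238, 30 s step).
const totpCounter = (unixSeconds, stepSeconds = 30) =>
  Math.floor(unixSeconds / stepSeconds);

const serverSeconds = 1700000000;
const deviceSeconds = serverSeconds - 90; // device clock only 90 s behind

console.log(totpCounter(serverSeconds)); // 56666666
console.log(totpCounter(deviceSeconds)); // 56666663 → codes won't match
```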
Common Fixes Are Not Enough
When faced with time spoofing, developers often consider two straightforward solutions, but both have critical flaws, especially for mobile applications:
1. Relying on Your Own Server Time
The most obvious fix is to have the app always ask a trusted backend server for the correct time. While this sounds secure, it immediately creates problems for users:
It can be hacked. Frida and other instrumentation and reverse-engineering techniques can break the safe delivery of your trusted backend time.
It breaks offline functionality. The moment the user's device goes offline—on a plane, in a subway, or in an area with poor reception—any feature relying on this time check will fail completely.
It adds network latency. Turning every time check into a network request can slow down the app and create a poor user experience.
2. Querying Public NTP Servers
The next thought might be to have the app directly contact a public Network Time Protocol server. This approach is also risky:
It's hard to implement securely: Building a secure NTP client in the app is complex, and mistakes can leave you exposed.
It still requires an internet connection, leaving the offline features unprotected.
It's vulnerable to the same attacks. If not handled with extreme care, the app can fall victim to the very NTP spoofing attacks you're trying to defend against.
The AppiCrypt Solution: A Resilient, Trusted Time Source
After seeing how common fixes fail, AppiCrypt provides you with a robust, multi-layered solution that works reliably whether your user's device is online or offline. It achieves this by creating a secure channel between your backend and the client application, solving the problem at its root.
The Rule of Trusted Timestamping: Replacing Local Trust with Cryptographic Proof
To defeat time spoofing, your app must stop trusting the local device's clock and instead rely on the principles of Trusted Timestamping. This formal process creates trusted time by using an external, secure authority instead of the easily manipulated local clock.
Technically, this works by creating a cryptographic hash of your data and sending it to a Time Stamping Authority. The TSA combines this hash with a precise time from its own verified source and digitally signs the entire package. This creates a tamper-proof token that is immune to local clock changes, making it the ultimate countermeasure to time spoofing.
How AppiCrypt Applies This Principle
While a formal timestamp from a public TSA is too slow for most real-time app needs, AppiCrypt intelligently adapts these same security principles for your specific environment:
Your backend acts as your app's private TSA.
The unique cryptogram generated by the SDK serves a similar purpose to the integrity-proving "hash".
The secure channel, backed by the AppiCrypt on your gateway, functions like the "digital signature" by ensuring the request is authentic.
Once your app receives this verified "true time," your business logic in the app uses it as a secure anchor. It then combines this anchor with the device's unchangeable monotonic clock, allowing it to maintain a precise and secure sense of time, even when your user is offline. This gives you the security of trusted timestamping principles with the speed your app requires.
Provisioning the App with Trusted Server Time Using AppiCrypt as a Timestamping Enabler
1. Secure Time Anchor: When the application is online, an HTTPS request carrying an AppiCrypt cryptogram is used to fetch (or periodically update) the true time and establish it as a secure time anchor. This initial timestamp is trusted and accurate.
2. Secure, Monotonic Clock Tracking: By leveraging the device's internal monotonic clock, which users cannot set backwards, your app measures the time elapsed since the anchor with high precision.
3. Regular Operation: The application—protected by Talsec RASP and AppiCrypt—uses this trusted time for its operations, logs, sensitive features, and transactions.
4. Tamper Detection and Response: If a significant discrepancy between trusted time and device-provided time is detected, the app can immediately flag a potential time spoofing attack and respond accordingly, such as by terminating the app or disabling premium features.
By integrating Talsec RASP and AppiCrypt into their apps, developers are protected through an automated detection-and-response model, removing the need for complex, manual time checks.
Handle App Security with a Single Solution! Check out Talsec's premium offer:
ApkSignatureKiller: How It Works and How Talsec Protects Your Apps
In this article, we will explore how Android protects against app tampering, discussing not only how ApkSignatureKiller works, but also the mechanisms behind it.
Introduction
Ever wondered how your Android phone can tell if that Instagram app you're about to install is the genuine application, or just a sneaky clone repackaged by a hacker with malicious intent? That's where APK signatures come in – they're the digital gatekeepers of the Android app world! Think of them as a high-tech, unforgeable seal of authenticity stamped on every app, verifying its true origin and guaranteeing the code hasn't been illicitly altered since the developer signed off on it. This critical verification happens every time you install or update an app, acting as an invisible shield that ensures the software you're running is legitimate and safe.
How does the Android signature verification work?
Android APK signature verification has two main steps:
1. Signing
When development is complete, the developer signs the app using a private key. This process generates a digital signature of the app's contents and embeds the developer's public key certificate within the APK file.
2. Verification
This process is performed by the Android OS on the device before the installation of an app.
First, the Android package manager calculates a cryptographic hash of the APK's contents.
Next, it extracts the developer's public key certificate from the APK and uses it to decrypt the digital signature. This decryption reveals the original hash of the app as calculated by the developer.
Finally, the hash calculated on the device is compared to the developer's original hash. If they match, it confirms that the APK's contents have not been tampered with since it was signed.
To prevent anyone from bypassing this verification mechanism, Android utilizes several signature schemes. The primary difference between them is how they sign the application and store the resulting data inside the APK:
v1 (JAR Signing): This original scheme individually signed each file within the APK and stored the signature data inside the META-INF/ directory (e.g., MANIFEST.MF, CERT.SF). This method was computationally slow and had a critical flaw: it did not verify the entire APK file. Sections like the ZIP metadata were left unsigned, creating an attack vector where malicious code could be injected into the APK without invalidating the signature.
v2 Scheme: Introduced in Android 7.0, this scheme verifies the entire APK file as a single blob. The signature is stored in a dedicated APK Signing Block, located just before the ZIP Central Directory. This approach is significantly faster and closes the vulnerabilities present in the v1 scheme. However, this scheme did not originally support signing key rotation, meaning a developer could not change their signing key without breaking updates for their app.
v3 Scheme: Introduced in Android 9.0, this scheme is very similar to v2 but adds support for signing key rotation. It includes an attribute in the APK Signing Block that holds a history of signing certificates. This allows developers to change their app's signing key while enabling the app to be verified using either the new or older keys, ensuring seamless updates.
v4 Scheme: This scheme was introduced to support streaming installs, allowing for parts of an app to be used before the entire APK is downloaded. For v4 signing, a Merkle hash tree of the APK's contents is calculated, and its root hash is stored in a separate file named .apk.idsig. This allows for the incremental verification of individual blocks of the file as they are streamed to the device.
Vulnerabilities related to Android Signatures
The methods employed by ApkSignatureKiller are modern versions of critical vulnerabilities in the Android signature verification process.
1. The famous Master-Key vulnerability: This critical vulnerability exploited a discrepancy in how Android handled ZIP archives. An attacker could include two files with the same name within an APK. The package installer would process one file when verifying the signature, while the Dalvik/ART runtime would execute the other, malicious file. This allowed an attacker to inject and execute arbitrary code within a validly signed application, effectively bypassing the v1 signature check.
2. The Janus vulnerability: This vulnerability specifically targeted the v1 signature scheme. An attacker could prepend a malicious DEX file to the beginning of a legitimate, signed APK file. Because the v1 signature verifier would check the integrity of the ZIP entries but ignore the header of the file, it would still validate the application as authentic. However, the Android runtime would see the malicious DEX file at the start and execute its code, effectively running a malicious payload while the app appeared legitimate. This is similar to the methods used by the ApkSignatureKiller application to bypass the v1 signature scheme.
The ApkSignatureKiller
The infamous ApkSignatureKiller application actively bypasses Android's entire signature verification system. This capability is frequently exploited to tamper with critical applications, such as banking apps, which are then used on a device, creating a major security headache for developers who rely on signature checks to ensure app integrity.
Let us take as an example an Android app signed with the v1 signature scheme, named victimApp2:
We can also check which signature schemes are present in a fully built APK using the apksigner tool. This shows that the APK has been signed using only the v1 signature scheme.
Now let's try to install the APK on an Android emulator running a version below Android 7.0 to confirm that an APK with just a v1 signature installs successfully. With the v1 scheme in place, we are easily able to install the APK on the device.
Now let's try the same with the unsigned APK on the same device. This tells us that the APK is unsigned and cannot be verified, and the device refuses to install the unsigned application.
Here enters ApkSignatureKiller, which changes everything. Let's hook and modify our signed, installed application using ApkSignatureKiller. For this, we have to push our signed application into a directory accessible to the ApkSignatureKiller app, such as the external storage directory.
Let's open our evil app
Now just choose the signed app from the external directory and then press the Hook button.
Press Install and, guess what, we are able to install the app with the signature verification mechanism killed.
We can also verify if our new app has been actually modified or not.
This shows that an unsigned, tampered application can be installed on a device without being detected by Android's security mechanisms. It proves that by using this method, it's possible to alter an app, recompile it, and install it without a valid signature, leaving the app's contents completely vulnerable.
To keep this article straightforward and easy to understand, I have used a simple demo application and focused on bypassing only the v1 signature scheme. However, it is crucial to understand that modern versions of ApkSignatureKiller and similar tools are capable of bypassing the much more secure signature schemes. This makes them some of the most dangerous tools in the hands of attackers today.
Working of ApkSignatureKiller
But some of you with inquisitive minds might wonder: How does this tool actually work? How is it able to bypass the signature verification on an Android device? 🙁
To bypass the signature check, the tool doesn't just remove the signature; it employs a more deceptive technique. It injects its own malicious code into the target application by hooking the specific part of the Android framework responsible for verifying an app's integrity. This injected code then intercepts the verification process and falsely reports to the system that the APK is still securely signed, even though its original signature has been stripped and its contents have been altered.
These are the methods it uses to hook the application and strip its signature check:
It hooks classes such as PackageManager or ContextImpl, which are typically reached via reflection during the signature verification of an Android application.
It then uses this hook to substitute a replacement that performs no signature verification, so the Android signature verifier will always report the APK as verified, no matter how heavily it has been tampered with.
It hooks the code that fetches the signature for verification and modifies the method to always return true (verified).
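The net effect of that last hook can be illustrated in plain JavaScript; a conceptual sketch, not the tool's actual code:

```javascript
// Before the hook: a verifier that actually checks the signature
const verifier = {
  verifySignature(apk) {
    return apk.signature === 'valid-signature';
  },
};

const tampered = { signature: 'stripped' };
console.log(verifier.verifySignature(tampered)); // false: rejected

// The "hook": overwrite the method so it always reports success
verifier.verifySignature = () => true;

console.log(verifier.verifySignature(tampered)); // true: tampered APK passes
```

Once the check itself is owned by the attacker, its return value carries no security meaning, which is why in-app integrity checks (rather than relying on the system's) matter.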
Why is it so crucial?
Think of your Android device as a car and its signature verification system as a sophisticated car alarm. As long as the alarm is active, a thief cannot steal the stereo without setting it off.
However, a tool like ApkSignatureKiller acts as a master key that doesn't just break the window but cleverly disables the entire alarm system. Once the alarm is off, the thief can freely open the doors, steal the stereo, or even swap out the engine parts without anyone knowing.
This is precisely why signature verification is so crucial for an Android app. Without it, anyone could tamper with an app's contents, recompile it, and distribute their own modified version. Imagine the consequences: someone could unlock Spotify Premium for free, use cheats in a game like Clash of Clans, or, far more dangerously, bypass security measures in banking applications to authorize fraudulent transactions.
ApkSignatureKiller threatens the very foundation of Android's application security model. Despite continuous efforts to harden the platform, this tool often succeeds in its malicious goals.
But luckily, as developers, we are not helpless. There are concrete steps we can take to safeguard our applications against such attacks:
How to prevent tampering attacks with Talsec RASP+? [2 Months Free Trial!]
Talsec's RASP+ (Runtime App Self-Protection) offers a multi-layered defense to shield mobile apps from tampering and malicious tools like ApkSignatureKiller. These tools are built to bypass Android's fundamental security measure: verifying an app's digital signature. By disabling this check, an attacker can modify a legitimate app, inject malicious code, and then repackage it.
RASP protects against that with:
Signature and Certificate Verification: Unlike system-level checks that can be intercepted or disabled, RASP operates within the application itself. It continuously validates the app’s signature and signing certificate hash, making tampering significantly more difficult.
Code and Resource Integrity Checks: RASP doesn’t stop at verifying signatures—it actively monitors the application’s code and resources. If the app has been decompiled, modified, or augmented with malicious code, RASP flags and responds to these unauthorized changes.
Real-time Threat Response: When RASP detects a threat, it triggers a callback function—giving developers full control over how their app responds. Once integrated, you can implement callbacks such as onRootDetected, onDebuggerDetected, onEmulatorDetected, and more. These powerful tools let you tailor defensive actions: show custom alerts, limit app functionality, or shut down the app entirely if the environment is compromised.
The onTamperDetected callback plays a crucial role in identifying and responding to signature verification issues within the application. This powerful layer of security is easy to integrate—any developer can add it in just minutes. With Runtime Application Self-Protection (RASP), strengthening your app's defenses has never been simpler.
Try it for free for 2 months and experience how effortlessly you can boost your app's protection; request the trial to get started.
written by Akshit Singh
Keynote: 20 Minutes to Banking-Grade Security with Mateusz Wojtczak (LeanCode)
The Mobile App Security Conference in Prague was a two-day, invite-only event on fraud, malware, and API abuse in modern mobile apps, held at Chateau St. Havel on November 3–4, 2025, and hosted by Talsec, freeRASP, and partners. It brought together leading experts and practitioners to strengthen the mobile AppSec community, connect engineers with attackers and defenders, and share practical techniques for high‑stakes sectors like banking, fintech, and e‑government.
Mateusz Wojtczak, Head of the Flutter department at LeanCode, delivered a keynote sharing the company's experience and developer-centric perspective on using Flutter in highly regulated and secure environments, such as banking and fintech. LeanCode is a leading software house in the "Flutter bubble" and a member of the Polish Banking Association, involved in building large-scale Flutter apps, including banking and fintech applications.
import { getCryptogram, setMagicFile } from "appicrypt-web";
// Once, on page load
await setMagicFile(await (await fetch("/tscfg.txt")).text());
// Per request — attach a cryptogram as a header
const body = JSON.stringify({ action: "checkout" });
// Nonce can be any unique bytes; hashing the request body lets the backend verify it matches
const nonce = new Uint8Array(await crypto.subtle.digest("SHA-256", new TextEncoder().encode(body)));
const cryptogram = await getCryptogram(nonce);
fetch("/api/checkout", {
method: "POST",
headers: { appicrypt: cryptogram },
body,
});
import { Filesystem, Directory } from '@capacitor/filesystem';
const detectJailbreakDIY = async () => {
// A list of common files found on Jailbroken iOS devices
const jailbreakPaths = [
'/Applications/Cydia.app',
'/Applications/RockApp.app',
'/Applications/Icy.app',
'/usr/sbin/sshd',
'/usr/bin/sshd',
'/usr/libexec/sftp-server',
'/Applications/WinterBoard.app',
'/Applications/SBSettings.app',
'/private/var/lib/apt/',
'/Library/MobileSubstrate/MobileSubstrate.dylib',
'/bin/bash',
];
for (const path of jailbreakPaths) {
try {
// Logic adapted for Capacitor Filesystem
const status = await Filesystem.stat({
path: path,
// System paths are usually accessed directly on iOS
});
if (status) {
console.warn(`Jailbreak artifact found: ${path}`);
return true;
}
} catch (error) {
// Access errors might happen due to permissions or non-existence, ignore them
}
}
// Additional Check: Can we write to a system folder? (Sandbox Escape)
try {
const testPath = '/private/jailbreak_test.txt';
// Attempt to write outside the standard app sandbox
await Filesystem.writeFile({
path: testPath,
data: 'test',
encoding: 'utf8'
});
// If successful, clean up
await Filesystem.deleteFile({ path: testPath });
console.warn("Sandbox escape detected! (Write access to /private)");
return true;
} catch (e) {
// Failure to write is good (Normal behavior)
}
return false;
};
import { startFreeRASP } from 'capacitor-freerasp';
// reactions for detected threats
const actions = {
privilegedAccess: () => {
console.log('privilegedAccess');
},
}
const config = ...
// returns `true` if freeRASP starts successfully; you can ignore this value
const started = await startFreeRASP(config, actions);
import TalsecRuntime
let config = TalsecConfig(
appBundleIds: ["YOUR_APP_BUNDLE_ID"],
appTeamId: "YOUR TEAM ID",
watcherMailAddress: "WATCHER EMAIL ADDRESS",
isProd: true
)
extension SecurityThreatCenter: SecurityThreatHandler {
public func threatDetected(_ securityThreat: TalsecRuntime.SecurityThreat) {
print("Found incident: \(securityThreat.rawValue)")
}
}
public enum SecurityThreat: String, Codable, CaseIterable, Equatable {
// ... other cases ...
case jailbreak = "privilegedAccess"
}
let jailbreakStatus = IOSSecuritySuite.amIJailbrokenWithFailMessage()
if jailbreakStatus.jailbroken {
print("This device is jailbroken")
print("Because: \(jailbreakStatus.failMessage)")
} else {
print("This device is not jailbroken")
}
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
There is a prevailing skepticism that cross-platform technology is "implicitly much less trusted" compared to native development. When working with clients, consultants, or performing code audits, LeanCode frequently encounters objections such as, "Flutter is not native, so Flutter is not secure."
This misconception stems from misunderstanding what Flutter actually is. Flutter is an open-source framework for building "beautiful natively compiled multiplatform apps from a single codebase." A Flutter app is not a "glorified web view" or interpreted JavaScript code. Instead, it compiles Dart code into machine code, just as an Android app is compiled from Kotlin or an iOS app from Swift.
Key security-relevant features of the Dart language include:
Compilation: Dart compiles to native machine code, improving performance and making it harder to reverse engineer than bytecode or intermediate language.
Type Safety and Concurrency: Dart is type-safe, null-safe, and garbage-collected. Its simple concurrency model only allows passing messages across isolates without shared memory, helping developers avoid common vulnerabilities related to shared memory.
No Reflection: Dart lacks reflection, preventing access to runtime type information and reducing the risk of certain coding mistakes that could lead to vulnerabilities.
Real-World Banking Experience with Flutter
LeanCode began working with Flutter in 2021 on a project for the Polish branch of Crédit Agricole. The bank boldly chose to go "full Flutter" for their new mobile banking app, even though the platform was less mature at the time. The resulting application, now among the top-ranked mobile banking apps in Poland, had to meet strict non-functional requirements, particularly around security.
Experience from this and other banking projects, including one for Virgin Money in the UK, led to several key security observations:
Code and Runtime Integrity: Dart's Ahead-of-Time (AOT) compilation and obfuscation proved very effective. Penetration testing revealed no issues with code injection or Dart runtime vulnerabilities.
Networking Stack Security: Flutter uses a separate networking stack from Dart, which initially caused concern for penetration testers. However, teams successfully implemented SSL pinning (domain, root, and intermediate certificates) and public key pinning. The HTTP client API allows developers to easily switch to native HTTP stacks like OkHttp or iOS clients, or use existing core HTTP client libraries from the bank.
Leveraging Native APIs: Secure Flutter apps depend heavily on native functionality. Plugins such as flutter_secure_storage utilize native key stores, encrypted shared preferences on Android, and the iOS keychain. Biometrics, secure enclave encryption, and third-party SDKs for fraud detection or KYC all rely on the same native APIs as traditional apps. Developers must understand the native implementation behind each package.
RASP Integration: Multiple Runtime Application Self-Protection (RASP) providers integrated with Flutter. While most RASP features are native, some providers initially flagged false positives. Talsec, however, supports Flutter as a "first-class citizen," offering a RASP solution aligned with Flutter’s security considerations.
The Core Insight: Security Depends on the Code, Not the Technology
Code audits often reveal vulnerabilities unrelated to Flutter itself. Common issues include:
Insecure Data Storage: Storing sensitive data in public files or unencrypted shared preferences.
Misuse of Biometrics: Using biometrics only for local authentication without encrypting personal data. This can weaken security if apps store passwords to mimic protection.
Logging and Credentials: Logging sensitive information or storing passwords insecurely.
Ultimately, security depends on how code is written, not on the technology used. Applications can be secure with JavaScript or web views and insecure with native apps. Every line of code must be evaluated for security, as tests only reveal weaknesses—they do not create security.
Thank you Mateusz and LeanCode team for sharing your experience with Flutter in high-security environments. Your insights demonstrate that cross-platform development can achieve the same level of security as native apps when code is written thoughtfully.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
- An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
- A backend defense system (Android & iOS) that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
- Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Android Malware Detection SDK for Your App: Detect Risky & Suspicious Apps and Known Malware
Talsec Malware Detection SDK scans Android devices for RATs, keyloggers & SMS forwarders without QUERY_ALL_PACKAGES. Play Store compliant, offline-capable.
Historically, scanning device apps meant using the QUERY_ALL_PACKAGES permission, which Google has heavily restricted since Android 11 - apps using it for general security scanning are routinely rejected from the Play Store. Talsec's Malware Detection SDK solves this with a targeted approach that stays fully compliant with Google Play policies.
How the Detection Works
The malware detection diagram below shows the multi-layered detection pipeline of the Talsec Malware Detection SDK that every app on the device goes through. The App Reputation API is the only online step; all other stages run entirely on-device. This means the detection can operate in a fully offline mode when that better suits the use-case or compliance requirements. Because there is no one-size-fits-all solution, Talsec Malware Detection is designed to be configurable so every team gets the best fit for their needs.
1. On-Device Blocklist
The system first checks a customizable local list of known-bad hashes and package names. Any match is flagged immediately without further processing.
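A minimal sketch of such a blocklist check - the data class, names, and entries below are invented for illustration and are not Talsec's actual API:

```kotlin
// Hypothetical on-device blocklist: flag an app when its package name or
// APK SHA-256 hash matches a locally stored list of known-bad entries.
data class InstalledApp(val packageName: String, val apkSha256: String)

class Blocklist(
    private val badPackages: Set<String>,
    private val badHashes: Set<String>,
) {
    fun isFlagged(app: InstalledApp): Boolean =
        app.packageName in badPackages || app.apkSha256 in badHashes
}
```

Because the list is local, a match can be flagged immediately with no network round trip, and the list itself can be updated with the app or via configuration.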
Note: The defaults, called the "High Security Configuration," work for most scenarios. Need something tailored? Our experts can suggest a configuration to fit your requirements.
Did you know? Established stores like Google Play, Samsung Galaxy Store, and Huawei AppGallery all scan apps for known malware before making them available for download - so apps from these sources arrive pre-vetted.
Attackers rarely try to exploit the Android OS directly anymore. Instead, they build specialized malicious apps designed to interfere with your app. These threats are often custom-built and rapidly modified, so universal malware databases struggle to keep up. Talsec's Malware Detection focuses on catching apps specifically engineered for fraud and abuse. The detection signals are built around a set of high-risk permissions that legitimate apps almost never request in combination:
READ_SMS, RECEIVE_SMS, RECEIVE_WAP_PUSH: The core fingerprint of SMS Forwarders and OTP Stealers - trojans that intercept incoming messages to bypass two-factor authentication.
BIND_ACCESSIBILITY_SERVICE: Abused by Overlay Trojans to draw fake login screens over banking apps, and by Keyloggers & Surveillance Spyware to capture keystrokes silently.
Other threat types such as Call-Intercepting Trojans, Clipper Malware, and app Copycats are also detectable by the detection engine.
Want to see these attacks in action? Check out our demos below.
Detection by Requested or Granted Permissions
Talsec evaluates permissions at two levels - requested (declared in the app's manifest) and granted (actively approved by the user) - with the detection scope configurable to match your threat model. Flagging toxic combinations like BIND_ACCESSIBILITY_SERVICE + READ_SMS + BIND_DEVICE_ADMIN at the manifest level means Talsec can catch brand-new, zero-day malware that signature-based antivirus databases have never seen.
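The idea of a manifest-level "toxic combination" check can be sketched as follows - the combinations and function names here are hypothetical examples, not Talsec's internal rule set:

```kotlin
// Hypothetical check: flag combinations of high-risk permissions that
// legitimate apps almost never request together in one manifest.
val TOXIC_COMBOS: List<Set<String>> = listOf(
    setOf(
        "android.permission.BIND_ACCESSIBILITY_SERVICE",
        "android.permission.READ_SMS",
        "android.permission.BIND_DEVICE_ADMIN",
    ),
    setOf(
        "android.permission.RECEIVE_SMS",
        "android.permission.RECEIVE_WAP_PUSH",
    ),
)

// `requested` is the set of permissions declared in the app's manifest.
fun matchesToxicCombo(requested: Set<String>): Boolean =
    TOXIC_COMBOS.any { combo -> requested.containsAll(combo) }
```

Because this evaluates declared intent rather than known signatures, a brand-new sample with the same permission fingerprint is caught even if no database has ever seen it.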
Active System Settings as an Attack Vector
Scanning app manifests is only half the picture. The other half is checking what is actually active at the OS level. An app that has been granted elevated system privileges by the user - even a legitimate-looking one - is immediately more dangerous than one that merely declares permissions in its manifest. Talsec monitors both.
Android deliberately gives users more control than iOS. The problem is that malware actively social-engineers users into enabling high-risk settings they don't fully understand. The following Settings areas are the primary targets:
Accessibility Services (Settings > Accessibility > Installed Services): Any non-system app with an active accessibility service can read all on-screen UI content, simulate taps, and capture keystrokes - without any further permission dialog at runtime. Users are routinely tricked into enabling these for fake "performance optimizers" or "battery savers."
Keyboard (IME) apps with Accessibility enabled: A third-party keyboard that also holds an active accessibility service is a textbook keylogger setup. The keyboard sees every character typed; the accessibility service gives it the ability to observe and interact with the surrounding UI.
Check out Demos
Keyloggers Detection Demo:
Remote Access Tools Detection Demo:
SMS Forwarders Detection Demo:
Filtering by Installation Source
Scanning every app on a device generates noise - and the most significant source of that noise isn't random apps, it's system and OEM pre-installed apps. Consider what comes pre-installed on a typical Android device: screen recorders, diagnostic tools, manufacturer accessibility services, OEM camera or gallery apps. These apps legitimately hold many of the exact high-risk permissions we're scanning for (RECORD_AUDIO, BIND_ACCESSIBILITY_SERVICE, READ_SMS on carrier builds, etc.). Scanning them would produce a flood of hits that are technically correct detections but are expected and safe on that device - not actionable false positives, but noise that slows down the scanner and overwhelms any response logic.
The solution is to skip apps that were installed by the OS or a trusted OEM store. Talsec allows you to configure a whitelist of trusted installation sources. You can set the SDK to ignore apps installed from verified stores (like com.android.vending for Google Play, com.sec.android.app.samsungapps for Samsung, Huawei AppGallery, etc.) and to exclude system-flagged apps entirely.
With those filtered out, the scanner focuses exclusively on user-installed and sideloaded apps - the realistic attack surface - where dangerous permission combinations are genuinely suspicious.
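The filtering logic itself is simple; a sketch with assumed store package names (on Android, the installer would come from something like `PackageManager.getInstallSourceInfo()`, and the helper below is hypothetical):

```kotlin
// Hypothetical filter: skip system apps and apps installed from trusted
// stores, so scanning focuses on sideloaded / unknown-source apps.
val TRUSTED_INSTALLERS = setOf(
    "com.android.vending",             // Google Play
    "com.sec.android.app.samsungapps", // Samsung Galaxy Store
    "com.huawei.appmarket",            // Huawei AppGallery (assumed name)
)

// `installer` is null when the app was sideloaded (e.g. via adb or a browser).
fun shouldScan(installer: String?, isSystemApp: Boolean): Boolean =
    !isSystemApp && installer !in TRUSTED_INSTALLERS
```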
Performance and Privacy
Security tools are only useful if they don't degrade the user experience. Talsec was built with developer ergonomics and runtime performance as priorities:
100% Offline Processing (Optional): In its default configuration, all hash checks, package name blocklists, and permission evaluation happen directly on the device. No app inventory data leaves the phone.
Asynchronous Scanning: Scans run in the background asynchronously and never block the main thread or freeze the UI.
Different applications have different risk profiles. Talsec lets you combine on-device offline scanning with an optional, online App Reputation API to balance security and user friction.
1. Offline Scanning
A purely on-device, privacy-preserving layer that uses customizable hash/package blocklists and permission-based risk scoring to catch zero-days locally.
Handling Detections
When Talsec detects a risky app, how should your application respond? Two primary strategies:
Strategy A: Silent Business Logic (No User Friction) Process the list of suspicious apps silently in the background. Use this data on your backend for:
Dynamic Feature Blocking: Allow the user to view their account balance, but temporarily restrict sensitive actions (like outward wire transfers) until the device is clean.
Device Risk Scoring: Feed the data into your existing anti-fraud models.
Threat Intelligence: Monitor emerging fraud patterns targeting your specific user base.
Strategy B: User Involvement (Warning UI) Display a malware warning screen to alert the user why their app access is restricted.
Suggest that the user uninstall the malicious app directly from within your UI.
Allow the user to Trust/Whitelist the app locally if they know it is legitimate.
As mentioned for the High-Security configuration, when the SDK flags an unknown sideloaded app, we recommend handling it through a localized "User Trust" flow:
1. Detection
The SDK flags the sideloaded app because it is unknown to the Reputation API and requests sensitive permissions.
Example of Difficult Edge Case: Legitimate Sideloaded Apps
Consider a concrete example: a user legitimately downloads Kaspersky antivirus directly from the vendor's website via their browser. To the OS, this app is sideloaded and it requests high-level permissions. Under a High-Security configuration, Talsec will correctly flag it as suspicious - this is exactly the scenario the User Trust flow described above is designed for.
Why not just whitelist the package name (com.kms.free) globally? Because attackers frequently spoof package names of legitimate security apps to bypass detection. A global whitelist is a security hole. The per-device User Trust approach avoids this.
Why not send every app to the online API instead of scanning locally? The App Reputation API is effective against known threats, but it cannot catch truly novel (zero-day) malware that hasn't been cataloged yet. On-device heuristic checks (permission combos, installation source analysis) catch unknown malware the cloud database has never seen. Additionally, sending every app to the API adds network latency and unnecessary data transfer. The hybrid local-first, cloud-optional approach gives the best coverage.
How should I communicate detection results to my backend? The SDK provides structured callbacks your app can hook into. In the silent strategy, send threat metadata (detection type, risk level, number of flagged apps) to your backend as part of a device risk score. Use it to gate sensitive operations - transfers, password changes, session elevation - without user-facing alerts. In the user-facing strategy, let the user act on the warning directly in-app, and log their decision (uninstall / trust) back to your backend for audit purposes.
Can I log detected package names to my backend? Be careful here. Google Play treats the list of installed apps as personal and sensitive user data. Under Google Play's developer policies, transmitting app inventory data (including package names) to a remote server without prominent user disclosure and explicit consent is a policy violation. This can lead to app removal or developer account termination. Google also holds host app developers responsible for data collected by any embedded SDKs.
The recommended approach: log only anonymized or hashed threat indicators and detection verdicts (e.g., "sideloaded app with high-risk permission combo detected, risk level: high") rather than raw package names. If your security team needs raw package names for incident investigation, implement explicit user consent and a prominent privacy disclosure in your app before collecting that data.
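One way to implement that recommendation is to report a salted, truncated hash instead of the raw package name - a sketch with invented field names, not a prescribed payload format:

```kotlin
import java.security.MessageDigest

// Sketch: build a threat report carrying a non-reversible indicator and a
// verdict, rather than the raw package name, to stay within Play policies.
fun threatReport(packageName: String, riskLevel: String, deviceSalt: String): Map<String, String> {
    val digest = MessageDigest.getInstance("SHA-256")
        .digest((deviceSalt + packageName).toByteArray())
    val hashed = digest.joinToString("") { "%02x".format(it) }
    return mapOf(
        "indicator" to hashed.take(16), // truncated hash, useful for correlation only
        "risk" to riskLevel,
    )
}
```

With a per-device salt, the same malicious package still correlates across sessions on one device, but the backend never learns the user's app inventory.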
Conclusion
By combining installation source filtering, zero-day permission analysis, and optional live reputation databases, Talsec provides targeted and effective malware detection that works within Google Play's policy constraints.
Obfuscation of Mobile Apps
This article will delve into the concept of obfuscation, explore its different types, and articulate Talsec's philosophy on its application. We believe in a balanced and pragmatic approach – one that prioritizes developer experience, app performance, and the real-world exploitability of attack techniques, while minimizing potential drawbacks and considering cost efficiency – to ensure both security and the smooth business operation of your mobile applications.
The primary goal of mobile app obfuscation is to render the application's code more difficult for an attacker to understand after it has been decompiled. Think of it as scrambling the blueprint of your application, making it significantly harder for someone to decipher its structure, logic, and sensitive information. While obfuscation doesn't make your application completely impenetrable – a determined attacker with enough time and resources might eventually succeed – it drastically increases the effort and expertise required, often making the attack economically unviable.
It's crucial to understand that obfuscation primarily focuses on hindering static analysis – the examination, understanding, or tampering of the application's code without running it. Runtime attacks, where malicious actors attempt to manipulate the application while it's running, require a different set of defenses, which is where RASP technologies like those offered by Talsec come into play.
Obfuscation and RASP are complementary security layers, working in tandem to provide comprehensive protection.
Deconstructing Obfuscation: Three Key Types
The concept of obfuscation can be broadly categorized into three distinct types, each targeting different aspects of the application's code:
A) Name Obfuscation for Classes, Methods, and Fields
This type of obfuscation focuses on renaming the classes, interfaces, methods, and fields within the application's code to meaningless and often short identifiers. Instead of descriptive names like UserManager, authenticateUser, or userPassword, these elements might be renamed to something like a, b, or c.
Key Concepts
Renaming: The core mechanism involves replacing meaningful names with arbitrary strings.
Reduced Readability: This significantly hinders an attacker's ability to understand the purpose and functionality of different code components simply by examining their names. It breaks the semantic link between the code and its intended behavior.
Limited Complexity: Name obfuscation is generally the least complex type of obfuscation to implement and has minimal impact on the application's performance or stability.
Example
Consider a class responsible for handling user sessions:
After class name obfuscation, this might become:
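As a hypothetical Kotlin illustration of both states - the class and member names are invented for this example:

```kotlin
// Before obfuscation: descriptive names reveal intent.
class UserSessionManager {
    private var sessionToken: String? = null
    fun startSession(token: String) { sessionToken = token }
    fun isLoggedIn(): Boolean = sessionToken != null
}

// After name obfuscation (what a decompiler would show): same logic, no clues.
class a {
    private var b: String? = null
    fun c(d: String) { b = d }
    fun e(): Boolean = b != null
}
```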
While the underlying logic remains the same, the renamed elements provide no clues to an attacker about the class's purpose or the functionality of its methods and fields.
Talsec offers a feature that verifies this basic obfuscation technique has been applied and triggers a security threat response if it was skipped at build time.
B) String Obfuscation
String obfuscation focuses on concealing string literals embedded within the application's code. These strings can often reveal sensitive information, such as API keys, certificates, URLs, error messages, or even business logic. By obfuscating these strings, you prevent attackers from easily extracting valuable insights or identifying critical parts of your application.
Key Concepts
Encoding and Encryption: String obfuscation typically involves encoding or encrypting the string literals within the application.
Runtime Decoding/Decryption: The original strings are reconstructed at runtime, only when they are actually needed by the application.
Increased Analysis Difficulty: Attackers cannot simply search for specific keywords within the decompiled code to uncover sensitive information. They need to understand the obfuscation algorithm and potentially reverse-engineer the decoding/decryption process.
Example
Consider the following code snippet containing an API key:
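A hypothetical Kotlin sketch using simple Base64 encoding - the key and URL values are invented, and Base64 is encoding rather than encryption, so it only raises the bar; real string obfuscators typically use per-string encryption:

```kotlin
import java.util.Base64

// The literals below are Base64-encoded; the plain strings never appear
// in the compiled binary and are reconstructed only at runtime.
object ApiConfig {
    private const val K = "c2stZGVtby0xMjM0NTY3OA=="           // "sk-demo-12345678"
    private const val U = "aHR0cHM6Ly9hcGkuZXhhbXBsZS5jb20="   // "https://api.example.com"

    private fun d(s: String): String = String(Base64.getDecoder().decode(s))

    val apiKey: String get() = d(K)
    val baseUrl: String get() = d(U)
}
```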
An attacker examining the decompiled code would see seemingly random strings, requiring them to identify and reverse the Base64 decoding to uncover the actual API key and URL. More sophisticated techniques involving encryption would further complicate this process. Talsec provides a feature to address this need with high level data protection.
C) Control-Flow Obfuscation
Control-flow obfuscation aims to make the application's control flow – the order in which instructions are executed – more complex and difficult to follow. This is achieved by introducing artificial complexity, such as:
Key Concepts
Opaque Predicates: Inserting conditional statements whose outcome is always known at runtime but is difficult for an attacker to determine statically. This creates "dead code" paths that complicate analysis.
Bogus Code Insertion: Injecting code that has no functional impact on the application's behavior but serves to confuse and mislead attackers.
Branching and Jumps: Replacing straightforward sequential execution with a web of conditional and unconditional jumps, making it harder to trace the logical flow.
Control-flow obfuscation might transform even a simple conditional into a convoluted structure involving opaque predicates and unnecessary jumps, making the logic far harder to follow.
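A hypothetical Kotlin sketch of what such a transformation might produce - the function and the predicate are invented for illustration:

```kotlin
// Before: straightforward conditional logic.
fun canPurchaseBefore(age: Int): Boolean = age >= 18

// After (hypothetical): an opaque predicate guards the real logic.
// n * (n + 1) is the product of consecutive integers, so it is always even -
// true at runtime, but not obvious to a static analyzer. The else branch is
// dead code inserted purely to mislead.
fun canPurchaseAfter(age: Int): Boolean {
    val n = age + 31
    return if ((n * (n + 1)) % 2 == 0) {
        age >= 18
    } else {
        age < 0 // bogus branch, never executed
    }
}
```

Both functions behave identically; a real control-flow obfuscator would apply many such transformations automatically across the whole module.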
Warning: Code Packing and Encryption are Unsuitable for Modern Apps
Code packing and app binary encryption were once popular for protecting app binaries from reverse engineering, typically compressing executables with a runtime unpacking routine.
Today, these techniques are no longer commonly used and may be restricted by app stores. Apple requires disclosures for encryption use, while Google Play flags suspicious packing via Play Protect.
Talsec's Perspective: A Pragmatic Approach to Obfuscation
At Talsec, we firmly believe that a layered security approach is the most effective way to protect mobile applications. Obfuscation is a crucial component of this strategy, acting as a vital deterrent against static analysis. However, we also recognize the trade-offs associated with different obfuscation techniques.
Our Stance on Obfuscation Types
Class Name Obfuscation and String Obfuscation: Must-Haves for Sensitive Apps: We consider both class name and string obfuscation as essential baseline security measures for any application handling sensitive data or implementing critical business logic. The relatively low overhead and significant increase in analysis difficulty make them highly valuable in hindering casual attackers and raising the cost for more sophisticated ones. Implementing these techniques should be a standard practice in your mobile app development lifecycle.
Control-Flow Obfuscation: Reserved for Algorithm Protection: While control-flow obfuscation can offer a higher degree of protection against reverse engineering of specific algorithms, we believe its application should be carefully considered and generally reserved for scenarios where the application's core algorithm itself is a significant intellectual property asset.
The Challenges of Control-Flow Obfuscation
We acknowledge that control-flow obfuscation can introduce several complexities and potential issues:
Increased Integration Complexity: Integrating and configuring control-flow obfuscation tools can be more challenging compared to class and string obfuscation.
Potential for Non-Deterministic Bugs: The transformations applied by control-flow obfuscation can sometimes introduce subtle and hard-to-debug issues that may not manifest consistently.
Performance Impact: The added complexity in the control flow can potentially lead to performance overhead, impacting the application's responsiveness and battery consumption.
Our Recommendation for Algorithm Protection
If your application's core algorithm is a critical asset that requires a higher level of protection than class and string obfuscation can provide, we recommend a more targeted approach:
Isolate Sensitive Code: Move the algorithm's implementation to code written in a lower-level language like C or C++.
Separate Obfuscation: Apply robust obfuscation techniques specifically designed for C/C++ code to this isolated module.
Minimize Impact: By isolating the sensitive code, you limit the potential negative impacts of complex obfuscation on the main application codebase, reducing integration challenges, performance concerns, and the risk of introducing widespread bugs.
Talsec's Commitment to Comprehensive Security
While Talsec doesn't directly provide control-flow obfuscation for the main application code due to the aforementioned complexities, we are committed to offering our partners a holistic security solution.
We can recommend and facilitate integration with reliable third-party tools that specialize in obfuscation, enabling you to effectively protect your most critical algorithms without compromising the stability and maintainability of your primary application code.
Conclusion
Obfuscation is an indispensable tool in the mobile app security arsenal. By making your application's code significantly harder to understand, you deter attackers and protect your intellectual property and sensitive data.
Talsec advocates for a pragmatic approach, emphasizing the crucial role of class name and string obfuscation as fundamental security layers for all sensitive applications. While acknowledging the potential benefits of control-flow obfuscation for specific algorithm protection, we recommend a targeted strategy involving isolating sensitive code in C/C++ and applying specialized obfuscation tools to minimize risks and ensure a robust and stable application.
At Talsec, we are dedicated to providing you with the tools and knowledge necessary to build secure and resilient mobile applications. By understanding the nuances of obfuscation and adopting a layered approach that combines Talsec's security products with carefully chosen obfuscation techniques, you can significantly enhance your application's defenses against the ever-evolving threat landscape.
TechTalk: Threshold Cryptography with Jan Kvapil (MUNI)
The Talsec Mobile App Security Conference in Prague was a two-day, invite-only event on fraud, malware, and API abuse in modern mobile apps, held at Chateau St. Havel on November 3–4, 2025, and hosted by Talsec, freeRASP, and partners. It brought together leading experts and practitioners to strengthen the mobile AppSec community, connect engineers with attackers and defenders, and share practical techniques for high‑stakes sectors like banking, fintech, and e‑government.
Jan Kvapil delivered a keynote on threshold cryptography, presenting it as an orthogonal defense mechanism against single-point-of-failure attacks, particularly in high-security applications like mobile banking.
Key Protection in a Compromised World: An Introduction to Threshold Cryptography
Many mobile banking applications rely on a single private key stored on a user’s device to represent digital identity and authorize transactions. The backend verifies each transaction by validating a digital signature generated using this key.
This model fails when an attacker compromises the device. If advanced malware or a zero-day exploit bypasses all defenses—including RASP, anti-malware protections, and even hardware-backed key storage—the attacker gains access to the private key. With that key, fraudulent transactions can be digitally signed in a way that appears fully legitimate to the backend. The risk escalates further if the attacker exfiltrates the key, enabling continued abuse even after the device is secured or replaced.
Threshold Cryptography: Splitting the Secret
Threshold cryptography (TC) mitigates single-device compromise by splitting a private key into multiple cryptographic shares. These shares are distributed across multiple devices, such as a user’s phone, laptop, a trusted partner’s device, or a backend service. The full private key never exists in one place.
When a transaction requires authorization, participating devices perform a coordinated cryptographic protocol to jointly produce a digital signature. The resulting signature remains fully backward-compatible: the backend verifies it exactly as it would a signature generated by a single private key, requiring no changes to existing verification logic.
Defense Against Device Compromise
Threshold cryptography prevents attackers from forging valid signatures when only one device is compromised. Possession of a single key share provides no ability to sign transactions independently, and any attempt to do so fails backend verification.
In multi-party configurations, signing also requires active participation from other trusted devices or parties. This design introduces an additional layer of protection, as suspicious signing requests can be detected and blocked by other participants. A successful attack requires compromising enough devices to meet the configured threshold, such as both devices in a two-out-of-two setup.
Underlying Principles: Shamir Secret Sharing
Threshold cryptography commonly relies on Shamir Secret Sharing. This method splits a secret into n shares such that only a minimum threshold t can reconstruct or use the secret, while any group smaller than t gains no information. The concept is often illustrated geometrically: just as two points are required to define a line and determine its intercept, multiple shares are required to recover or act on the secret value.
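A toy (t, n) Shamir split and reconstruction over a small prime field can be sketched in a few lines. This is illustrative only - real deployments must use a vetted library, a large field, and constant-time arithmetic:

```kotlin
import java.math.BigInteger
import java.security.SecureRandom

val P: BigInteger = BigInteger.valueOf(2147483647) // 2^31 - 1, a prime modulus

// Split `secret` into n shares; any t of them reconstruct it.
fun split(secret: BigInteger, n: Int, t: Int): List<Pair<BigInteger, BigInteger>> {
    val rnd = SecureRandom()
    // Random degree-(t-1) polynomial with the secret as constant term.
    val coeffs = listOf(secret) + List(t - 1) { BigInteger(P.bitLength() - 1, rnd).mod(P) }
    return (1..n).map { i ->
        val x = BigInteger.valueOf(i.toLong())
        val y = coeffs.foldIndexed(BigInteger.ZERO) { k, acc, c ->
            acc.add(c.multiply(x.modPow(BigInteger.valueOf(k.toLong()), P))).mod(P)
        }
        x to y
    }
}

// Lagrange interpolation at x = 0 recovers the constant term (the secret).
fun reconstruct(shares: List<Pair<BigInteger, BigInteger>>): BigInteger {
    var secret = BigInteger.ZERO
    for ((xi, yi) in shares) {
        var num = BigInteger.ONE
        var den = BigInteger.ONE
        for ((xj, _) in shares) {
            if (xj == xi) continue
            num = num.multiply(xj.negate().mod(P)).mod(P)
            den = den.multiply(xi.subtract(xj).mod(P)).mod(P)
        }
        secret = secret.add(yi.multiply(num).multiply(den.modInverse(P))).mod(P)
    }
    return secret
}
```

Note that in true threshold signing, the shares are never brought together like this; the parties run a protocol that produces a signature without reconstructing the key. The sketch only demonstrates the underlying sharing scheme.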
The Current State of Threshold Cryptography
Threshold cryptography has existed since the 1980s and 1990s and reflects long-standing real-world security practices, such as requiring multiple keys to access sensitive assets. Adoption is accelerating due to several factors:
Cryptocurrency: Threshold cryptography protects against irreversible loss of funds caused by lost private keys.
National Authentication Systems: Estonia uses a two-out-of-two RSA signing scheme split between a national authority and the user.
Standardization: NIST is actively soliciting proposals for multi-party threshold cryptography, driving interoperability and broader adoption.
The MeeSign Platform
The MeeSign platform demonstrates threshold cryptography in practice. It is a fully open-source, proof-of-concept implementation available on GitHub and intended for developers and security teams rather than production use.
Key capabilities include:
Cross-Platform Support: Built with Flutter and running on Android, Windows, macOS, and Linux.
Integration Demonstrations: Compatibility with standard interfaces such as PKCS#11 enables use cases like SSH login signing.
Flexible Group Configuration: Users can define participant groups, set signing thresholds (e.g., two out of three), and select cryptographic protocols.
Conclusion
Threshold cryptography provides an effective defense against single-key compromise by requiring multiple devices or parties to participate in cryptographic decisions. When strong private key protection is essential, distributing trust across devices significantly raises the bar for attackers and reduces the impact of individual device compromise.
Thank you, Jan, for showcasing how threshold cryptography can protect digital identities by eliminating single-key failure points. Your work demonstrates how multi-device signing and shared trust significantly raise the bar for attackers, and highlights why modern security architectures must move beyond device-centric key protection models.
How to Detect Developer Mode on Android using Kotlin
Struggling to protect your app from Developer Mode risks? Here’s how to fight back.
Developer Mode (or Developer Options) on Android is a handy tool for testing apps—but in the wrong hands, it’s a gateway for reverse engineering, debugging, and tampering. If attackers run your app in Developer Mode, they can more easily analyze or modify its behavior. Luckily, SDKs can detect and block this threat for you.
What is Developer Mode?
Developer Mode is a built-in Android setting that unlocks advanced debugging features. While useful for legitimate testing, it also lowers the security barrier for attackers. With Developer Options enabled, attackers can:
Attach a debugger and inspect runtime data.
Use USB debugging to inject commands or bypass protections.
Modify system behaviors that apps normally rely on for safety.
Statistics
Our data shows that around 2% of devices have developer mode enabled.
More current global data can be found in our Apps Security Threats Report.
How to Detect Developer Mode?
DIY Coding Guide
You can implement a simple Developer Mode check yourself like this:
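A minimal sketch using the public Settings.Global API (available since Android 4.2 / API 17). Note that a purely local flag check like this is easy for an attacker to bypass by hooking - treat it as a baseline, not a complete defense:

```kotlin
import android.content.Context
import android.provider.Settings

// Reads the system setting that backs Developer Options.
// Returns true when Developer Mode is enabled on the device.
fun isDeveloperModeEnabled(context: Context): Boolean =
    Settings.Global.getInt(
        context.contentResolver,
        Settings.Global.DEVELOPMENT_SETTINGS_ENABLED,
        0
    ) != 0
```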
Use freeRASP (free library by Talsec)
Actively maintained
Comes with 14+ advanced checks like app integrity, Frida and hooking, emulators, debugging, screenshots, etc.
Used by 6000+ apps; the #1 mobile RASP SDK by popularity
Integration Example
Add freeRASP to your project and focus on implementing the following callback:
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
Developer Mode enables advanced debugging features that attackers can exploit to reverse engineer, tamper, or bypass app protections. Detection doesn’t have to be DIY or error-prone—simple flag checks are easily bypassed. Tools like freeRASP provide reliable, continuously updated detection with 14+ advanced checks, letting you respond proactively to threats like Developer Mode, root, Frida, emulators, and more.
👉 If you want Developer Mode detection plus root, jailbreak, emulator, debugging, screenshot, and tampering protection in one free package, start with freeRASP.
How to Detect Screen Capture & Recording using Kotlin
Stop data leaks before they happen. Protect your Android app from unwanted screenshots and recordings.
Screenshots and screen recordings may seem harmless, but in sensitive apps (banking, fintech, healthcare, messaging), they can expose confidential user data. Luckily, modern tools make it possible to detect and respond to these risks effectively.
What is Screen Capture & Recording?
Screen capture/recording refers to users taking screenshots or recording your app’s screen. While capturing doesn’t pose a threat by itself, malicious actors can exploit it to steal sensitive information.
Attackers often use:
Built-in Android screenshots/recording tools
Third-party screen recorder apps
Malware that captures the screen without consent
Don't forget that someone can also simply photograph the phone screen with another device.
Statistics
This problem is not as insignificant as it looks. Our data shows that a screenshot was detected on around 1.5% of devices, and screen recording on around 0.1%.
More current global data can be found in our Apps Security Threats Report.
How to Detect Screen Capture/Recording?
Detecting screen capture is tricky, since Android doesn’t offer a universal system-level API for all cases. DIY methods (like flagging windows with FLAG_SECURE) work only partially and can break user experience.
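For context, here is what those DIY pieces look like - the helper names are invented, and the screenshot callback only exists on Android 14+ (API 34), which is exactly why coverage is partial:

```kotlin
import android.app.Activity
import android.os.Build
import android.view.WindowManager

// Prevention: FLAG_SECURE blacks out this window in screenshots,
// recordings, and the recent-apps switcher.
fun Activity.blockScreenCapture() {
    window.setFlags(
        WindowManager.LayoutParams.FLAG_SECURE,
        WindowManager.LayoutParams.FLAG_SECURE
    )
}

// Detection (Android 14+ only): the system notifies the activity when a
// screenshot of it is taken. Register in onStart, unregister in onStop.
fun Activity.watchForScreenshots(onShot: () -> Unit) {
    if (Build.VERSION.SDK_INT >= 34) {
        registerScreenCaptureCallback(mainExecutor) { onShot() }
    }
}
```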
To provide reliable and strong detection, it's a good idea to use specialised, continuously updated SDKs.
These can provide:
Continuously updated detection techniques
Deeper device checks
A clean API for developers to work with, rather than reinventing the wheel
freeRASP (by Talsec)
Strong screenshot and screen recording detections
Actively maintained
Comes with additional checks like app integrity, Frida and hooking, emulators, debugging, etc.
Integration Example
Add freeRASP to your project and focus on implementing the following callback:
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Protect your Capacitor app from runtime attacks like Frida and Xposed with smart detection.
Imagine your app’s security is a locked vault. What if an attacker could pick the lock and alter its contents while it’s actively being used? That is exactly what a hooking attack does.
This runtime threat is uniquely dangerous for hybrid frameworks like Capacitor. Because Capacitor relies on a communication bridge to pass data between the web layer (WebView) and Native code (Java/Swift), there are multiple points of entry for attackers to intercept data, manipulate logic, or extract API keys.
What is Hooking?
How to Detect Hooking (Frida) on Flutter
Protect your Flutter app from runtime attacks like Frida and Xposed with smart detection.
Imagine your app's security is a locked vault. What if an attacker could pick the lock and alter its contents while it's actively being used? That, in essence, is what a hooking attack does. This runtime threat is particularly dangerous for Flutter apps, but you can defend against it effectively.
What is Hooking?
Hooking is a technique where attackers use tools like Frida to "intercept" and modify your app's normal operations as they happen. Think of it like a spy intercepting a mail carrier, reading a sensitive message, and even changing it before it's delivered.
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
Next, the system checks where the app was installed from. Apps from trusted sources (Google Play Store, Samsung Galaxy Store, Huawei AppGallery, etc.) receive a trusted badge and skip the deep inspection stages. Apps sideloaded from a browser or unknown source continue down the pipeline.
3. On-Device Detector
Sideloaded apps are scanned locally on the device. The detector analyzes the app's requested permission combinations, looking for high-risk patterns (e.g., an app requesting READ_SMS + BIND_ACCESSIBILITY_SERVICE + BIND_DEVICE_ADMIN).
4. App Reputation API
For a final verdict, the local findings can optionally be sent to a cloud-based App Reputation API. The API cross-references the app against a global threat intelligence database. If the app turns out to be benign (false alarm), it is cleared. If confirmed as a threat (RAT, SMS stealer, overlay trojan), the system triggers a block.
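The staged pipeline above can be sketched in plain Java. This is only an illustrative model, not Talsec's actual implementation: the `ReputationCheck` class, the `Verdict` values, and the in-memory reputation map standing in for the cloud database are all invented for the sketch.

```java
import java.util.Map;
import java.util.Set;

public class ReputationCheck {
    public enum Verdict { TRUSTED, CLEARED, BLOCKED }

    // Hypothetical stand-in for the cloud reputation database:
    // package name -> true if known malware.
    private final Map<String, Boolean> reputationDb;

    public ReputationCheck(Map<String, Boolean> reputationDb) {
        this.reputationDb = reputationDb;
    }

    public Verdict evaluate(String packageName, boolean sideloaded, Set<String> permissions) {
        // Stage 2: apps from trusted stores skip deep inspection.
        if (!sideloaded) return Verdict.TRUSTED;

        // Stage 3: on-device detector flags high-risk permission combinations.
        boolean localHit = permissions.containsAll(
                Set.of("READ_SMS", "BIND_ACCESSIBILITY_SERVICE", "BIND_DEVICE_ADMIN"));
        if (!localHit) return Verdict.CLEARED;

        // Stage 4: cross-reference the local finding against the (mock) database.
        Boolean knownMalware = reputationDb.get(packageName);
        if (Boolean.TRUE.equals(knownMalware)) return Verdict.BLOCKED;

        // Unknown to the database: treated as a false alarm in this sketch.
        return Verdict.CLEARED;
    }
}
```

The key design point is that the expensive cloud lookup only runs for sideloaded apps that already tripped the local detector.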
BIND_DEVICE_ADMIN: Requested by malware that needs persistent device control - used to prevent uninstallation or enforce remote lockout.
REQUEST_INSTALL_PACKAGES: The hallmark of dropper apps that download and silently install additional malicious payloads after the initial infection.
QUERY_ALL_PACKAGES: Used by spyware to enumerate all installed apps and map the device before targeting specific banking or authentication apps.
Default SMS handler: The app registered as the default SMS application has OS-level read and send access to all messages without per-message permission prompts. A malicious app that becomes the default SMS handler can silently forward OTPs as they arrive.
Default phone/dialer app: The app registered to handle outgoing calls can intercept, record, and manipulate call flows without triggering visible permission requests at call time.
Device Administrator apps (Settings > Security > Device Admin Apps): Apps holding device admin rights can lock the screen, enforce password policies, and resist uninstallation. This is the persistence mechanism behind screen-locking ransomware - the user cannot simply uninstall the app without first revoking its admin rights.
Remote access / screen sharing apps: Apps capable of capturing and streaming the device screen (Remote Access Tools, or RATs) can be present on a device without obvious malicious permissions declared in their manifest. Detecting them requires behavioral analysis that goes beyond manifest inspection.
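Putting the indicators above together, a naive local scorer might weight each signal. This is purely an illustrative sketch: the individual weights and the threshold are invented for the example, and real detectors combine far more signals plus behavioral analysis.

```java
import java.util.Set;

public class AppRiskScorer {
    // Weights are invented for illustration only.
    public static int score(Set<String> permissions,
                            boolean isDefaultSmsHandler,
                            boolean isDeviceAdmin,
                            boolean canCaptureScreen) {
        int score = 0;
        if (permissions.contains("REQUEST_INSTALL_PACKAGES")) score += 2; // dropper pattern
        if (permissions.contains("QUERY_ALL_PACKAGES")) score += 1;       // device mapping
        if (permissions.contains("READ_SMS")) score += 2;                 // OTP theft
        if (isDefaultSmsHandler) score += 3;  // silent OTP forwarding
        if (isDeviceAdmin) score += 3;        // ransomware-style persistence
        if (canCaptureScreen) score += 2;     // RAT behavior
        return score;
    }

    public static boolean isHighRisk(int score) {
        return score >= 5; // illustrative threshold
    }
}
```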
Relies solely on Talsec's live App Reputation API. It only flags recognized malware. Near 0% false positives - allows teams to automatically block critical actions when malware is detected without risking false blocks on legitimate users.
3. High-Security Mode
Combines the live database with zero-day permission checks. If an app is unknown to the database but was sideloaded and requests high-risk permissions, it triggers a flag.
Interaction
Your app displays a bottom sheet or dialog to the user: "We detected an app from an unofficial source with potentially dangerous permissions. If you trust this app (e.g., Kaspersky from the official site), you can mark it as safe."
3. Local Whitelisting
If the user confirms, call the SDK method:
This whitelists the app on that specific device without compromising the security of your entire user base.
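Conceptually, per-device whitelisting amounts to the following. Note this is a rough sketch only: the `addToWhitelist` method name and the in-memory set are hypothetical, while the real SDK persists the decision on the device.

```java
import java.util.HashSet;
import java.util.Set;

public class LocalWhitelist {
    // In a real SDK this would be persisted on the device (e.g. encrypted preferences).
    private final Set<String> trustedPackages = new HashSet<>();

    // Hypothetical equivalent of the SDK call made after the user confirms the dialog.
    public void addToWhitelist(String packageName) {
        trustedPackages.add(packageName);
    }

    // Whitelisted apps are skipped on subsequent scans on this device only.
    public boolean shouldFlag(String packageName, boolean detectorHit) {
        return detectorHit && !trustedPackages.contains(packageName);
    }
}
```

Because the set lives only on the confirming user's device, one user's choice never weakens detection for anyone else.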
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
Generic Applicability: This technique is indispensable and straightforward, as it is a complimentary compiler feature that yields a significant security advantage.
Exception Handling Abuse: Using exception handling mechanisms in non-standard ways to alter the control flow.
State Machine Transformation: Converting linear code sections into complex state machines, obscuring the original logic.
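To make the state-machine idea concrete, here is a toy before/after in plain Java (the example function is invented for illustration): a three-step linear computation rewritten as a dispatch loop over numbered states, which is the shape control-flow flattening produces.

```java
public class FlattenedExample {
    // Original linear logic: ((x + 3) * 2) - 1, readable top to bottom.
    public static int linear(int x) {
        int a = x + 3;
        int b = a * 2;
        return b - 1;
    }

    // Same logic as a state machine: a reader can no longer follow the
    // flow linearly and must trace the state transitions instead.
    public static int flattened(int x) {
        int state = 0, acc = 0;
        while (true) {
            switch (state) {
                case 0: acc = x + 3;   state = 2; break; // note the non-obvious ordering
                case 1: return acc - 1;
                case 2: acc = acc * 2; state = 1; break;
            }
        }
    }
}
```

Real obfuscators additionally encrypt the state values and merge unrelated functions into one dispatcher, making the mapping far harder to recover than in this toy.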
Their decline is largely due to widespread misuse by malware and incompatibility with modern app distribution policies.
App Store Review Issues: Aggressive control-flow obfuscation techniques can sometimes be flagged by app store review processes due to the significant code modifications they introduce.
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
Global Threat Rate for Screenshot (source my.talsec.app)
Global Threat Rate for Screen Recording (source my.talsec.app)
Hooking allows malicious actors to inject their own code into your running process using dynamic instrumentation toolkits like Frida. It’s akin to wiretapping a phone line: the attacker sits between the operating system and your app logic, listening to every instruction and modifying them at will.
This grants them the ability to:
Nullify Defenses
Instantly turn off jailbreak checks, SSL pinning, or biometric requirements.
Siphon Secrets
Capture unencrypted API tokens, passwords, or JWTs right from system memory before they are stored or transmitted.
Alter Logic
Rewrite the return values of your functions (e.g., forcing a checkPaymentSuccess() function to always return true).
How common is hooking?
About 0.05% of devices are hooked. If your Capacitor app manages payments, PII (Personally Identifiable Information), or competitive gaming logic, you are a target.
Capacitor developers often face a "sandbox" problem. The JavaScript environment where your Ionic/Angular/React code lives has no visibility into the low-level operating system processes. You cannot ask the browser window if the kernel is being tampered with.
To detect these threats, you must leave the web layer and implement checks within the Native Android/iOS layer.
DIY Coding Guide
Since you cannot detect hooking from index.ts, you would need to build a custom Capacitor Plugin. A common "Do It Yourself" method involves scanning for network ports typically associated with the Frida server.
Here is what that logic looks like in the Java layer of a custom plugin:
Use freeRASP (free library by Talsec)
Instead of building a custom plugin, you can use freeRASP. It provides a multi-layered shield that runs deep in the native code, protecting your Capacitor bridge from the inside out.
Deep Inspection: It monitors for suspicious libraries, memory tampering, dynamic code injection, and process anomalies—not just open ports.
Resilience: Designed to detect when it is being tampered with (anti-tamper checks).
Offline First: Does not require an active internet connection to protect the device.
Battle Tested: Currently protecting over 6,000 apps globally.
Integration Example
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
Hooking is a sophisticated attack vector that renders standard encryption and local storage protections useless by modifying your app's behavior in real-time. freeRASP bridges this gap, offering a free, enterprise-grade security layer that watches for Frida, rooting, and tampering, allowing you to secure your hybrid app with native-level confidence.
👉 Secure your hybrid app today against hooking, rooting, and debugging with the freeRASP Capacitor Plugin.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
This gives attackers the power to:
Disable security measures, such as license checks or in-app purchase validations.
Extract secrets like API keys directly from your app's memory.
Inject malicious code to commit fraud or steal user data.
Attackers often target the native Swift or Kotlin code within a Flutter application to manipulate its behavior.
How common is hooking?
About 0.05% of devices are hooked. That may sound small, but at global scale it still means millions of devices. If your app handles sensitive data, you can’t ignore this risk.
You might be tempted to build your own defenses, like searching for frida-server processes or blocking suspicious network ports. Unfortunately, these simple checks rarely work for long. The developers of hooking frameworks are constantly updating their tools to be stealthier and evade these exact kinds of naive detections.
This creates a high-stakes cat-and-mouse game—one that requires constant vigilance and deep expertise. That's why relying on a specialized security SDK is the safer and more effective choice.
DIY Coding Guide
You can implement simple Frida server detection yourself. Frida commonly uses ports 27042 and 27043.
Use freeRASP (free library by Talsec)
With freeRASP, the hook detection utilizes hundreds of advanced checks, offering robust detection even with bypass scripts applied.
Process-name checks, suspicious open ports, injected or loaded Frida-related libraries, modified memory maps, abnormal function hooks and our secret sauce.
Very strong detections including root and jailbreak detections (Magisk, Dopamine)
Comes with additional detections like app integrity, root/jailbreak, emulators, debugging, screenshots, etc.
Trusted by
Integration Example
Add the SDK to your project and focus on implementing the following callback:
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
Hooking attacks using tools like Frida can intercept and modify app behavior in real time, exposing sensitive data and disabling protections. Detection doesn’t have to be DIY or error-prone—simple port checks are easily bypassed. Tools like freeRASP provide reliable, continuously updated detection with hundreds of advanced checks, letting you respond proactively to runtime threats.
👉 If you want hooking detection plus root, jailbreak, emulator, debugging, screenshot, and tampering protection in one free package, start with freeRASP.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Protect your React Native app from runtime attacks like Frida and Xposed with smart detection.
Imagine your app’s security is a locked vault. What if an attacker could pick the lock and alter its contents while it’s actively being used? That is exactly what a hooking attack does.
This runtime threat is uniquely dangerous for React Native. Because React Native relies on a "Bridge" to communicate between JavaScript and Native code (Java/Obj-C), there are multiple points of entry for attackers to intercept data, manipulate logic, or extract API keys.
What is Hooking?
Hooking is a technique where attackers use tools like Frida to "intercept" and modify your app’s normal operations as they happen. Think of it like a spy intercepting a mail carrier, reading a sensitive message, and changing it before it is delivered to your server.
This gives attackers the power to:
Bypass security: Disable jailbreak detection, biometric login, or SSL pinning.
Steal Data: Read unencrypted strings from memory (like JWTs or API keys).
Function Tampering: Force a function like isUserPremium() to always return true.
How common is hooking?
About 0.05% of devices are hooked. While this percentage seems low, at a global scale, it represents millions of compromised devices. If your app handles financial data, health records, or competitive gaming integrity, this is a risk vector you cannot ignore.
Check out our live global stats at
How to Detect Hooking?
You might be tempted to build your own defenses. In React Native, this is slightly more complex than in Node.js because the JavaScript runtime (Hermes or JavaScriptCore) does not have built-in access to low-level TCP sockets.
To implement a detection mechanism yourself, you need to step outside standard React Native capabilities.
DIY Coding Guide
In React Native, detecting hooking is harder than in frameworks like Flutter because the JavaScript environment (Hermes or JSC) does not have direct access to low-level sockets or the OS filesystem. You cannot simply "check a port" from your App.js without installing extra libraries or writing Native Modules.
However, if you were to implement a naive check yourself, you would typically write a Native Module to check for common Frida ports (like 27042).
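The port-probing core of such a Native Module might look like the plain-Java sketch below. The React Native module scaffolding (`ReactContextBaseJavaModule`, the `@ReactMethod` annotation, promise plumbing) is deliberately omitted, and the class name is invented; as explained next, this whole approach is easy to bypass.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class FridaPortCheck {
    // Default frida-server ports; an attacker can simply run Frida on another port.
    private static final int[] SUSPICIOUS_PORTS = {27042, 27043};

    public static boolean isFridaPortOpen() {
        for (int port : SUSPICIOUS_PORTS) {
            try (Socket socket = new Socket()) {
                // Short timeout so the check does not stall the caller.
                socket.connect(new InetSocketAddress("127.0.0.1", port), 200);
                return true; // something is listening locally -> suspicious
            } catch (IOException e) {
                // Port closed or unreachable -> keep checking the next one
            }
        }
        return false;
    }
}
```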
Why this fails
Attackers can run Frida on a random port.
Attackers can rename the Frida process.
Attackers can hook your "detection" function and force it to return false.
Use freeRASP (free library by Talsec)
With freeRASP, the hook detection utilizes hundreds of advanced checks, offering robust detection even with bypass scripts applied.
Process-name checks, suspicious open ports, injected or loaded Frida-related libraries, modified memory maps, abnormal function hooks and our secret sauce.
Very strong detections including root and jailbreak detections (Magisk, Dopamine)
Integration Example
Add the SDK to your project and focus on implementing the following callback:
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
Hooking attacks using tools like Frida can intercept and modify app behavior in real time, exposing sensitive data and disabling protections. Detection doesn’t have to be DIY or error-prone—simple port checks are easily bypassed. Tools like freeRASP provide reliable, continuously updated detection with hundreds of advanced checks, letting you respond proactively to runtime threats.
👉 If you want hooking detection plus root, jailbreak, emulator, debugging, screenshot, and tampering protection in one free package, start with freeRASP.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Learn what root detection is, how rooted Android devices increase security risk, and how to choose the right root detection solution for your mobile app, from free tools to advanced RASP protection.
Introduction: Root Detection Basics
How to Detect Jailbreak on Flutter
Jailbroken devices open doors for attackers. Here’s how you can secure your Flutter app.
A jailbroken iPhone is like leaving the front door of your house unlocked: attackers can enter, change things, and take what they want. If your Flutter app runs on a jailbroken device, it may be easier to reverse-engineer, tamper with, or run in an unsafe environment. Detecting jailbreaks early helps you protect user data and preserve app integrity.
What is jailbreaking?
Jailbreaking removes iOS restrictions and grants root (privileged) access to the device, similar to rooting on Android. With root access, users (or attackers) can install unauthorized apps, tweak system settings, or bypass App Store protections. Common jailbreak tools include
How to Detect VPN using Kotlin
Struggling with hidden VPN traffic in your app? Here’s how to spot it before attackers exploit it.
VPNs aren’t inherently bad—but in mobile security, they often mask fraud, location spoofing, or data exfiltration. If your app deals with sensitive data, you need a way to know when a VPN is in play. Thankfully, there is tooling which makes VPN detection straightforward in Kotlin apps.
What is VPN?
A VPN (Virtual Private Network) encrypts traffic and routes it through remote servers. While this protects privacy, it can also help attackers:
Introducing Multi-Instancing Detection for freeRASP
A new version of freeRASP comes with a new feature: detection of multi-instancing tools such as Parallel Space. What is multi-instancing, why is it an issue, and how can you detect it?
What is Multi-Instancing?
Multi-instancing allows multiple instances of the same application to run simultaneously on a single Android device. Normally, Android permits only one instance of an app. Users can bypass this limitation using third-party cloning tools, virtualization apps, or modified Android environments. Each instance operates independently with separate data storage, user accounts, and app state.
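One widely used detection heuristic (a generic technique, not necessarily what freeRASP itself does) is to compare the app's actual data directory against the path Android assigns to a normally installed app. Cloning and virtualization tools typically run the app from the cloner's own sandbox, so the path no longer matches the package name:

```java
public class MultiInstanceCheck {
    // Paths Android uses for a normally installed app with this package name.
    public static boolean isExpectedDataDir(String dataDir, String packageName) {
        return dataDir.equals("/data/data/" + packageName)
            || dataDir.equals("/data/user/0/" + packageName);
    }

    // A mismatch suggests the app is running inside a cloner's sandbox.
    public static boolean looksCloned(String dataDir, String packageName) {
        return !isExpectedDataDir(dataDir, packageName);
    }
}
```

On Android the actual path would come from `Context.getApplicationInfo().dataDir`; it is passed in here so the heuristic stays self-contained.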
public class UserSessionManager {
    private String loggedInUsername;
    private boolean isLoggedIn;

    public boolean authenticateUser(String username, String password) {
        // Authentication logic
        return isLoggedIn;
    }

    public String getLoggedInUsername() {
        return loggedInUsername;
    }
}
public class a {
    private String b;
    private boolean c;

    public boolean d(String e, String f) {
        // Authentication logic
        return c;
    }

    public String g() {
        return b;
    }
}
String apiKey = "YOUR_SUPER_SECRET_API_KEY";
String apiUrl = "https://api.example.com/data";
After string obfuscation, this might look like:
Java
String apiKey = new String(Base64.getDecoder().decode("WU9VUl9TVVBFUl9TRUNSRVRfQVBJX0tFWQ=="));
String apiUrl = new String(Base64.getDecoder().decode("aHR0cHM6Ly9hcGkuZXhhbXBsZS5jb20vZGF0YQ=="));
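You can verify the round trip of the decoding shown above with a small helper. Keep in mind that Base64 is encoding, not encryption: it only keeps the literal out of a naive `strings` dump, and a determined attacker can decode it just as easily as your app does.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class StringObfuscationDemo {
    // Build-time step: encode the literal that will be embedded in the binary.
    public static String encode(String plain) {
        return Base64.getEncoder().encodeToString(plain.getBytes(StandardCharsets.UTF_8));
    }

    // Runtime step: recover the original value, as in the snippet above.
    public static String decode(String encoded) {
        return new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
    }
}
```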
fun isDeveloperModeEnabled(context: Context): Boolean {
    // DEVELOPMENT_SETTINGS_ENABLED lives in Settings.Global from API 17+
    return try {
        android.provider.Settings.Global.getInt(
            context.contentResolver,
            android.provider.Settings.Global.DEVELOPMENT_SETTINGS_ENABLED,
            0
        ) == 1
    } catch (e: Throwable) {
        false
    }
}
// Usage:
// val isDevMode = isDeveloperModeEnabled(context)
private val deviceStateListener = object : ThreatListener.DeviceState {
...
override fun onDeveloperModeDetected() {
TODO("Not yet implemented")
}
}
Talsec.start(applicationContext)
override fun onScreenshotDetected() {
    Log.w("freeRASP", "Screenshot detected!")
    // Optionally block sensitive actions or warn the user
}

override fun onScreenRecordingDetected() {
    Log.w("freeRASP", "Screen recording detected!")
    // Optionally block sensitive actions or warn the user
}
// Android (Java) logic inside a custom Capacitor Plugin
private boolean isFridaServerRunning() {
// Frida default ports
int[] suspiciousPorts = {27042, 27043};
for (int port : suspiciousPorts) {
try {
// Attempt to connect to the local port
Socket socket = new Socket("127.0.0.1", port);
socket.close();
// If we connected, something is listening there -> Danger
return true;
} catch (IOException e) {
// Port is closed, usually safe
}
}
return false;
}
import { startFreeRASP } from 'capacitor-freerasp';
// reactions for detected threats
const actions = {
// Android & iOS
privilegedAccess: () => {
console.log('privilegedAccess');
},
}
const config = ...
// returns `true` if freeRASP starts successfully; you can ignore this value
const started = await startFreeRASP(config, actions);
import 'dart:io';
Future<bool> detectFridaPorts() async {
final portsToCheck = [27042, 27043];
for (var port in portsToCheck) {
try {
final socket = await Socket.connect("127.0.0.1", port,
timeout: const Duration(milliseconds: 200));
socket.destroy();
print("Frida-like service detected on port $port");
return true;
} catch (_) {
// Port not open, ignore
}
}
return false;
}
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
Imagine having the keys to your Android kingdom — rooting your device gives you exactly that level of control. Rooting (gaining privileged “superuser” access) lifts the built-in restrictions of the Android operating system, allowing you to modify system files, install unauthorized apps, and customize your device in ways that ordinary users can’t.
However, this freedom is a double-edged sword — bypassing Android’s security safeguards also exposes the device to serious risks.
With root access, malware or malicious apps have a much easier time breaching your phone’s defences, potentially compromising sensitive data and system integrity.
In short, rooting grants great power over your device, but it also brings great responsibility (and danger) in terms of security.
From an app developer standpoint, a rooted device isn’t just the owner’s concern — it’s a red flag for any application running on it. When a device is rooted, attackers or even curious users can bypass app-level restrictions, tamper with code, or steal data that would normally be shielded by Android’s sandbox. To combat these threats, developers employ root detection mechanisms to determine if an app is running on a rooted (and thus potentially compromised) device.
Many security-critical apps — from mobile banking to corporate email clients — will restrict functionality or refuse to run altogether if they detect a rooted device, in order to safeguard data and prevent fraud.
Implementing such detection is easier said than done, however. Sophisticated rooting tools can hide their tracks to evade detection, creating a cat-and-mouse game between app security defences and would-be attackers.
This constant battle makes it clear why strong root detection is crucial for anyone serious about Android security and app protection.
In the sections that follow, we’ll explore both sides of this coin — the allure of rooting and the necessity of root detection. We begin by demystifying the concept of rooting and the privileges it grants (along with the risks involved). Next, we delve into the security dangers posed by rooted devices and explain what root detection is and why it’s so important. From there, we’ll examine how root detection works under the hood and the challenges developers face in staying ahead of clever root-hiding techniques. We’ll also discuss best practices for implementing root detection in apps and introduce some popular tools and services that can help. By the end, you’ll have a clear understanding of why rooting appeals to many Android enthusiasts yet comes with significant security trade-offs — and why robust root detection mechanisms are an essential safeguard for keeping your apps and data safe.
Pros and Cons of Popular Root Detector Solutions (free and paid)
Choose the root detection solution that aligns with your goals. Free tools like RootBeer, freeRASP, or Play Integrity provide basic protection — but premium offerings like Talsec RASP+ bring robust features and peace of mind.
| Root Detection Solution | Pros | Cons |
| --- | --- | --- |
| RootBeer (free, open-source, in-app, used by 5000+ apps) | Open-source library with simple integration; checks for common root indicators | Easily bypassed by tools like UnRootBeer or custom kernels; relies on predefined threat lists, missing newer root methods; prone to false positives |
| freeRASP (free, reliable, in-app, used by 6000+ apps) | Actively maintained with frequent updates; detects root/jailbreak indicators and common hiding tools (Magisk/Shamiko); lightweight integration | Less resilient to bypass than paid solutions (binary not app-bound); adds 4 MB to app size; sends threat data to Talsec-managed servers by default |
Steal sensitive user data (tokens, stored credentials).
Disable or bypass security controls inside the app.
Run debuggers and hooking frameworks (such as Frida) to modify runtime behavior.
If your app runs without detection on such devices, its integrity is at serious risk.
How to Detect Jailbreak?
Historically, developers looked for signs like the presence of Cydia to detect jailbreak. Modern attackers adapt quickly, hide artifacts, and use tools to bypass naive checks. DIY methods become outdated fast — what worked last month may fail today.
Rather than building brittle checks, use a maintained solution that combines many signals and is actively updated.
DIY Coding Guide
You can implement simple jailbreak detection yourself like this:
freeRASP (free library by Talsec)
With freeRASP, the jailbreak detection utilizes hundreds of advanced checks, offering robust detection even with hiding methods applied.
Offline operation with minimal performance overhead.
A suite of additional protections (app integrity, runtime manipulation such as hooking, emulator detection, debugger/screenshot detection, etc.).
Trusted by
Integration Example
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
Jailbroken iOS devices remove critical restrictions, giving attackers privileged access to inject malicious code, steal sensitive data, and bypass in‑app protections. Detection doesn’t have to be DIY or error‑prone—simple checks like looking for Cydia are outdated and easily bypassed. Tools like freeRASP provide reliable, continuously updated detection with strong signals against modern jailbreaks, letting you respond proactively to protect user data and app integrity.
👉 If you want jailbreak detection plus root, Frida, emulator, debugging, screenshot, and tampering protection in one free package, start with freeRASP by Talsec.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Bypass geo-restrictions (e.g., accessing services from unsupported countries)
Hide malicious activity like bot traffic or credential stuffing
Exfiltrate sensitive data undetected
Attackers often use common VPN apps (NordVPN, ExpressVPN, ProtonVPN) or system-level tunnels to disguise their actions. From a security perspective, detecting VPN usage is like knowing if a user is “wearing a mask”—it doesn’t always mean they’re hostile, but it changes the trust level.
Using a VPN does not automatically pose a threat.
How to Detect VPN Usage?
Detecting VPNs isn’t trivial—many providers change IPs, use stealth protocols, or blend with normal traffic. DIY solutions (like hardcoding VPN IP ranges) are unreliable and outdated quickly.
Instead, use expert SDKs that:
Actively monitor for VPN interfaces and tunnels
Stay updated against new evasion techniques
Provide callbacks so your app can respond instantly
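For comparison, a minimal DIY check in Kotlin simply scans for network interfaces commonly created by VPN tunnels. This sketch (the helper name and interface-name list are illustrative, not a real SDK API) shows why such checks are brittle: a tunnel with a non-standard name slips straight through.

```kotlin
import java.net.NetworkInterface

// Naive DIY VPN check: look for interface names typical of VPN tunnels.
// Easily evaded and not a substitute for a maintained SDK.
fun isVpnInterfacePresent(): Boolean {
    val interfaces = NetworkInterface.getNetworkInterfaces() ?: return false
    val vpnPrefixes = listOf("tun", "tap", "ppp", "pptp")
    return interfaces.toList()
        .filter { it.isUp }
        .any { ni -> vpnPrefixes.any { prefix -> ni.name.startsWith(prefix) } }
}

fun main() {
    println("VPN interface present: ${isVpnInterfacePresent()}")
}
```

On Android you would additionally query ConnectivityManager for the VPN transport, but even that only tells you a tunnel exists, not who operates it.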
Popular Libraries for VPN Detection
freeRASP (by Talsec)
The robust, developer-friendly and free choice for Android.
Add freeRASP to your project and focus on implementing the following callback:
Malwarelytics for Android
Aside from VPN detection, it also contains additional security checks.
Enterprise-grade checks
Might be expensive for small apps
Integration Example:
Comparison Table
| Feature | freeRASP | Malwarelytics |
| --- | --- | --- |
| Works Offline | Yes | Yes |
| Easy Integration | Yes | Yes |
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
VPN detection is crucial for apps where fraud, compliance, or region-locking matter. Manual solutions fall short—but freeRASP gives Kotlin developers a lightweight, reliable SDK to stay ahead of attackers.
👉 If you want VPN detection plus root, Frida, emulator, and tampering protection in one free package, start with freeRASP by Talsec.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
Why Multi-Instancing Might be Bad
Fraud and Abuse
Malicious actors can bypass "one-per-device" limitations for promotional offers, contests, or referral programs. They can create numerous fake accounts to generate fraudulent ad clicks, post fake reviews, or manipulate voting systems.
Security Bypass
For high-security apps like banking or enterprise software, multi-instancing poses a significant threat. An attacker could use the sandboxed environment to analyze the app's behavior, attempt to bypass root detection, or tamper with its data in a controlled setting.
Privacy Risks
The cloner app itself acts as a Man-in-the-Middle (MITM). Applications like Parallel Space have the (technical) ability to read, modify, and log all data from the "cloned" app. This includes login credentials, private messages, and financial information.
How Does Multi-Instancing Work?
Multi-instancing can be achieved using different techniques:
Work Profile
A feature of Android that allows users to separate personal and work-related apps, data, and settings on the same device by creating a secure container. Each work profile has its own user ID, creating a distinct environment that keeps data isolated.
App Cloning
Works by modifying the package name of the application. Android then sees these applications as separate.
Manufacturer Feature
Some manufacturers provide the aforementioned app cloning as a system feature (like Xiaomi Dual App).
Third-Party Apps
There are applications like Parallel Space, which may use technical solutions other than app cloning.
How Does Parallel Space Work?
Parallel Space takes a somewhat unique approach to multi-instancing. Instead of cloning an app, it creates a sandboxed, virtualized environment — a container — on the user's device. When you "clone" an app, Parallel Space does the following:
Creates an Isolated Space
It sets up a dedicated directory structure for the cloned app, separate from the original app's data.
Intercepts and Proxies Calls
The cloned app runs inside this container. Every system request it makes—for file access, contact lists, network connections, or hardware IDs—is intercepted by Parallel Space.
Remaps Resources
Parallel Space then forwards these requests to the Android operating system, but it modifies them to prevent conflicts. For example, it directs file read/write operations to its own sandboxed directory, not the original app's directory.
This approach effectively hides the "cloned" app. To the Android OS, only one app is running: Parallel Space. The virtual app is just a process running within the Parallel Space container.
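The proxying described above can be pictured with a tiny Kotlin sketch. Everything here (the package names, directory layout, and the remapPath helper) is hypothetical and grossly simplified; it only illustrates the idea of redirecting a cloned app's file paths into the container's sandbox.

```kotlin
// Hypothetical sketch of container-style path redirection. A real
// virtualization layer intercepts system calls; here we just rewrite paths.
fun remapPath(requestedPath: String, cloneId: String): String {
    // Illustrative sandbox root owned by the container app
    val sandboxRoot = "/data/data/com.container.app/virtual/$cloneId"
    return if (requestedPath.startsWith("/data/data/")) {
        // Redirect the cloned app's private data into the sandbox
        sandboxRoot + requestedPath.removePrefix("/data/data")
    } else {
        requestedPath // other paths pass through unchanged
    }
}

fun main() {
    println(remapPath("/data/data/com.example.game/files/save.dat", "clone0"))
}
```

The original app and its clone thus never collide on disk, which is exactly why the OS sees only one "real" app: the container.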
Detecting Parallel Space using freeRASP
The new version of freeRASP makes multi-instancing easy to detect through a new threat callback — onMultiInstance. Currently, freeRASP can detect multi-instancing via Parallel Space, with more detection techniques coming soon:
You can find this feature in the newest version of freeRASP:
Protect your React Native app from compromised iOS environments with smart detection.
Imagine you built a high-security facility, but one of your users decided to remove all the doors and disable the alarm system because they wanted "full control" over the building. That is essentially what a Jailbreak does to an iOS device.
A jailbroken environment is a critical security risk. It removes the OS sandbox, allowing malicious actors (or even just buggy tweaks) to access your app's private data, Keychain items, and internal logic.
What is Jailbreak?
Jailbreaking is the process of unlocking an iOS device to remove Apple's built-in restrictions. Much like rooting on Android, it gives users full administrative (root) access. This allows for the installation of apps outside the App Store and deep customization of system settings. Popular tools used to achieve this include Cydia, Unc0ver, and Checkra1n.
On a jailbroken device, attackers can:
Inject malicious code into your app.
Steal sensitive user data (tokens, stored credentials).
Disable or bypass security controls inside the app.
How to Detect Jailbreak?
You can either implement your own jailbreak detection logic or use a dedicated, specialized security SDK. Building your own solution gives you full control over what you check and how you integrate it into your app. However, modern mobile environments are complex, and attackers increasingly use advanced hooking and masking techniques that can make straightforward checks less reliable.
Security SDKs address this by combining multiple detection signals, maintaining broader coverage, and continuously adapting to new techniques. As a result, many teams choose a specialized SDK to reduce maintenance effort and ensure more consistent, robust detection across a wide range of scenarios.
DIY Coding Guide
The most robust "DIY" way to detect a jailbreak in React Native is to look for specific files and directories known to be created by jailbreak tools (Cydia, Unc0ver, Checkra1n).
Prerequisites: You will need a library to access the file system. react-native-fs is the standard choice.
You can create a utility function that iterates through a list of "suspicious" paths. If any of them exist, the device is likely jailbroken.
freeRASP (free library by Talsec)
With freeRASP, the jailbreak detection utilizes hundreds of advanced checks, offering robust detection even with hiding methods applied.
Strong detection of modern jailbreaks.
Active maintenance and frequent updates.
Offline operation with minimal performance overhead.
Integration Example
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Key Takeaway
A jailbroken device is a compromised device. For React Native apps holding sensitive user data, ignoring this risk is dangerous.
DIY is cat-and-mouse
Checking for files like /Applications/Cydia.app is easily bypassed by "Hide Jailbreak" tweaks.
Use specialized tools
Libraries like freeRASP use multi-layered checks (permissions, protocol handlers, system calls) to detect jailbreaks even when they are hidden.
React Proactively
Don't wait for a data breach; detect the compromised environment immediately on app launch.
If you want Jailbreak detection plus many more protections in one free package, start with freeRASP by Talsec.
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Preventing Piracy and Cheating in Games: A Guide to Countering GameGuardian with Talsec
What is GameGuardian?
Game cheating has long been a thorn in the side of mobile game developers, undermining the very integrity of their creations. While the PC gaming landscape has evolved with sophisticated, kernel-level security like Riot's Vanguard and Valve's Anti-Cheat, the Android ecosystem faces its own persistent threats. On this front, notorious tools like GameGuardian continue to hand attackers a God mode allowing them to rewrite the rules of any vulnerable game. Instead of earning their victories through skill, users deploy this tool to scan a game's live memory and directly manipulate critical values such as coins, health, or
Understanding the Fundamentals of Obfuscation
The primary goal of mobile app obfuscation is to render the application's code more difficult for an attacker to understand after it has been decompiled. Think of it as scrambling the blueprint of your application, making it significantly harder for someone to decipher its structure, logic, and sensitive information. While obfuscation doesn't make your application completely impenetrable – a determined attacker with enough time and resources might eventually succeed – it drastically increases the effort and expertise required, often making the attack economically unviable.
It's crucial to understand that obfuscation primarily focuses on hindering static analysis – the examination, understanding or tampering of the application's code at build time. Runtime attacks, where malicious actors attempt to manipulate the application while it's running, require a different set of defenses, which is where technologies like those offered by come into play.
Obfuscation and RASP are complementary security layers, working in tandem to provide comprehensive protection.
// Android (Java) Example for a React Native Module
// Requires: import java.io.IOException; import java.net.Socket;
// Note: run this off the main thread to avoid NetworkOnMainThreadException.
private boolean detectFridaPorts() {
    // Default ports used by frida-server
    int[] portsToCheck = {27042, 27043};
    for (int port : portsToCheck) {
        try {
            Socket socket = new Socket("127.0.0.1", port);
            socket.close();
            // If the connection succeeds, the port is open (suspicious)
            return true;
        } catch (IOException e) {
            // Port is closed, which is expected on a clean device
        }
    }
    return false;
}
// Start freeRASP (TalsecConfig setup omitted for brevity)
Talsec.start(applicationContext)

// Implement inside your ThreatListener.ThreatDetected object:
override fun onVpnDetected() {
    Log.w("freeRASP", "VPN connection detected!")
    // Optionally block sensitive actions or warn the user
}
val raspObserver = object : RaspObserver {
// The callback is delivered on a background thread
override fun onVpnDetected(vpnEnabled: Boolean) {
// Handle VPN detection
}
// Handle detection of other RASP features
}
val listener = object : ThreatListener.ThreatDetected {
// ...other callbacks...
override fun onMultiInstanceDetected() {
// Reaction
}
}
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
, completely shattering the intended game balance.
How GameGuardian Shatters Fair Play
GameGuardian's method of attack is deceptively simple yet profoundly damaging: it latches onto a live game process, acting as an unauthorized window into its memory. Once attached, it can read and rewrite data at will, leading to a cascade of devastating consequences that compromise a game on financial, social, and technical levels.
Economic Collapse
Financially, the tool triggers an economic collapse by rendering in-app purchases (IAPs) worthless. When attackers can grant themselves infinite premium currency or exclusive items, the entire revenue model that funds the game's development and maintenance is destroyed, as there is no longer any incentive for legitimate purchases.
Erosion of Fair Play
Socially, it erodes the foundation of fair play. The tool devalues the time and dedication of legitimate players, causing immense frustration. This is especially catastrophic in multiplayer games, where cheaters with manipulated stats can dominate competitions, making the experience unplayable for the honest community and leading to a mass exodus of players.
Compromised Integrity
Technically, it compromises the game's core integrity. The danger goes far beyond simple currency cheats. Attackers can alter any unprotected memory value to grant themselves impossible speed, create one-hit-kill weapons, or bypass entire questlines by changing a single variable. In essence, it turns the game's carefully designed rules into a broken and exploitable sandbox.
From Zero to Infinite: A Practical GameGuardian Demonstration
Upon launching the GameGuardian application, tapping the Start button initiates its core service. The application operates in one of two modes, depending on the device's privileges:
On a rooted device, GameGuardian leverages superuser permissions to directly attach to the target game's process. This allows it to read and modify memory, states, and variables with high-level privileges.
On a non-rooted device, it relies on a virtual environment. This involves using an app cloning or multi-instancing application (like Parallel Space) to run both GameGuardian and the target game inside a contained sandbox. Within this virtual space, GameGuardian can gain the necessary permissions to hook and modify the game process.
For this demonstration, I have developed a demo game called Cosmic Clicker. The core mechanic is straightforward: players tap the planet on screen to generate clicks, which function as the in-game currency.
The game's core loop involves a simple, client-side resource accumulation. This type of repetitive mechanic, where the currency value is managed locally, is highly vulnerable to memory editing. Let's now illustrate this vulnerability by using GameGuardian to manipulate the clicks value directly:
Now that we've successfully isolated the memory address for the clicks variable, we have complete control over its value. For now, I will modify it and set it to 101:
And just like that, our click count is now 101, achieved instantly by bypassing the game's core mechanic of repetitive tapping.
Crucially, this modification is completely invisible to the game's logic. The application now trusts this illegitimate value as authentic, meaning my new high score of 111 will be treated as a valid achievement.
Let's take this a step further and modify the clicks value again to further increase the high score:
Now it would be quite difficult to knock us off the global leaderboard. :P
But what if we want to buy something from the store? The store purchase fails even with our hacked score, revealing that the game's display value is separate from its functional currency:
Let's just modify the value of total available clicks in the store:
Now we have enough to buy the first planet, and without any hard work:
And we got what we wanted:
This demonstration highlights the critical vulnerability of client-side authoritative games. A memory editing tool like GameGuardian can easily manipulate local data to provide unfair advantages, and in doing so, it effectively breaks the logic governing in-app purchases, rendering the monetization system obsolete.
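The usual mitigation is to make the server authoritative over game state. As a hedged Kotlin sketch (all names and limits here are illustrative, not part of any real game or of Talsec's products), a backend could run a simple plausibility check on reported click totals instead of trusting the client:

```kotlin
// Hypothetical server-side validation: never trust the client's click total.
data class Session(val startedAtMs: Long, var lastValidatedClicks: Long = 0)

// Generous upper bound for humanly possible tapping (illustrative)
const val MAX_CLICKS_PER_SECOND = 15

fun validateClicks(session: Session, reportedClicks: Long, nowMs: Long): Boolean {
    val elapsedSec = (nowMs - session.startedAtMs) / 1000.0
    val plausibleMax = (elapsedSec * MAX_CLICKS_PER_SECOND).toLong() + 1
    if (reportedClicks < session.lastValidatedClicks || reportedClicks > plausibleMax) {
        return false // regression or impossible total: likely memory editing
    }
    session.lastValidatedClicks = reportedClicks
    return true
}

fun main() {
    val s = Session(startedAtMs = 0L)
    println(validateClicks(s, reportedClicks = 40L, nowMs = 5_000L))        // plausible
    println(validateClicks(s, reportedClicks = 1_000_000L, nowMs = 6_000L)) // impossible
}
```

Server-side checks like this complement, rather than replace, on-device RASP: the former protects the economy, the latter blocks the tooling itself.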
Under The Hood
GameGuardian works on the principle of hooking via ptrace. It writes to the /proc/<pid>/mem virtual file to modify the process's memory, and opens that memory as a file descriptor to read it in real time.
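To make this mechanism concrete, the following Kotlin sketch reads process memory through the same /proc interface on Linux/Android. For safety and portability it targets the current process (/proc/self); GameGuardian does the equivalent against a target PID, which requires root or ptrace privileges:

```kotlin
import java.io.File
import java.io.RandomAccessFile

fun main() {
    // Pick the first readable mapping listed in /proc/self/maps
    // (format: "start-end perms offset dev inode path")
    val mapping = File("/proc/self/maps").readLines()
        .first { it.split(" ")[1].startsWith("r") }
    val start = mapping.substringBefore('-').toLong(16)

    // /proc/<pid>/mem exposes the process address space as a file
    RandomAccessFile("/proc/self/mem", "r").use { mem ->
        mem.seek(start)
        val buf = ByteArray(16)
        mem.readFully(buf)
        println("Read ${buf.size} bytes at 0x${start.toString(16)}")
    }
}
```

A memory editor scans such regions for a known value (say, the current click count), narrows candidates as the value changes, then writes the desired bytes back through the same descriptor.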
GameGuardian also supports Lua scripting which is used to inject advanced hacks in the target process.
This is the debug library for the Lua engine embedded in games. It is used to hook such games and modify values as desired:
This class provides persistence, or "freezing," of values at chosen addresses across different instances of the game, so that the attacker doesn't have to reapply the same hacks again and again:
This section is responsible for opening the target game's process as a file descriptor so it can be operated on in real time:
Here GameGuardian tries to hook the native code library of the target application so it can modify it, or load it virtually and read it in real time:
This is the main hooking section of the tool, which modifies states and values at various addresses of the target process. It can overwrite or operate on any data type present in the game's memory:
This is the class responsible for the Speedhack and Time Jump features of GameGuardian, which either increase the speed of the game or make it see a modified system time value, as the user wants:
How Does Talsec's RASP Fight Back?
Developers around the globe pour immense time and resources into crafting balanced and successful games. Unfortunately, tools like GameGuardian can dismantle that balance by providing unfair advantages and breaking the fundamental integrity of the gameplay. This is where in-app protection becomes crucial.
Let's see what happens when a game is fortified with Talsec's RASP SDK (threat callbacks implemented as Toast messages used for a demo):
Talsec's Runtime Application Self-Protection (RASP) actively monitors the device for security risks, using real-time callbacks to inform the application of any detected threats. Its detection capabilities go beyond standard checks for rooting and debuggers to include the identification of app cloning and multi-instancing frameworks. This ensures comprehensive protection across various attack vectors, blocking malicious tools from operating whether they are on a rooted device or within a cloned environment on a non-rooted device.
written by Akshit Singh
iOS Keychain vs. Android Keystore
Deep Dive for Mobile Engineers, Architects & Security Professionals
Based on insights shared by the Talsec community, guests from Tide (and a little help from AI).
📌 Overview
Storing sensitive data securely on mobile devices is not optional—it’s a foundational part of secure app design. Whether you're protecting access tokens, private keys, or biometric credentials, both iOS and Android provide secure storage APIs:
iOS Keychain: Apple’s encrypted container for small secrets.
Android Keystore System: Cryptographic framework with hardware-backed protection.
This article compares both in depth, explores their limitations, gives code samples, and explains real-world attack surfaces and defenses.
🧠 Why Secure Storage Matters
Let’s begin with the “why.” Many app developers underestimate threats like token extraction, file tampering, or insecure credential caching. But without secure storage, all other defenses become brittle.
We outline real-world attack scenarios and the necessity of relying on OS-level cryptographic APIs rather than home-grown encryption or local file storage.
🛡️ Threat
🔍 Real-world Example
📉 Without Secure Storage
🔄 Architecture Summary
Now that we understand the stakes, let’s compare the core architecture of both systems.
This chapter maps out how the iOS Keychain and Android Keystore are designed, how they differ in scope, and what developers can rely on when targeting modern (and older) devices.
Feature
iOS Keychain
Android Keystore
📦 Use-Case Matrix with Limitations
Each platform has strong suits and weak spots. To help you design for both Android and iOS, this section introduces a detailed use-case vs capability matrix showing which platform supports what, and under what conditions.
Use Case
iOS Keychain
Android Keystore
Notes
Let’s take a closer look at the four most common use cases from the list above.
🧪 1. Secure Token Storage Example
The most common use case in mobile apps is storing authentication tokens securely — whether it’s a short-lived access token or a long-lived refresh token.
Here we dive into hands-on examples that demonstrate the correct way to store tokens with biometric enforcement, both on iOS and Android.
iOS: Store Auth Token with Face ID Protection
biometryCurrentSet: Invalidate if Face ID/Touch ID enrollment changes.
ThisDeviceOnly: Data won't migrate to other devices or backups.
Android: Encrypt Token with Keystore-Backed AES Key
setUserAuthenticationRequired(true): Tied to biometric/PIN for decryption.
GCM: Provides encryption + integrity via MAC.
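To illustrate why GCM gives both confidentiality and integrity, here is a self-contained JVM sketch. On Android the key would instead be generated inside the AndroidKeyStore (via KeyGenParameterSpec with setUserAuthenticationRequired(true)); a plain in-memory key is used here only so the example runs anywhere:

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

fun main() {
    // Stand-in for a Keystore-backed key (see note above)
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    // Encrypt: the cipher generates a fresh IV for us
    val enc = Cipher.getInstance("AES/GCM/NoPadding")
    enc.init(Cipher.ENCRYPT_MODE, key)
    val iv = enc.iv
    val ciphertext = enc.doFinal("my-refresh-token".toByteArray())

    // Decrypt: GCM verifies the 128-bit auth tag, so tampering is detected
    val dec = Cipher.getInstance("AES/GCM/NoPadding")
    dec.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    println(String(dec.doFinal(ciphertext)))
}
```

If a single ciphertext byte is flipped, doFinal throws AEADBadTagException instead of returning corrupted plaintext, which is the integrity guarantee the bullet above refers to.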
🔒 2. Biometric Invalidation Edge Case
While biometric enforcement is a powerful tool, it introduces complexity. What happens when the user adds a new fingerprint or resets Face ID?
We explain how each platform handles changes in biometric configuration — and how to build apps that detect invalidation and gracefully recover when cryptographic material is no longer accessible.
iOS behavior:
Use .biometryCurrentSet to force key/token invalidation if fingerprint/face data changes.
The Keychain item is not deleted, but it becomes permanently inaccessible because the underlying encryption key has been discarded by the Secure Enclave. An attempt to read the item will fail with an authentication error, typically errSecUserCanceled (if the system prompt is dismissed) or errSecAuthFailed, not errSecItemNotFound. The item is still technically present but cannot be decrypted.
Android behavior:
When biometrics change, KeyPermanentlyInvalidatedException is thrown.
An important caveat: key invalidation on biometric change only occurs for keys that were generated with setUserAuthenticationRequired(true). If a key is created in the Android Keystore without this flag, it is not tied to the user's authentication state and will not be invalidated if fingerprints or faces are changed. This is a vital detail for developers deciding which level of security to apply to different keys.
You must catch the exception and regenerate the key.
⚙️ 3. Key Rotation Strategy
Security isn’t static — and neither should your encryption keys be. Whether for compliance or good hygiene, apps should rotate keys regularly or on key events like logout.
This section shows how to build key rotation strategies on both platforms using built-in tools — including setting expiration dates and regenerating keys securely.
Don't reuse the same key indefinitely. Instead:
Rotate on logout / login
Set short validity periods
The iOS Keychain Services API has no built-in mechanism for key expiration equivalent to Android's setKeyValidityEnd. To implement key rotation on iOS, you must manually store metadata (like a creation timestamp) along with the Keychain item and write application-level logic to check this timestamp and perform the rotation.
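The application-level logic just described can be sketched in a few lines of Kotlin (StoredKey and needsRotation are illustrative names, not platform APIs):

```kotlin
import java.time.Instant
import java.time.temporal.ChronoUnit

// Store a creation timestamp next to the key material (illustrative model)
data class StoredKey(val material: ByteArray, val createdAt: Instant)

// Rotate once the key is older than the allowed window
fun needsRotation(key: StoredKey, maxAgeDays: Long = 30): Boolean =
    key.createdAt.isBefore(Instant.now().minus(maxAgeDays, ChronoUnit.DAYS))

fun main() {
    val fresh = StoredKey(ByteArray(32), Instant.now())
    val stale = StoredKey(ByteArray(32), Instant.now().minus(90L, ChronoUnit.DAYS))
    println(needsRotation(fresh)) // false
    println(needsRotation(stale)) // true
}
```

On rotation, generate a new key, re-encrypt any data protected by the old one, then delete the old key and update the stored timestamp.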
Android: Set key expiration
📂 4. Secure File Encryption Pattern
Sometimes you need to protect more than just a 256-bit token. This chapter covers how to encrypt larger content — such as local database files — by using AES keys stored in the Keychain or Keystore.
We introduce a hybrid encryption strategy: store small symmetric keys securely, then use those to encrypt larger payloads.
iOS Concept:
This stores the key in the application's regular memory for a brief period, making it vulnerable. A better approach is to use SecKeyCreateRandomKey, which ensures the entire process happens inside the Secure Enclave.
Android Concept:
🔁 Never store encryption keys in plain preferences or files.
Dev & Testing Tools best practices
Implementing secure storage is just the first step. Validating it is where true security lies.
This chapter introduces security testing tools used in both pentesting and automated CI/CD pipelines. Whether you’re red-teaming your own apps or building test automation, these tools will help uncover vulnerabilities in storage logic, fallback behavior, and rooted/jailbroken environments.
Tool
Platform
Use Case
🔍 6. Common Pitfalls & Prevention
Even experienced developers make mistakes: storing secrets in preferences, misconfiguring biometric policies, or assuming parity across devices.
Here we list the most common issues seen in audits and how to proactively address them.
🔥 Pitfall
😱 Impact
✅ Fix
🚧 Security vs Usability
Security always competes with usability — especially in mobile UX. This chapter explores trade-offs like biometric lockouts, token persistence, and device migration.
We explain how to tune secure storage behavior based on your risk model and user expectations.
Tradeoff
Example
Mitigation
🎯 Final Recommendations
We close with a concise checklist that distills everything into a go-to reference for engineers, architects, and product leads.
✅ Use hardware-backed storage when available
✅ Treat tokens and credentials like passwords
✅ Implement key rotation policies
✅ Test with rooted/jailbroken devices
✅ Regularly audit your secure storage logic
✅ Don’t assume biometric == secure without checking hardware
✅ Monitor cryptographic exceptions to detect security attacks, debug user issues, and maintain overall application health
📚 Further Reading & Tools
How to Detect Root using Kotlin
Need to secure your app against rooted devices? Start here.
As a developer facing the challenge of root detection, you’ve landed exactly where you need to be—we’ll break down your options and help you make the right choice. Written by experts who’ve built and battled this themselves 😎.
What is rooting?
Rooting is the process of gaining privileged (root or superuser) access to an Android device. Rooting bypasses the application sandbox model, allowing users—and attackers—to access and modify system-level files and settings.
freeRASP for Kotlin Multiplatform Guide
A Kotlin Multiplatform (KMP) variant that lets you add runtime app protection to your shared Kotlin code.
Today, freeRASP is getting a new family member: a Kotlin Multiplatform (KMP) variant that lets you add runtime app protection to your shared Kotlin code and ship secure apps to both Android and iOS from a single codebase. Teams using KMP can now reuse the same security logic alongside business logic, without duplicating integrations or maintaining separate SDK wiring per platform.
TL;DR: Edit 6 project files plus add 2 framework folders; jump to full integration guide now:
npm install react-native-fs
import RNFS from 'react-native-fs';
const detectJailbreakDIY = async () => {
// A list of common files found on Jailbroken iOS devices
const jailbreakPaths = [
'/Applications/Cydia.app',
'/Applications/RockApp.app',
'/Applications/Icy.app',
'/usr/sbin/sshd',
'/usr/bin/sshd',
'/usr/libexec/sftp-server',
'/Applications/WinterBoard.app',
'/Applications/SBSettings.app',
'/private/var/lib/apt/',
'/Library/MobileSubstrate/MobileSubstrate.dylib',
'/bin/bash',
];
for (const path of jailbreakPaths) {
try {
const exists = await RNFS.exists(path);
if (exists) {
console.warn(`Jailbreak artifact found: ${path}`);
return true;
}
} catch (error) {
// Access errors might happen due to permissions, ignore them
}
}
// Additional Check: Can we write to a system folder? (Sandbox Escape)
try {
const testPath = '/private/jailbreak_test.txt';
await RNFS.writeFile(testPath, 'test', 'utf8');
await RNFS.unlink(testPath); // Clean up
console.warn("Sandbox escape detected! (Write access to /private)");
return true;
} catch (e) {
// Failure to write is good (Normal behavior)
}
return false;
};
Think of rooting as “administrator access” on a Linux-based OS (which Android is). Common rooting tools include Magisk, SuperSU, Shamiko, KingoRoot, and many more.
And how common is root access? Roughly 0.03% of devices are rooted, which at Android's scale still amounts to a significant number of devices that could pose security risks.
While rooting can enable customizations (e.g. removing bloatware, installing custom ROMs, running system-level scripts), that power also creates a huge attack surface. It introduces security vulnerabilities, such as the ability to hook and inject code using tools like Frida or Xposed.
How to detect a rooted device?
Detecting root on Android is complex and constantly evolving, especially with tools like Magisk. While building your own solution offers control, it’s not recommended due to the time, effort, and expertise required to keep up. Instead, using third-party libraries like freeRASP or RootBeer provides a reliable and up-to-date solution maintained by experts.
DIY Coding Guide
You can implement simple root detection yourself like this:
Popular Libraries: freeRASP, RootBeer, Play Integrity
Let's compare the most popular options. It's immediately clear why freeRASP is so popular—with a staggering 6,000+ apps using it as of July 2025.
👑 freeRASP (free library by Talsec)
Very strong root detector — detects Magisk 29 and Shamiko
Comes with 14 extra detections like app integrity, Frida and hooking, emulators, debugging, screenshots, etc.
Used by 6,000+ apps; the #1 Mobile RASP SDK by popularity
Integration guide:
🍺 RootBeer (open-source library by Scott Alexander-Bown)
Open-source root detection tool.
Fully offline checks (no internet dependency).
Lacks detection of the latest techniques.
Used by 5000+ apps
Integration guide:
📡 Play Integrity (library by Google)
Offers strong and officially supported integrity checks.
Requires Google Play Services and backend integration.
Dependent on internet connectivity.
Integration guide:
Comparison Table
Capability
freeRASP
RootBeer
Play Integrity API
Root Detection Accuracy
High
Medium
❌ Indirect (via signals)
Trusted by
6000+ apps
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
You can find a detailed description of root and jailbreak detection in our glossary and articles:
Handle App Security with a Single Solution! Check Out Talsec's Premium Offer & Plan Comparison!
Apps Security Threats Report 2025
Plans Comparison
Premium Products:
RASP+ - An advanced security SDK that actively shields your app from reverse engineering, tampering, rooting/jailbreaking, and runtime attacks like hooking or debugging.
AppiCrypt (Android & iOS) & AppiCrypt for Web - A backend defense system that verifies the integrity of the calling app and device to block bots, scripts, and unauthorized clients from accessing your API.
Malware Detection - Scans the user's device for known malicious packages, suspicious "clones," and risky permissions to prevent fraud and data theft.
freeRASP for Kotlin Multiplatform
freeRASP itself is a lightweight mobile security library designed to detect common runtime threats such as rooting, jailbreaking, repackaging, reverse engineering, and emulator abuse. It connects to the Talsec Portal, providing real-time analytics and detailed security reports on detected risks. With the freeRASP for KMP variant, these protections are seamlessly integrated directly into the shared Kotlin module via a unified common API. This API intelligently abstracts over native Talsec components to execute platform-appropriate, low-level security checks.
The KMP variant is specifically designed for typical mobile KMP setups that target both Android and iOS. It utilizes the standard hierarchical source set structure (commonMain, androidMain, iosMain). This approach ensures that security logic is co-located efficiently with shared business logic while still allowing for necessary platform-specific configuration and customization when required.
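A typical `build.gradle.kts` layout for such a project looks like the sketch below. The dependency placeholder is illustrative only; use the actual artifact coordinates from the official integration guide:

```kotlin
kotlin {
    androidTarget()
    iosArm64()
    iosSimulatorArm64()

    sourceSets {
        commonMain.dependencies {
            // implementation("...") <- freeRASP KMP artifact; see the integration guide
        }
    }
}
```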
Capabilities available in freeRASP for KMP
The KMP variant brings the same core detection categories that developers know from existing freeRASP integrations:
Rooted or jailbroken devices, including popular tools such as Magisk and Dopamine
Reverse engineering attempts and runtime hooking frameworks (for example, Frida or Xposed)
Tampering or repackaging and installation from untrusted sources
Emulators, app clones, multi‑instancing, screenshots, and screen recording attempts
You can react to these events through callbacks and monitor them later in the Talsec Portal. The solution is designed to have minimal impact on app performance and to support OWASP MASVS RESILIENCE requirements out of the box.
Implement callback handlers to react to detection events, for example by logging, displaying warnings, or triggering additional server‑side checks, using the common API exposed by the KMP library. Because callbacks live in shared code, behavior stays consistent across platforms.
Initialize freeRASP on startup
From your platform‑specific entry points (for example, Android Application class and iOS App delegate or equivalent), call into shared code to initialize freeRASP with the configuration. This ensures the SDK starts early enough to observe the full app lifecycle.
Ideal use cases
The KMP variant fits several common scenarios especially well:
Greenfield KMP apps that want to ship secure Android and iOS builds from day one.
Existing Android apps adopting a shared KMP module and planning to add an iOS client later
Teams that already use freeRASP on one platform and want to consolidate security into shared Kotlin code
By keeping security logic close to shared domain logic, teams can better enforce consistent policies across platforms and simplify maintenance.
Roadmap and compatibility
freeRASP has a long-standing track record of protecting production apps on both Android and iOS, including through many OS, device, and ecosystem changes. The new Kotlin Multiplatform variant builds on this foundation, so teams can expect the same stability and compatibility when sharing security logic across platforms.
The SDK evolves continuously to keep pace with fresh reverse-engineering tactics, new jailbreak and root approaches, and other emerging attack techniques, with updates captured in regular releases. The KMP library follows the same versioning and publishing conventions as the existing freeRASP SDKs, which keeps dependency management and CI/CD workflows predictable for engineering teams.
For a concrete view of how the product evolves over time, including new detections, improvements, and fixes, you can review the full history in the freeRASP “What’s New and Changelog” page at https://docs.talsec.app/freerasp/whats-new-and-changelog.
Get started
To start using freeRASP for Kotlin Multiplatform today:
Explore the Talsec Portal to see how detected threats appear in dashboards, reports, and benchmarks
Share feedback or issues via GitHub Issues
Happy coding, Talsec Team 💙
How to Detect a Weak Wi-Fi: Guide to In-App Network Security Checks
In today's interconnected world, the security of user data is paramount. For mobile applications, this responsibility extends beyond the app itself to the environment it operates in—including the Wi-Fi network the device is connected to. A connection to an access point (AP) employing outdated or compromised security protocols can expose users to significant risks, such as data interception and man-in-the-middle attacks.
This article details a pragmatic, "one-size-fits-all" approach for Android developers to identify weakly secured Wi-Fi connections. We aim to balance robust detection with a user experience that avoids unnecessary alarm for networks that, while perhaps not bleeding-edge, are still adequately secure for general use.
import Security
// The token to be stored. In a real app, this wouldn't be hardcoded.
let token = "sensitive_token"
// 1. Create an access control object that requires the current set of biometrics.
// This policy is enforced when you later try to *read* the item.
let access = SecAccessControlCreateWithFlags(
nil,
kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
.biometryCurrentSet, // The key is invalidated if biometrics change.
nil
)!
// 2. Define the attributes for the new keychain item.
// An LAContext is NOT needed to add an item, only to retrieve it.
let attributes: [String: Any] = [
kSecClass as String: kSecClassGenericPassword,
kSecAttrService as String: "com.example.app",
kSecAttrAccount as String: "authToken",
kSecValueData as String: token.data(using: .utf8)!,
kSecAttrAccessControl as String: access // Apply the access control policy.
]
// To ensure this code can be re-run, delete any existing item first.
SecItemDelete(attributes as CFDictionary)
// 3. Add the item to the Keychain.
let status = SecItemAdd(attributes as CFDictionary, nil)
if status == errSecSuccess {
print("✅ Token stored successfully. Future access will require biometrics.")
} else {
print("❌ Error storing token: \(status)")
}
// "auth_token_key" is the Keystore alias for the encryption key. In a real app, the token value encrypted below wouldn't be hardcoded.
val keyGenSpec = KeyGenParameterSpec.Builder(
"auth_token_key",
KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
).apply {
setBlockModes(KeyProperties.BLOCK_MODE_GCM)
setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
setUserAuthenticationRequired(true)
setUserAuthenticationValidityDurationSeconds(60)
}.build()
val keyGenerator = KeyGenerator.getInstance("AES", "AndroidKeyStore")
keyGenerator.init(keyGenSpec)
val secretKey = keyGenerator.generateKey()
// Use AES encryption with this key
val cipher = Cipher.getInstance("AES/GCM/NoPadding")
cipher.init(Cipher.ENCRYPT_MODE, secretKey)
val iv = cipher.iv
val encrypted = cipher.doFinal("token_value".toByteArray())
// Pseudocode: generateAESKey, storeInKeychain, and encryptFile are app-defined helpers.
let aesKey = generateAESKey()
storeInKeychain(key: aesKey)
encryptFile(data: fileData, with: aesKey)
val aesKey = getKeyFromKeystore("file_key")
val cipher = Cipher.getInstance("AES/GCM/NoPadding")
cipher.init(Cipher.ENCRYPT_MODE, aesKey)
val encryptedFile = cipher.doFinal(fileData)
import android.os.Build
import java.io.File
object RootUtil {
val isDeviceRooted: Boolean
get() = checkBuildTags() || checkSuPaths()
private fun checkBuildTags(): Boolean {
val buildTags = Build.TAGS
return buildTags != null && buildTags.contains("test-keys")
}
private fun checkSuPaths(): Boolean {
val paths = arrayOf(
"/system/app/Superuser.apk",
"/sbin/su",
"/system/bin/su",
"/system/xbin/su",
"/data/local/xbin/su",
"/data/local/bin/su",
"/system/sd/xbin/su",
"/system/bin/failsafe/su",
"/data/local/su",
"/su/bin/su"
)
for (path in paths) {
if (File(path).exists()) {
return true
}
}
return false
}
}
// Start detection (asynchronously)
Talsec.start(...)
override fun onRootDetected() {
Log.w("freeRASP", "Device is rooted!")
// Take action if needed
}
// Perform detections (blocking)
val rootBeer = RootBeer(...)
if (rootBeer.isRooted) {
Log.w("RootBeer", "Device is rooted!")
// Take action if needed
}
The "One-Size-Fits-All" Philosophy: Defining "Weak"
The core challenge lies in creating a consistent definition of a "weak" Wi-Fi network that can be applied across different Android versions, which offer varying APIs for network inspection. Our "one-size-fits-all" rule focuses on flagging protocols that are unambiguously compromised or offer no real protection.
The Blacklist: Clearly Insecure Protocols
To implement this, we establish a blacklist of security configurations that should trigger a warning:
Open (Traditional, Unencrypted): Networks with no password and no encryption. On Android S (API 31) and above, this corresponds to WifiInfo.SECURITY_TYPE_OPEN. For older versions, this is inferred by the absence of WPA/WPA2/WPA3/OWE markers in the AP's advertised capabilities.
WEP (Wired Equivalent Privacy): A notoriously broken and deprecated protocol. Represented by WifiInfo.SECURITY_TYPE_WEP on Android S+ and the presence of "WEP" in the capabilities string on older versions.
WPA1 (Wi-Fi Protected Access - PSK/TKIP/CCMP): While an improvement over WEP, WPA1 has known vulnerabilities (especially TKIP) and is significantly weaker than WPA2. This is primarily identified in the legacy path (pre-Android S) by finding "WPA" in the capabilities string without stronger WPA2 or WPA3 indicators. The modern WifiInfo.SECURITY_TYPE_PSK often groups WPA1 and WPA2, making specific WPA1 flagging on S+ tricky without more granular (and often unavailable) system information. For a "not too alarming" approach, if SECURITY_TYPE_PSK is encountered, we generally don't flag it as weak to avoid flagging robust WPA2-PSK networks.
Unknown Security: If the Android system cannot determine the security type of the connected network (WifiInfo.SECURITY_TYPE_UNKNOWN on S+), it's prudent to consider this a potential risk. In the legacy path, this is approximated by encountering very minimal or unparseable capability strings that don't indicate any known security protocol.
What We Don't Blacklist (To Avoid Over-Alarming):
OWE (Opportunistic Wireless Encryption / Wi-Fi Enhanced Open): While it doesn't use a pre-shared key, OWE does provide encryption for open networks, significantly improving privacy over traditional open Wi-Fi. Flagging this could cause undue concern. Identified as WifiInfo.SECURITY_TYPE_OWE on S+ and by an "OWE" marker (or absence of "OPEN" markers if "OWE" is present) in legacy capabilities.
WPA2-PSK (with AES/CCMP): The long-standing secure standard for personal networks. Covered by WifiInfo.SECURITY_TYPE_PSK on S+ and various "WPA2" or "RSN" markers in legacy capabilities.
WPA3 (SAE/Enterprise): The current leading security standard. Represented by WifiInfo.SECURITY_TYPE_SAE (and enterprise variants) on S+ and "WPA3" or "SAE" markers in legacy capabilities.
Practical Implementation for Developers
A robust check involves these key steps:
Permissions: Ensure your app has ACCESS_WIFI_STATE and ACCESS_FINE_LOCATION permissions. Location is crucial for accessing Wi-Fi scan results (pre-S) and detailed connection information (SSID on Q+).
Connectivity Check: First, verify the device is actively connected to a Wi-Fi network. Use ConnectivityManager to check for an active network and ensure its transport type is NetworkCapabilities.TRANSPORT_WIFI. Also, verify WifiInfo provides valid details (e.g., networkId != -1, non-null BSSID/SSID). If not connected to Wi-Fi, the security check is moot.
API Version Branching (SDK_INT):
Android S (API 31) and above: Utilize WifiInfo.currentSecurityType. Compare this integer value against our defined blacklist constants (SECURITY_TYPE_OPEN, SECURITY_TYPE_WEP, SECURITY_TYPE_UNKNOWN).
Pre-Android S: This path is more heuristic.
Retrieve the WifiInfo for the connected network to get its SSID.
Perform a Wi-Fi scan using WifiManager.scanResults.
Find the ScanResult that matches the connected SSID.
Parse the capabilities string of this ScanResult. Look for the presence of "WEP", or the absence of "WPA"/"RSN"/"OWE" (indicating traditional Open), or the presence of "WPA" without stronger WPA2/WPA3 indicators (indicating WPA1).
Incident Reporting/User Notification: If a blacklisted protocol is detected, inform the user or log the incident appropriately.
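For the legacy pre-S path, the capabilities-string heuristic above can be expressed as a pure function. This is a sketch of marker matching, not an exhaustive parser of every capabilities format:

```kotlin
// Classify a ScanResult.capabilities string (e.g. "[WPA2-PSK-CCMP][ESS]")
// against the blacklist: WEP, WPA1-only, and traditional open/unknown are weak.
fun isWeakWifiCapabilities(capabilities: String): Boolean {
    val caps = capabilities.uppercase()
    if ("WEP" in caps) return true                                // WEP: broken protocol
    val strong = "WPA2" in caps || "RSN" in caps || "WPA3" in caps || "SAE" in caps
    if ("WPA" in caps && !strong) return true                     // WPA1 without WPA2/WPA3
    if ("WPA" !in caps && !strong && "OWE" !in caps) return true  // traditional open / unknown
    return false                                                  // WPA2/WPA3/OWE: not flagged
}
```

On Android S and above, prefer WifiInfo.currentSecurityType and skip string parsing entirely.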
Conclusion
By implementing such a Wi-Fi security check, developers can empower their applications to identify potentially hazardous network environments. This "one-size-fits-all" approach, focusing on unambiguously weak protocols, provides a valuable layer of situational awareness for the user without causing unnecessary alarm. It’s a pragmatic step towards fostering a more secure mobile experience, acknowledging the diverse capabilities of the Android ecosystem. Remember that while this check identifies AP capabilities, the actual security also depends on factors like strong passwords and AP firmware integrity, which are beyond an app's direct control but contribute to the overall risk assessment this feature helps initiate.
5000+ apps
N/A
Works Offline
✅
✅
❌ (requires Google Play + Backend)
Detection Response
Listener-based
Manual check
Backend-dependent, server-based validation
Covers Magisk/Hidden Root
✅
❌
❌ (Indirect)
Easy Integration
✅
✅
Moderate (needs server)
Additional Threats Detected
Emulator, Tamper, Debug, Install Source
Root only
Account
Community & Support
Active
Declining
Classic Google — no support whatsoever.
Integration
In-app SDK
In-app SDK
In-app SDK + Backend-dependent, Google-only
Glossary: Root Detection
Glossary: Jailbreak Detection
Simple Root Detection: Implementation and Verification
Dynamic TLS Pinning - Prevents Man-in-the-Middle (MitM) attacks by validating server certificates that can be updated remotely without needing to publish a new app version.
Secret Vault - A secure storage solution that encrypts and obfuscates sensitive data (like API keys or tokens) to prevent them from being extracted during reverse engineering.
Fake users, fraudsters, and reverse engineers love emulators. Here’s how to stop them.
Emulators are powerful tools for developers, but in the wrong hands they become a major security risk. Fraudsters use them to automate fake traffic, bypass device checks, and even reverse-engineer apps. Luckily, you can detect emulator environments in Kotlin and block them before damage is done.
What is an emulator?
An emulator is a software-based environment that mimics a real Android device. While legitimate for app testing, attackers exploit emulators for:
Click fraud – simulating thousands of devices to inflate ad revenue.
Credential stuffing – running automated scripts against login systems.
App tampering – analyzing and modifying your app in a safe, sandboxed space.
An emulator pretends to be a real device, but it’s just software running on a PC. Detecting it is crucial for protecting your app’s integrity.
More about emulators and their usage in gaming:
How to Detect Emulator Usage?
Detecting emulator environments is tricky because attackers constantly adapt.
Basic checks for emulators include looking at:
Device HW statistics: device model, CPU info, screen size/resolution
Suspicious system files
Suspicious processes
However, these are not enough on their own, since they are easy to bypass. Instead, rely on specialised, continuously updated SDKs.
These can provide:
Newly updated detection techniques and emulator detectors
Deeper device checks
A clean API for developers to build on, rather than reinventing the wheel
DIY Coding Guide
You can implement basic emulator detection yourself like this:
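A minimal DIY sketch follows. On a device the inputs come from `android.os.Build` (`FINGERPRINT`, `MODEL`, `HARDWARE`, `PRODUCT`); they are passed as parameters here so the heuristic stays testable off-device, and the marker strings are common emulator defaults, not a complete list:

```kotlin
fun looksLikeEmulator(
    fingerprint: String,
    model: String,
    hardware: String,
    product: String
): Boolean =
    fingerprint.startsWith("generic") || fingerprint.startsWith("unknown") ||
    "google_sdk" in model || "Emulator" in model || "Android SDK built for x86" in model ||
    hardware == "goldfish" || hardware == "ranchu" ||  // QEMU-based emulator kernels
    "sdk" in product

// Example wiring on a device:
// val suspicious = looksLikeEmulator(Build.FINGERPRINT, Build.MODEL, Build.HARDWARE, Build.PRODUCT)
```

Remember that all of these values can be spoofed, which is exactly why the SDK-based approach below is recommended for production.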
Use freeRASP (free library by Talsec)
With freeRASP, the emulator detection utilizes hundreds of advanced checks, offering robust detection even with bypass scripts applied.
Strong emulator detections
Actively maintained
Comes with 14 extra detections like app integrity, Frida and hooking, root/jailbreak detection, debugging, screenshots, etc.
Integration Example
Add the freeRASP dependency to your project, then focus on implementing the following callback:
Commercial Alternatives
When evaluating mobile app security and Runtime Application Self-Protection (RASP), developers often compare various Talsec alternatives to find the right fit for their architecture. The "right choice" depends on the specific problem you need to tackle and which vendor offers the best bang for your buck.
The market is diverse, offering different philosophical approaches to protection. Talsec prioritizes top-tier root detection and a balanced security SDK portfolio covering the most popular attack vectors. Meanwhile, some vendors specialize primarily in heavy code obfuscation and compiler-based hardening, while others focus on a drag-and-drop (no-code) integration experience for DevOps-oriented teams. There are also solutions dedicated specifically to API security, active cloud hardening, enterprise compliance, or gaming protection. The most prominent providers alongside Talsec include Guardsquare, Appdome, Promon, Build38, Approov, and AppSealing.
Simple Root Detection: Implementation and Verification
Introduction
In the area of mobile application security, the development of new techniques is a constant game of cat and mouse. Developers work tirelessly to implement new methods of identifying and mitigating the risks associated with rooted devices, while attackers continually develop more sophisticated tools to bypass these safeguards. In this article, we will explore what rooting is, outline basic root detection techniques, and show you how to effectively test your implementation.
Talsec.start(applicationContext)
override fun onEmulatorDetected() {
Log.w("freeRASP", "Emulator detected!")
// Optionally block sensitive actions or warn the user
}
Check out freeRASP and RASP+ for industry leading root detection
Rooting: What It Is And How It Affects Mobile Security
Rooting is the process of gaining privileged control over an Android device, allowing users to bypass system restrictions and obtain full administrative privileges. This process enables the installation of specialized apps, modification of system settings, or running commands that are otherwise restricted in the default user environment.
On the one hand, these capabilities enhance control over the device, providing users with greater flexibility and customization options. On the other hand, rooting introduces significant security risks, as it can expose the system to unauthorized access, malicious activity, or potential vulnerabilities.
In the context of root detection testing, rooting presents a serious challenge as it disables or bypasses many of the operating system’s built-in mechanisms. Therefore, detecting signs of rooting is crucial for maintaining device integrity and preventing vulnerabilities that could be exploited by attackers.
Shut the Mouse Hole: Stop Attackers with Root Detection
As outlined in the previous section, Android devices with root access are significantly more exposed to mobile malware, privilege escalation exploits, and persistent system compromise due to the lack of enforced security boundaries. These rooted environments often become attractive entry points for attackers, allowing them to gain unauthorized access not only to sensitive user data but also to other components of the mobile ecosystem.
Given these significant risks, it is essential to implement reliable root access detection that can effectively identify compromised devices before serious security breaches occur. Root detection is the first line of defense against these threats and should be an integral part of the security architecture of any application.
However, root detection is not a simple task. As rooting techniques evolve, so do the methods for bypassing detection. Root bypass techniques have become increasingly sophisticated, making it necessary to implement multi-layered root detection mechanisms. Relying on a single method of detection is not sufficient, as attackers often find ways to circumvent basic checks.
There are several common techniques used to identify rooted devices, each targeting a specific characteristic or modification that typically occurs when a device is compromised. By combining these techniques, it is possible to create a more robust method for detecting rooted devices, helping to reduce the security risks posed by compromised devices.
File-based detection
Rooting often leaves behind characteristic traces in the file system, and by probing for these artifacts, it’s possible to identify compromised devices. These artifacts may include binaries, configuration files, or directories commonly associated with rooting tools.
One of the most recognizable indicators is the presence of the su binary, which is typically used to grant superuser privileges to applications. This binary may be located in several paths depending on the rooting method or tool used, such as:
/system/bin/su
/system/xbin/su
/sbin/su
/data/local/su
/data/local/xbin/su
Some devices may also contain utility binaries like busybox, which provides a suite of Unix tools often included in rooted environments:
/system/xbin/busybox
/system/bin/busybox
Root management apps, such as SuperSU or Magisk, may install APKs and daemon scripts to maintain root access. These files can also be detected at known locations:
/system/app/Superuser.apk
/system/etc/init.d/99SuperSUDaemon
/system/xbin/daemonsu
/dev/com.koushikdutta.superuser.daemon/
Implementing a scan for these files can serve as a basic yet effective first layer of root detection.
Process-based detection
Some root detection techniques focus on monitoring and interacting with processes that are commonly associated with root access. Root management tools typically rely on binaries like su to elevate privileges, or sh to execute scripts with root permissions. By attempting to invoke these processes at runtime, it’s possible to detect their presence and functionality, even if their files are hidden or obfuscated.
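A minimal runtime probe along these lines uses `ProcessBuilder` to ask `which` whether a binary is resolvable on PATH. On a rooted device `which su` typically succeeds, while on stock Android it fails; the command name is a parameter, and this is a sketch rather than a hardened check:

```kotlin
// Returns true if an executable with the given name is resolvable on PATH.
fun commandAvailable(name: String): Boolean = try {
    ProcessBuilder("which", name)
        .redirectErrorStream(true)
        .start()
        .waitFor() == 0
} catch (e: Exception) {
    false // "which" itself missing or exec not permitted: treat as unavailable
}

// val rootSuspected = commandAvailable("su")
```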
Package-based detection
Rooted devices often rely on specialized apps to manage superuser access and maintain elevated privileges. These root management tools include Magisk, SuperSU, and older solutions like Superuser.
Android provides APIs to query all installed packages using the PackageManager. By comparing package names against a known list of popular root-related apps, it’s possible to detect the presence of these tools on the device.
Some commonly used package names include:
eu.chainfire.supersu
com.noshufou.android.su
com.koushikdutta.superuser
com.zachspong.temprootremovejb
com.ramdroid.appquarantine
com.topjohnwu.magisk
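Matching against such a list is straightforward. On a device the installed names would come from `PackageManager.getInstalledPackages`; in this sketch the function takes them as input so it can be exercised anywhere:

```kotlin
// Known root-management package names (from the list above).
val rootPackageNames = setOf(
    "eu.chainfire.supersu",
    "com.noshufou.android.su",
    "com.koushikdutta.superuser",
    "com.zachspong.temprootremovejb",
    "com.ramdroid.appquarantine",
    "com.topjohnwu.magisk"
)

// True if any installed package name is on the blacklist.
fun hasRootManagementApp(installed: Collection<String>): Boolean =
    installed.any { it in rootPackageNames }
```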
System property-based detection
Custom or tampered Android builds often leave identifiable traces in system properties. These builds may originate from user-modified ROMs or developer-compiled firmware images and frequently bypass standard security mechanisms. Detecting such modifications can help determine if a device is running a non-standard, potentially insecure operating system.
One common technique involves inspecting the Build.TAGS property. Another indicator is the absence of Google’s Over-The-Air (OTA) update certificates.
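The OTA-certificate check reduces to a file-existence probe. The path is parameterized here for testability; `/etc/security/otacerts.zip` is where stock builds keep Google's OTA certificates (the Build.TAGS check is shown in the RootUtil example elsewhere in this article):

```kotlin
import java.io.File

// Missing OTA certificates suggest a custom or tampered build.
fun otaCertsMissing(path: String = "/etc/security/otacerts.zip"): Boolean =
    !File(path).exists()
```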
Static Analysis for Root Traces
One approach to determine whether an application includes root detection mechanisms is to perform a static analysis. This technique enables the examination of the application’s code without requiring execution. It also helps identify embedded root detection logic, such as techniques outlined in the section Common Root Detection Techniques.
This method is particularly useful during security assessments, where it’s important to understand how thoroughly an application protects itself against rooted environments. The following steps outline how to perform static analysis to detect common root detection implementations:
Check for root detection indicators
Apps may check for the presence of files commonly associated with rooted devices (e.g., /system/xbin/su, /data/data/com.superuser.android.id) or for root management apps (e.g., SuperSU, Magisk).
Run a static analysis tool such as MobSF or Apktool on the app binary to look for common root detection checks.
Non-standard system behavior detection
Check if the app monitors processes that shouldn't normally be running, such as su or sh, which are typically associated with root management tools.
Reviewing the app's smali or assembler code can reveal whether the app checks for or interacts with such processes.
System properties modification detection
Apps may monitor system properties (e.g., ro.debuggable, ro.secure) for changes, adding another layer to the root detection process.
Critical system directories modification detection
Check if the app attempts to modify files or settings in critical system directories, such as /data or /system, which should remain immutable on unrooted devices.
As a result of this analysis, one should be able to observe whether the application contains any of these typical root detection patterns. If such mechanisms are present and clearly target known indicators of root access, it can be concluded that the app implements root detection properly.
It’s important to note that static analysis has limitations and may not reveal all root detection logic, especially if it is obfuscated or implemented using unconventional techniques.
Dynamic Detection of Root Access
Another approach to determine whether an application includes root detection mechanisms is to perform dynamic analysis. This technique involves observing the application’s behaviour at runtime while it operates on a potentially rooted device. It allows testers to understand how the application interacts with the system and whether it performs any real-time checks for root-related indicators.
This method is particularly useful during runtime security assessments, where the goal is to verify if the application can detect and respond to signs of root access under realistic conditions. The following steps outline how to perform dynamic analysis to detect runtime root detection behaviour:
Monitor Application Behaviour
Use tools like strace or similar utilities to trace how the app checks for root access. Look for interactions with the system, such as attempts to open su, check running processes, or read root-specific files. This analysis helps uncover how the app performs root detection and may reveal potential weaknesses.
Bypassing Root Detection Mechanisms
Run a dynamic analysis tool such as Objection to attempt automated root detection bypass. Use commands to manipulate root checks and observe whether the app still correctly detects root access or if its security mechanisms can be bypassed.
As a result of this analysis, one should be able to determine whether the application performs root detection at runtime and how resilient it is to bypass techniques. If the application actively checks for root indicators and responds appropriately, even under attempts to tamper with its logic, it can be considered to have a well-implemented runtime root detection mechanism. However, if no signs of root detection are observed, or if the application’s checks are easily bypassed, it suggests that the implementation may be incomplete or ineffective.
A Hands-On Demo
This section demonstrates how root detection logic present in an Android application can be verified via static analysis using Semgrep. The test is performed on a class containing basic root detection techniques.
In real-world scenarios, security engineers often work with Android applications where the original source code is not available. However, static analysis can still be performed by reconstructing the code using reverse engineering tools like jadx or apktool. These tools allow analysts to obtain Java or Smali code from an APK file.
For the purposes of this hands-on example, we’ll assume the source code is already available (or has been successfully reconstructed) and focus on the static analysis part.
Let’s take a look at a simple class that contains basic root detection logic:
[RootDetection.kt]
class RootDetection(private val context: Context) {

    companion object {
        private const val TAG = "RootCheck"
    }

    fun mastgTest(): String {
        return when {
            checkRootFiles() || checkSuperUserApk() || checkSuCommand() || checkDangerousProperties() -> {
                "Device is rooted"
            }
            else -> {
                "Device is not rooted"
            }
        }
    }

    private fun checkRootFiles(): Boolean {
        val rootPaths = setOf(
            "/system/app/Superuser.apk",
            "/system/xbin/su",
            "/system/bin/su",
            "/sbin/su",
            "/system/sd/xbin/su",
            "/system/bin/.ext/.su",
            "/system/usr/we-need-root/su-backup",
            "/system/xbin/mu"
        )
        return rootPaths.any { path ->
            val exists = File(path).exists()
            if (exists) {
                Log.d(TAG, "Found root file: $path")
            }
            exists
        }
    }

    private fun checkSuperUserApk(): Boolean {
        val exists = File("/system/app/Superuser.apk").exists()
        if (exists) {
            Log.d(TAG, "Found Superuser.apk")
        }
        return exists
    }

    private fun checkSuCommand(): Boolean {
        return try {
            val process = Runtime.getRuntime().exec(arrayOf("which", "su"))
            val reader = BufferedReader(InputStreamReader(process.inputStream))
            val result = reader.readLine()
            if (result != null) {
                Log.d(TAG, "su command found at: $result")
                true
            } else {
                Log.d(TAG, "su command not found")
                false
            }
        } catch (e: IOException) {
            Log.e(TAG, "Error checking su command: ${e.message}", e)
            false
        }
    }

    private fun checkDangerousProperties(): Boolean {
        val dangerousProps = arrayOf("ro.debuggable", "ro.secure", "ro.build.tags")
        dangerousProps.forEach { prop ->
            val value = getSystemProperty(prop) ?: return@forEach
            Log.d(TAG, "Property $prop: $value")
            val dangerous = when (prop) {
                "ro.debuggable" -> value == "1"
                "ro.secure" -> value == "0"
                "ro.build.tags" -> value.contains("test-keys")
                else -> false
            }
            if (dangerous) {
                return true
            }
        }
        return false
    }

    private fun getSystemProperty(prop: String): String? {
        return try {
            val process = Runtime.getRuntime().exec(arrayOf("getprop", prop))
            val reader = BufferedReader(InputStreamReader(process.inputStream))
            reader.readLine()
        } catch (e: IOException) {
            Log.e(TAG, "Error checking system property $prop: ${e.message}", e)
            null
        }
    }
}
Before we begin, make sure you have Semgrep installed. You can install it using:
pip install semgrep
or, if you’re using macOS:
brew install semgrep
If you’re working with code reconstructed from an APK (e.g., RootDetection_reversed.java obtained via jadx), you can run Semgrep on the reversed Java file instead of the original Kotlin source.
Here’s an example of a Semgrep rule that identifies common patterns used in root checks:
When executed, Semgrep will provide results similar to this:
┌────────────────┐
│ 1 Code Finding │
└────────────────┘
RootDetection_reversed.java
❱ rules.root-detection
Root detection mechanisms have been identified in this application.
65┆ Process process = Runtime.getRuntime().exec(new String[]{"which", "su"});
Conclusion
To conclude, implementing effective root detection techniques is crucial for maintaining mobile security and protecting users from potential threats posed by compromised devices. As rooting methods evolve, developers must employ multi-layered detection strategies to stay ahead of attackers who seek to bypass security measures. Combining file-based, process-based, package-based, system property-based, and dynamic detection techniques can create a robust defense against rooting-related risks.
For industry-leading root detection, explore freeRASP and RASP+, which provide advanced security features to safeguard mobile applications from threats.
For additional terminology and security insights about rooting, jailbreaking and hooking, visit the Talsec glossary page.
Martin Žigrai - OWASP MAS contributor, Talsec Mobile Security Engineer
OWASP Top 10 For Flutter – M2: Inadequate Supply Chain Security in Flutter
In the first installment of this series, we explored the pitfalls of storing and handling credentials in your Flutter apps. That conversation underscored how a single compromised credential can jeopardize user data and brand trust.
Now, let’s turn our focus to M2: Inadequate Supply Chain Security—an equally pressing issue in modern mobile development. Safeguarding your Flutter supply chain is critical, as malicious actors continuously seek footholds through third-party dependencies, SDKs, pipelines, and distribution channels.
Introduction to Supply Chain Security in Flutter
When we talk about supply chain security in mobile app development, we mean protecting every component that goes into your app—from third-party libraries and SDKs to build tools and distribution channels. Modern apps depend on these external components to deliver features quickly. Yet, each component also introduces risk since attackers can target them as a “weak link.” Let me remind you of some real-world incidents:
XcodeGhost: A compromised version of Apple’s Xcode tool injected malware into iOS apps, leading to large-scale tampering in the App Store.
Mintegral (SourMint) SDK: A widely used advertising SDK secretly committed ad fraud and spied on user clicks, impacting 1,200+ iOS apps.
These examples highlight that attackers can still infiltrate your app via third-party components even if you don't write malicious code yourself. Flutter developers need to learn from these incidents, especially as the Flutter ecosystem grows rapidly on pub.dev.
Understanding Flutter’s Supply Chain Risks
To effectively secure your Flutter application, you must clearly understand where and how your supply chain can be compromised. Below is a diagram illustrating the journey from source code to end users:
Let’s delve deeper into each risk area.
1. Dependency Management Risks
Flutter apps rely heavily on packages hosted on pub.dev. While this ecosystem boosts productivity, it can also introduce vulnerabilities:
Malicious or Compromised Packages: Attackers may create Trojan packages disguised as legitimate ones or compromise widely-used packages to inject malicious code. For example, imagine you include a popular HTTP client package (fake_http) to simplify networking in your app:
If an attacker infiltrates this package on pub.dev, the malicious code can silently intercept user data:
To mitigate this, always review package changes, prefer verified publishers, and monitor for suspicious behaviors.
Outdated Dependencies: Old package versions may contain known vulnerabilities, potentially exposing your app to exploits. Regularly audit and update your dependencies to protect against such risks.
Dependency Confusion: If your build environment isn’t strictly configured, Flutter might pull packages from unintended sources, resulting in compromised or malicious code integration. For example, your internal package named internal_logging could inadvertently pull a malicious external version if pub.dev is prioritized:
Misconfiguration or missing the private registry could fetch the package from the public source, creating a confusion attack scenario. Configure repositories and enforce private hosting policies.
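As a sketch, pinning an internal package to a private host in pubspec.yaml might look like this (the registry URL and version are hypothetical):

```yaml
dependencies:
  internal_logging:
    hosted:
      name: internal_logging
      url: https://pub.internal.example.com   # private registry, never pub.dev
    version: ^1.2.0
```

With an explicit `hosted` block, pub resolves the package only from the named registry, so a same-named public package cannot be substituted.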
2. Third-Party SDK Risks
Third-party SDKs are often integrated directly into Flutter via plugins. Each SDK you use is a potential risk vector:
Obfuscated or Closed-Source SDKs: Closed-source or obfuscated SDKs (common with advertising or analytics plugins) may conceal malicious logic. As a real-world example, Mintegral SDK in 2020 secretly committed ad fraud and collected user data.
Compromised APIs: Attackers can leverage vulnerabilities in third-party backend APIs or services your Flutter app interacts with (Firebase, cloud configurations, etc.). For example, if your app fetches remote configuration over an insecure API endpoint, attackers could intercept the traffic and inject malicious configurations. Consistently enforce HTTPS, validate responses, and regularly audit third-party APIs.
Unvetted Native Binaries: Flutter plugins that integrate native code via Gradle (Android) or CocoaPods (iOS) carry additional supply chain risks, as native binaries can be tampered with. Always prefer plugins with transparent source codes and verified binaries.
3. Build Pipeline Vulnerabilities
Your CI/CD pipeline itself is vulnerable and can be exploited:
CI/CD Pipeline Access: Attackers gaining access to your pipeline could insert malicious build steps. For example, a compromised GitHub Action could introduce harmful commands:
Securing pipeline access through least privilege, MFA, and audit logging is essential. And whenever a build step pipes curl output into a shell or runs downloaded .sh files, make sure you understand exactly what the script does before running it.
Exposed Signing Keys: If Android (.jks) or iOS (.p12) signing keys are compromised, attackers could publish malicious app updates. Secure keys using encrypted storage (e.g., GitHub Secrets, AWS KMS) and regularly rotate keys to mitigate potential damage.
Artifact Tampering: Without verifying final binaries (.apk, .aab, .ipa), an attacker could replace artifacts. As a best practice, generate and store cryptographic hashes post-build:
Compare these hashes before deploying or distributing to ensure artifact integrity.
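The two steps above can be sketched with standard tooling (the artifact path shown is Flutter's default APK output; adjust it for your .aab or .ipa):

```shell
# After the build step, record a SHA-256 digest of the artifact.
APK=build/app/outputs/flutter-apk/app-release.apk
sha256sum "$APK" > "$APK.sha256"

# Later, before distributing, verify the artifact is unchanged
# (exits non-zero on any mismatch).
sha256sum -c "$APK.sha256"
```

Store the digest file separately from the artifact (e.g., in your CI's secure artifact store) so an attacker who swaps the binary cannot also swap the hash.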
Securing Dependencies in Flutter
Managing Flutter dependencies securely is the bedrock of supply chain protection. Here are the essentials:
Let's explore what's possible when vetting packages.
Choosing Trusted Packages: Before including a dependency, thoroughly assess its reliability and security:
Prefer packages with active maintainers, regular updates, and transparent changelogs.
Look for a high popularity score and verified publishers (indicated by a "verified publisher" badge on pub.dev).
Regularly review the package's repository for suspicious or unusual activities.
Limiting Dependencies: Every additional package increases your risk exposure. Evaluate each dependency critically:
Assess if the functionality is critical or can be efficiently implemented internally.
Prefer fewer, well-vetted dependencies over numerous less secure packages.
For example, if you only need a small portion of a large UI toolkit, consider implementing the required component yourself, reducing potential vulnerabilities:
Comprehensive Dependency Assessment Checklist:
Here is a comprehensive checklist that I recommend you follow:
[ ] Verified publisher
[ ] Frequent updates (recent commits within the last three months)
[ ] Active community and responsive maintainers
[ ] Comprehensive documentation
[ ] No unresolved critical vulnerabilities
[ ] Clear license terms (MIT, Apache, BSD, etc.)
[ ] Minimal and justified permissions required
[ ] Well-tested with good code coverage
2. Maintaining and Enforcing the Lockfile (pubspec.lock)
I want to start by mentioning what the Dart team recommends from the official website:
The pubspec.lock file is a special case, similar to Ruby's Gemfile.lock.
For regular packages, don't commit the pubspec.lock file. Regenerating the pubspec.lock file lets you test your package against the latest compatible versions of its dependencies.
For application packages, we recommend that you commit the pubspec.lock file. Versioning the pubspec.lock file ensures changes to transitive dependencies are explicit. Each time the dependencies change due to dart pub upgrade or a change in pubspec.yaml the difference will be apparent in the lock file.
Now that you understand, let's review what you must do for the lock file.
Commit the Lockfile: Always commit your pubspec.lock file to your version control system. It records exact dependency versions and their cryptographic hashes, ensuring consistency across builds:
Dependency Hash Checking: Flutter automatically verifies dependency hashes in pubspec.lock whenever dependencies are fetched:
Suppose the package content changes unexpectedly on the package repository; Flutter identifies the discrepancy, protecting you from dependency tampering.
A warning or error appears, alerting you to investigate the issue.
Enforcing Lockfile Integrity in CI/CD: To ensure your CI/CD pipeline uses only validated dependencies, consistently enforce lockfile integrity during builds:
If the content hash of any dependency doesn't match the lockfile, the build process will fail, immediately highlighting the potential security issue:
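As a CI fragment, this can be a single step (the `--enforce-lockfile` flag is available in Dart 3.0+ and recent Flutter SDKs):

```shell
# Fails the build if any resolved package differs from pubspec.lock,
# including content-hash mismatches.
flutter pub get --enforce-lockfile
```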
Here is a quick review in an illustration below:
Integrating Vulnerability Scanning and SBOM Generation
Protecting your Flutter application’s supply chain requires proactive monitoring and comprehensive documentation of your dependencies. Here’s how to implement robust vulnerability scanning and maintain an accurate Software Bill of Materials (SBOM) to enhance security posture.
1. Vulnerability Scanning Integration
Integrating automated vulnerability scanning tools helps identify and mitigate security issues promptly. These tools continuously monitor your project's dependencies listed in your pubspec.lock, alerting you to potential vulnerabilities and recommending necessary actions. Popular tools for Flutter include:
Snyk: This offers robust integration with GitHub and other CI/CD tools, automatically scanning your dependencies for known vulnerabilities and providing actionable alerts and fixes.
GitHub Dependabot: Automatically scans your repository for outdated or vulnerable dependencies, generates pull requests to update them, and provides detailed vulnerability information.
To integrate Dependabot into your GitHub repository, create a file .github/dependabot.yml in your repository root:
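A minimal configuration for a Flutter project might look like this (Dependabot's `pub` ecosystem support is assumed; adjust the schedule to your needs):

```yaml
version: 2
updates:
  - package-ecosystem: "pub"   # scans pubspec.yaml / pubspec.lock
    directory: "/"             # location of the manifest in the repo
    schedule:
      interval: "daily"
```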
This configuration ensures Dependabot checks your dependencies daily, automatically creating pull requests if vulnerabilities or updates are found, keeping your Flutter app secure and up-to-date.
2. Software Bill of Materials (SBOM) Generation
An SBOM is a detailed, machine-readable inventory of all software components included in your application, along with their respective versions. Maintaining an SBOM lets you quickly pinpoint vulnerabilities within your software components, significantly reducing response times during security incidents. Here are some benefits of the SBOM that I can think of:
Comprehensive visibility into your application’s dependencies.
Faster identification and remediation of vulnerable components.
Enhanced compliance with security standards and regulatory requirements.
CycloneDX is a widely used standard for generating SBOMs. You can automate SBOM creation in your CI pipeline using tools like CycloneDX CLI:
To use the CLI for Flutter projects, you can follow the steps below on macOS:
However, there are two more straightforward ways to generate an SBOM. You can either use cdxgen, or the sbom Dart package (I have not used the latter myself yet). Let's continue with cdxgen.
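A sketch of the cdxgen route (the `-t dart` project type reflects cdxgen's documented ecosystem support at the time of writing; verify against the current docs):

```shell
# Install the CycloneDX generator and emit an SBOM from pubspec.lock.
npm install -g @cyclonedx/cdxgen
cdxgen -t dart -o sbom.json .
```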
This command generates an SBOM in JSON format (sbom.json), which you can store with your artifacts or utilize during security audits. Then, after generating this file, we can analyze it with the following command:
Here is an example of the sbom.json
You can analyze your SBOM for security vulnerabilities using:
Combining Scanning and SBOM in Your CI/CD Workflow
Incorporating vulnerability scanning and SBOM generation into your CI/CD pipeline strengthens your supply chain security. Consider uploading and tracking the SBOM with proper tools from the CI/CD. Now, let's look at the native and Dart runtime security aspects.
Native and Dart-Side Runtime Checks
Even with vigilant dependency management, you might still face tampering or repackaging. Flutter offers various ways to verify integrity at runtime:
1. Self-Integrity Verification
Packages like freeRASP and app_integrity_checker can retrieve the checksums and signing certificate details of your app. Compare these values against known-good references on your server.
2. Minimizing Trust in Third-Party Code
Limit Permissions: If a plugin doesn’t need sensitive permissions, don’t grant them. The OS sandbox can prevent malicious code from accessing off-limits features.
Feature Flags: Wrap calls to third-party SDKs or modules in toggles so you can remotely disable them if a supply chain breach is discovered.
3. freeRASP for Flutter
freeRASP by Talsec provides runtime application self-protection (RASP) features, including checks for:
Repackaging or signature changes
Debugger and hook detection
Root/Jailbreak detection
FreeRASP configuration is pretty straightforward, and integrating it into a Flutter application is seamless.
Below is an example of using freeRASP in a real Flutter application.
If freeRASP detects suspicious activity (e.g., your app was re-signed), it triggers callbacks. You can warn users, disable certain features, or shut down the app to protect sensitive data.
Mitigating CI/CD Pipeline Vulnerabilities
Your CI/CD pipeline is critical infrastructure; a single vulnerability here can compromise your entire Flutter application. Protecting your build pipeline is just as important as protecting your source code. Here’s how you can comprehensively secure your pipeline:
1. Secure Your Build Environments
CI/CD environments should be tightly controlled to prevent unauthorized access and minimize attack surfaces:
Restrict Access: Limit who can access your CI/CD infrastructure and securely store sensitive credentials (signing keys, API tokens).
Ephemeral Build Agents: Utilize ephemeral (temporary) build agents or Docker containers that reset after each build, ensuring clean, uncontaminated environments.
Logging and Auditing: Enable comprehensive logging and auditing features to track changes in CI configurations and identify who triggered builds, facilitating rapid incident response.
2. Protecting Signing Keys and Certificates
Your app’s signing keys are highly sensitive; compromise means attackers could distribute malicious updates:
Android: Store your .jks keystore files securely outside source control, preferably using encrypted storage such as GitHub Secrets, AWS KMS, or HashiCorp Vault.
iOS: Store your .p12 certificates securely or leverage Apple’s automated code signing capabilities.
Here is an example from a GitHub Actions workflow:
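One common pattern is to keep the keystore base64-encoded in GitHub Secrets and decode it only inside the job (the secret and file names below are hypothetical, and a Flutter SDK is assumed on the runner):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Decode Android keystore from encrypted secret
        run: |
          echo "${{ secrets.KEYSTORE_BASE64 }}" | base64 --decode > android/app/upload-keystore.jks
      - name: Build signed release
        run: flutter build apk --release
```

The keystore never lives in the repository, exists only for the lifetime of the job, and access to it is controlled by the repository's secrets permissions.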
3. Enforcing Artifact Integrity
Ensure the integrity of your build artifacts to detect any unauthorized changes:
Cryptographic Hashing: Generate cryptographic hashes (e.g., SHA256) for your build artifacts. Verify these hashes before deployment.
Artifact Signing: Utilize advanced signing tools like Sigstore or Cosign for cryptographic artifact signing, providing verifiable proof of authenticity and provenance.
4. Implementing Reproducible Builds
Achieving reproducible builds allows detection of unauthorized modifications:
Deterministic Environments: Pin exact Flutter and Dart SDK versions, dependencies, and environmental configurations.
Build Provenance: Create and maintain a Software Bill of Materials (SBOM) and integrate the SLSA framework to document build inputs and ensure reproducibility.
5. Manual Approval and Code Reviews
Implementing human oversight in your CI/CD processes greatly enhances security:
Manual Approval Steps: Even with automated deployments, integrate manual approval processes to provide additional verification points.
Peer Code Reviews: Enforce mandatory code reviews and pair programming for sensitive changes, especially CI configuration updates.
This workflow triggers a manual approval in GitHub before proceeding with the deployment.
CI/CD Security Checklist
I made this checklist for you to make it easier to ensure you are following best practices.
[ ] Use ephemeral build environments to ensure isolation.
[ ] Generate cryptographic hashes or signatures for build artifacts.
[ ] Achieve reproducible builds through deterministic configurations.
[ ] Implement SBOM generation and artifact signing tools.
[ ] Enforce manual approvals and thorough code reviews for critical deployments.
Other Protection Techniques
Adopting advanced security measures significantly strengthens your application against sophisticated supply chain threats. These techniques provide deeper assurances and comprehensive oversight of your Flutter app development and distribution processes.
1. Implementing the SLSA Framework (Supply Chain Levels for Software Artifacts)
The SLSA framework, developed by Google, defines incremental security maturity levels for software artifacts, ensuring transparency and trustworthiness in your build and deployment processes:
Level 2 - Signed Provenance: Sign your build artifacts cryptographically to prove authenticity.
Level 3 - Auditable Builds: Conduct your builds in controlled, secure environments, ideally ephemeral or isolated.
Level 4 - Hermetic and Reproducible: Achieve fully reproducible builds with high security standards.
Here is a hypothetical example of achieving Level 2 compliance in Flutter CI using Sigstore:
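A key-based variant with cosign might look like this (key management and artifact paths are illustrative; cosign also supports keyless signing via OIDC):

```shell
# Sign the release artifact; the private key is injected via the environment.
cosign sign-blob --key env://COSIGN_PRIVATE_KEY \
  --output-signature app-release.apk.sig \
  build/app/outputs/flutter-apk/app-release.apk

# Anyone holding the public key can verify provenance before distribution.
cosign verify-blob --key cosign.pub \
  --signature app-release.apk.sig \
  build/app/outputs/flutter-apk/app-release.apk
```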
2. Reproducible Builds
Reproducible builds allow you to verify that the same source code always produces an identical binary. This enables the detection of tampering and unauthorized modifications:
Pin exact versions of your Flutter and Dart SDKs.
Use standardized Docker or VM environments for consistency across builds.
Here is an example of a deterministic Docker setup:
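A sketch of such a setup (the base image and tag are hypothetical; pin by digest for stronger guarantees):

```dockerfile
# Pin an exact Flutter SDK image so every build uses identical toolchains.
FROM ghcr.io/cirruslabs/flutter:3.19.6

WORKDIR /app
# Copy manifests first so dependency resolution is cached and reproducible.
COPY pubspec.yaml pubspec.lock ./
RUN flutter pub get --enforce-lockfile

COPY . .
RUN flutter build apk --release
```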
3. Continuous Monitoring and Auditing
Continuously audit your dependencies and pipeline:
Regularly review dependency changes for unusual patterns.
Integrate automatic alerts for dependency or artifact integrity issues.
4. Emergency Response Planning
Have a well-defined response plan in case of supply chain compromise:
Prepare rapid revocation strategies for compromised credentials.
Implement remote kill-switches or feature flags to disable compromised components quickly.
Advanced Protection Techniques Checklist
Considering what we have learned so far, I have prepared a checklist that you can use to evaluate your process.
[ ] Adopt SLSA principles progressively.
[ ] Establish reproducible and deterministic builds.
[ ] Regularly generate and store SBOMs.
[ ] Implement continuous security scanning and monitoring.
[ ] Use cryptographic signing and artifact attestation.
[ ] Plan and rehearse an emergency response strategy.
Conclusion
Inadequate Supply Chain Security (OWASP M2) goes beyond just picking “safe” packages—it’s about securing the entire lifecycle of your Flutter app, from development to distribution. Attackers increasingly target the supply chain to inject malicious code or tamper with final builds.
Harden Dependencies: Vet packages, lock versions, and monitor vulnerabilities.
Embed Runtime Protection: Tools like freeRASP can detect tampering, ensuring the app your users run is the one you built.
Secure Your CI/CD: Lock down secrets, sign artifacts, and enforce integrity checks in every pipeline stage.
Adopt Advanced Techniques: SLSA, reproducible builds, and SBOMs can give you deeper assurances against hidden threats.
Stay proactive—attackers evolve, and supply chain security must constantly adapt in response. Implement these strategies today to ensure your Flutter apps remain safe, reliable, and worthy of your users’ trust.
Majid Hajian - Azure & AI advocate, Dart & Flutter community leader, Organizer, author
OWASP Top 10 For Flutter – M4: Insufficient Input/Output Validation in Flutter
Welcome back to our deep dive into the OWASP Mobile Top 10, explicitly tailored for Flutter developers. In our last article, we tackled M3: Insecure Authentication and Authorization, exploring how lapses in identity and permissions checks can lead to serious breaches.
Today, we shift gears to M4: Insufficient Input/Output Validation, arguably one of the most pervasive and deceptively simple risks in any application. Last year, a popular finance app accidentally let users paste SQL payloads into its search bar, wiping out months of user data overnight. That’s the exact kind of risk M4 warns us about.
Even the most bulletproof authentication logic can be undone instantly if your app blindly trusts data crossing its boundaries. In a typical Flutter project, you’re juggling form inputs, HTTP APIs, deep links, platform channels, and WebViews, each a potential entry point for malformed or malicious data.
Over the following sections, we’ll define what OWASP means by “Insufficient Input/Output Validation,” and arm you with practical Dart and Flutter examples to lock down every trust boundary. Let’s dive in!
Picture your Flutter app as a fortress. In our previous article, we built high walls around identity and sessions, but what about the gates where data flows in and out? M4 is all about guarding those gates.
When OWASP talks about input/output validation, they mean two complementary practices:
Input Validation: Checking every piece of incoming data (user input, API responses, deep links, messages from native code) to ensure it matches the exact shape and constraints you expect.
Output Sanitization: Cleaning or encoding data before it leaves your app (when you render it in the UI, send it back to a server, or hand it over to native code), so no hidden threats slip through.
Think of input validation as inspecting every carriage entering the fortress, no weapons, no stowaways. Output sanitization is like searching messengers before they depart, ensuring they carry no secret orders for sabotage.
Common trust boundaries in a Flutter app include:
Forms and TextFields where users type data
HTTP requests and responses from your backend
Deep links or app links that jump into specific screens
If any of these gates is left unchecked, attackers can inject SQL commands, sneak in scripts for XSS, manipulate file paths, or corrupt data. The OWASP 2023 Mobile Top 10 ranks these flaws as common and high-impact, meaning they happen frequently and can cause real damage.
Our goal in the following sections is to ensure that both checkpoints, validation and sanitization, are rock-solid so that malicious data can neither reach the heart of your app nor escape it.
How Validation Flaws Manifest in a Flutter or Dart App
This section explores four common patterns of M4 failures, explains why they’re dangerous, and provides clear examples of how to fix them. While many examples focus on the Flutter frontend, the principles apply equally to Dart-based backends. Once you understand the underlying issues and solutions, use them wherever needed in your Flutter and Dart codebases.
1. Insufficient Input Validation
Every time your app accepts free‑form data, from a search box to a deep link, it’s like inspecting every carriage at your fortress gate: "Every carriage must be searched for hidden weapons". Attackers can turn that innocent-looking bundle into a weapon if you let any unchecked item slip through.
A. SQL Injection in Local Databases
Imagine a simple search field wired straight into your SQLite database:
An attacker typing:
won’t see search results; they’ll crash your app and delete your users table. Local stores often hold sensitive profiles, tokens, or settings so that a single flawed query can expose or wipe everything.
A safer approach is to parameterize the query and allow‑list any dynamic parts:
Here, the ? placeholder keeps the user’s input separate from SQL code, and we verify sortColumn against a known list instead of concatenating it blindly.
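Since the snippet itself isn't reproduced here, a minimal Dart sketch of the allow-listing half follows (table and column names are hypothetical; with sqflite, the user's search term would travel via a `?` placeholder in `whereArgs`, never string concatenation):

```dart
// Only these columns may ever appear in ORDER BY.
const sortableColumns = {'name', 'price', 'created_at'};

/// Anything outside the allow-list falls back to a fixed default
/// instead of being concatenated into the SQL string.
String safeSortColumn(String requested) =>
    sortableColumns.contains(requested) ? requested : 'name';

// With sqflite, the combined call would look roughly like:
//   db.query('items',
//       where: 'name LIKE ?', whereArgs: ['%$term%'],
//       orderBy: safeSortColumn(sort));
```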
B. Command Injection via Platform Channels
When Dart invokes native scripts, the stakes are just as high. Suppose you do:
If the native side executes:
then an input like:
could run arbitrary shell commands.
Treat the Dart–native boundary like another untrusted gate: check the path’s pattern before you send it:
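One way to sketch that gate in Dart (the allowed pattern is illustrative; tighten it to your actual directory layout):

```dart
// Only word characters, dashes, slashes, and dots are permitted.
final _scriptPath = RegExp(r'^[A-Za-z0-9_\-/.]+$');

/// Rejects shell metacharacters (excluded by the pattern) and
/// path traversal via '..' before the value crosses the channel.
bool isSafeScriptPath(String path) =>
    _scriptPath.hasMatch(path) && !path.contains('..');
```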
And remember: always repeat similar validation in your native code too—never let Dart’s checks be the only line of defense.
C. Deep‑Link Parameter Poisoning
Deep links let you jump straight to a payment screen or a product detail page, but they also open a backdoor if you trust their parameters blindly:
An attacker could send:
or a non‑numeric string, crashing your parser or injecting malicious data. A safer flow is to parse, validate, and then sanitize:
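A minimal sketch of such a guard (the parameter name and bounds are illustrative):

```dart
/// Returns a validated product id, or null if the deep-link
/// parameter is missing, non-numeric, or out of range.
int? parseProductId(String? raw) {
  if (raw == null) return null;
  final id = int.tryParse(raw);        // syntactic check: integer only
  if (id == null || id <= 0) return null; // semantic check: positive id
  return id;
}
```

Routing code then treats a null result as "reject and fall back to a safe screen" rather than navigating with attacker-controlled data.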
You can even write integration tests or run a MITM proxy (like Charles) to feed malformed links and ensure your app rejects them cleanly.
D. Syntactic vs. Semantic Validation
It’s not enough to check format (“can this parse?”); you must also check the meaning (“does it make sense?”). For example, validating date ranges:
Here, parsing ensures you won’t crash on invalid text; comparing dates enforces your business rule.
E. Unicode Normalization & Safe Character Sets
Free‑form text can include emojis, accents, and look‑alike characters. Normalize it so that “é” is always one code point, then allow only expected character classes:
This prevents hidden control codes or bypasses that sneak past naive filters.
F. Avoiding Regex Denial‑of‑Service
Complex regex patterns with nested quantifiers can lock up your UI if an attacker feeds crafted input. Always anchor and simplify:
Test any new regex against long, repetitive strings in a small Dart script to confirm it completes in milliseconds.
G. Client‑Side File Upload Validation
If your app lets users pick images or documents, don’t send them straight to the server:
Extension allow‑list: .jpg, .png, .pdf
Size limit: e.g., 5 MB
Content sniffing
H. Canonicalize Before You Validate
Attackers often use percent‑encoding or Unicode variants to slip past allow‑lists. Always decode into one canonical form, then apply your checks:
%2e%2e vs ..: Without decoding, RegExp(r'^[\w\-]+$') might miss a path‑traversal attack.
Mixed‑width Unicode characters can hide dangerous sequences if you validate on raw code units.
Making canonicalization the first step of every validation routine ensures your regex and length checks see the actual data, not a cleverly encoded variant.
I. NoSQL / GraphQL Injection
While SQL injection is well‑known, modern apps often talk to NoSQL stores or GraphQL endpoints. Unvalidated inputs in query objects or GraphQL queries can let attackers manipulate filters or execute unauthorized queries.
To fix it, allow‑list field names and validate types before constructing queries:
Beyond filter injections, be cautious when dynamically building Firestore document paths based on user input. If you let users control path segments like userId, projectName, or orderId without validation, they might access or overwrite data that doesn't belong to them, especially if your Firestore rules are too broad.
An attacker could craft inputProjectId as ../../admins/root, possibly navigating out of bounds or triggering unexpected rules behavior. The fix is to validate all Firestore path components explicitly.
Avoid path control characters like / or . unless absolutely necessary. Firestore treats document and collection paths as part of its access control model, so a poorly validated string can become a security bug.
J. JSON Schema Validation
While you can create your own JSON validator, using a package might work for you too. For complex payloads, embed a JSON Schema validator to enforce structure:
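A minimal sketch using the community json_schema package (the schema fields and limits here are illustrative, not from the original; check the package docs for the exact API of the version you install):

```dart
import 'package:json_schema/json_schema.dart';

final paymentSchema = JsonSchema.create({
  'type': 'object',
  'required': ['amount', 'to'],
  'properties': {
    'amount': {'type': 'number', 'minimum': 0},
    'to': {'type': 'string', 'maxLength': 32},
  },
});

void handlePayload(Map<String, dynamic> payload) {
  final results = paymentSchema.validate(payload);
  if (!results.isValid) {
    throw const FormatException('Payload failed schema validation');
  }
  // Only map to your model classes after the structure is verified
}
```

This keeps the structural rules in one declarative place instead of scattering `is`-checks across your parsing code.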
All in all, there is one thing I want to emphasize again: client‑side vs. server‑side validation. All these checks in your Flutter app are essential for immediate feedback, but a determined attacker can bypass them (e.g., by intercepting traffic with a proxy). Always replicate critical validation rules on the server before processing or storing any data. Use client‑side validation for a smooth UX, but enforce the same strict type, format, length, and semantic checks in your backend to guarantee security.
2. Insufficient Output Sanitization
Even if input is clean, output can become dangerous when sent to contexts that interpret it, particularly HTML, CSV, or logging systems.
A. XSS in WebView
Loading unfiltered HTML is a direct path to script execution:
Instead:
Sanitize HTML before loading:
Disable JavaScript if possible:
Whitelist tags and attributes if JS is needed.
B. CSV Injection
If you export user data into a CSV, spreadsheet apps may treat cells starting with = or - as formulas:
To prevent this, prefix dangerous cells with a single quote:
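A small helper sketch (the function name is mine) that neutralizes formula-triggering prefixes before a value is written to a cell:

```dart
/// Prefix cells that spreadsheet apps would interpret as formulas
/// (=, +, -, @) with a single quote so they render as plain text.
String sanitizeCsvCell(String value) {
  const formulaTriggers = ['=', '+', '-', '@'];
  if (value.isNotEmpty && formulaTriggers.contains(value[0])) {
    return "'$value";
  }
  return value;
}
```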
C. Unsafe URL Generation
When building query URLs:
Always use Dart’s Uri helpers:
With unsafe outputs now neutralized, let’s turn next to how context shapes what “safe” really means.
3. Lack of Contextual Validation
A string that’s safe in one scenario might be fatal in another. Context is king.
A. File Path Traversal
If fileName is ../../etc/passwd, you could read system files (on rooted devices). A safe pattern:
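One way to sketch that pattern with the path package (directory and function names are mine):

```dart
import 'dart:io';
import 'package:path/path.dart' as p;

File openUserFile(Directory appDir, String fileName) {
  // basename() strips directory components: ../../etc/passwd -> passwd
  final safeName = p.basename(fileName);
  if (safeName != fileName || safeName.isEmpty) {
    throw Exception('Invalid file name');
  }
  return File(p.join(appDir.path, safeName));
}
```

Rejecting (rather than silently rewriting) names that contain path components also surfaces probing attempts in your logs.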
B. Dynamic UI Generation
If you let the server send widget definitions as JSON, a malformed request could crash your app or introduce logic flaws. Always validate the JSON schema before mapping it to widget code. Check "JSON Schema Validation" section to learn more how you can do that.
C. Accessibility & Localization
Don’t forget to validate user inputs for different locales (dates, numbers) and ensure error messages and validation cues are announced via Flutter’s accessibility widgets (e.g., `Semantics`, `SnackBar`) so all users get clear, localized feedback. Here is an example for inspiration.
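A hedged sketch: a numeric field whose validation error is localized and exposed to assistive technologies (the `localizedInvalidAmount` string is a hypothetical entry in your l10n bundle):

```dart
Semantics(
  label: 'Payment amount, required',
  textField: true,
  child: TextFormField(
    decoration: const InputDecoration(labelText: 'Amount'),
    keyboardType: TextInputType.number,
    validator: (value) {
      // tryParse avoids crashes on locale-specific or malformed input
      final amount = double.tryParse(value ?? '');
      if (amount == null || amount <= 0) {
        return localizedInvalidAmount; // screen readers announce this error
      }
      return null;
    },
  ),
)
```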
4. Failure to Validate Data Integrity
Even well‑formed and properly encoded data can be altered once stored or cached. You must verify it hasn’t been tampered with.
A. SharedPreferences Flag Tampering
B. Verifying Remote Config or Assets
If you download a JSON config or an asset at runtime (for feature toggles, theming, etc.), use a signature or checksum provided by your server. After download:
Compute the SHA‑256 of the payload.
Compare it against the checksum you received over a secure channel.
If they don’t match, reject the payload and fall back to defaults.

Now that we’ve secured every gate for inputs, outputs, and stored data, it’s time to adapt these defenses to platform‑specific rules.
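Those verification steps might look like this with the crypto package (the expected checksum is assumed to arrive from your server over a secure channel):

```dart
import 'package:crypto/crypto.dart';

/// Returns true only if the downloaded bytes hash to the checksum
/// the server advertised. On false, discard and use defaults.
bool verifyPayload(List<int> payloadBytes, String expectedSha256) {
  final digest = sha256.convert(payloadBytes).toString();
  return digest == expectedSha256;
}
```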
Platform‑Specific Nuances
Even though Flutter gives us a single codebase, Android and iOS (and Windows, Web, macOS, and Linux) enforce different rules under the hood. Your validation strategy must account for those differences.
1. File System & Sandbox
iOS: Apps live in a strict sandbox. Even if you allow ../ in a filename, iOS will block access outside your container.
Android: Modern Android uses scoped storage, but if you request legacy or external‑storage permissions, a path‑traversal attack can hit shared directories.
To mitigate:
Always use path_provider to get the correct app directory.
Normalize and strip directory parts with path.basename().
Target scoped storage on Android and avoid broad storage permissions unless absolutely necessary.
2. Intents & URL Schemes
Flutter plugins like uni_links or receive_sharing_intent let you handle incoming data:
Tip: On Android, set android:autoVerify="true" in your AndroidManifest.xml to reduce phishing via fake intents, but Dart-side validation remains essential.
3. WebView Differences
Android WebView (Chromium-based) lets you disable file access and set Safe Browsing flags, but enabling JavaScript will run any script in loaded HTML.
iOS WKWebView respects Content Security Policies if you inject them, but will execute JavaScript if enabled.
Secure WebView setup for both platforms:
If you must enable JavaScript (e.g., for interactive widgets):
Sanitize the HTML.
Restrict JS bridges (JavaScriptChannel on Android, message handlers on iOS) to known methods.
Clear cache and data on logout to remove leftover scripts or cookies.
4. Native Code & MethodChannels
Platform channels let Dart talk to Kotlin/Java or Swift/Objective-C, another trust boundary that needs validation on both sides.
On the Kotlin side:
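A sketch of that Kotlin-side validation (the channel name, key pattern, and loadConfig helper are placeholders I've introduced for illustration):

```kotlin
MethodChannel(flutterEngine.dartExecutor.binaryMessenger, "app/config")
    .setMethodCallHandler { call, result ->
        when (call.method) {
            "getConfig" -> {
                val key = call.argument<String>("key")
                // Re-validate natively; never trust Dart's checks alone
                if (key == null || !Regex("^[A-Za-z0-9_]{1,64}$").matches(key)) {
                    result.error("INVALID_ARG", "Bad config key", null)
                } else {
                    result.success(loadConfig(key))
                }
            }
            else -> result.notImplemented()
        }
    }
```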
Use Pigeon to auto‑generate type-safe stubs and eliminate a whole class of runtime type errors.
By understanding sandbox models, intent handling, WebView quirks, and native bridges, you can tailor your validation strategy to close every gap.
Completing OWASP’s Prevention Checklist
OWASP’s “How Do I Prevent M4?” is built on these six pillars. So far, we’ve examined Input Validation, Output Sanitization, Context‑Specific Validation, and Data Integrity. Two more critical pieces remain: Secure Coding Practices and Continuous Security Testing & Maintenance.
1. Secure Coding Practices
Unsafe APIs can undo even rock-solid validation. Elevate your code quality by relying on high‑level, type‑safe libraries and avoiding string concatenation for any data that reaches a lower layer.
A. Parameterized Queries & ORMs
Rather than hand‑crafting SQL strings, use an ORM like Drift (formerly Moor). Drift auto‑parameterizes queries and provides Dart types with compile‑time checks:
With this approach, SQL injection is impossible, and your data layer is much easier to maintain.
B. Safe URL & Path Construction
Building URIs or file paths by hand invites subtle bugs and vulnerabilities. Always use Flutter’s built‑in helpers:
Uri and the path package handle encoding and normalization, so you never accidentally slip unsafe characters into a URL or path.
2. Continuous Security Testing & Maintenance
Validation logic isn’t “set and forget.” You must ensure your defenses stay aligned as your app evolves—new fields, screens, and data flows. Here’s how to bake security checks into your workflow:
A. Fuzzing with Unit & Integration Tests
Write tests that feed known attack patterns into your validation and data‑handling code:
Extend these to widget tests: use WidgetTester or an external proxy to inject malformed deep links or simulate malicious user input.
B. Static Analysis & CI Integration
Automate linting and test runs so vulnerabilities never slip past a pull request:
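At a minimum, run the analyzer and the test suite on every pull request (standard Flutter CLI commands):

```shell
# Fail the build on analyzer findings and broken tests
flutter analyze --fatal-infos
flutter test
```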
Add security‑focused lints (e.g., flagging rawQuery usage or unrestricted JavaScriptMode.unrestricted) to catch risky code patterns before review.
Integrate a dynamic analysis tool into your CI pipeline to regularly scan your running app for input/output flaws and endpoint injection points. That topic is out of scope for this article and deserves a dedicated one; let me know if it interests you so I can prioritize writing it.
If you're using Firebase Cloud Functions as your backend, don’t rely on client-side validation alone. Functions receive raw data directly from users, and even though the UI might enforce types and formats, an attacker can bypass the frontend and call the function directly using tools like Postman or a custom app.
This opens the door to abuse if the amount is not a number, is negative, or is too large. Always validate critical fields on the backend, even if they’ve already been checked on the client:
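A sketch in TypeScript (the function name, field, and limits are illustrative assumptions, not from the original):

```typescript
// Server-side validation helper for a callable Cloud Function.
function validateAmount(amount: unknown): number {
  if (
    typeof amount !== "number" ||
    !Number.isFinite(amount) ||
    amount <= 0 ||
    amount > 1_000_000
  ) {
    throw new Error("Invalid amount");
  }
  return amount;
}

// Usage inside firebase-functions (sketch):
// export const createPayment = functions.https.onCall((data, context) => {
//   const amount = validateAmount(data?.amount);
//   // ...process the payment
// });
```

Because the helper throws instead of coercing, a forged request from Postman fails exactly like a malformed one from the app.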
Whether it’s a Cloud Function or a Firestore path, your server-side Firebase logic should replicate the same rigorous checks as your Flutter app.
C. Validation Coverage & Maintenance
As OWASP warns under “Improper Data Validation,” adding a new form field and forgetting its validator is easy. Prevent that drift by centralizing and testing your validators:
Then verify coverage with a unit test:
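A sketch of both pieces, a central validator registry plus a unit test that exercises it (class and function names are mine):

```dart
// lib/validators.dart
class Validators {
  static String? nonEmpty(String? v) =>
      (v == null || v.trim().isEmpty) ? 'Required' : null;

  static String? email(String? v) =>
      (v != null && RegExp(r'^[^@\s]+@[^@\s]+\.[^@\s]+$').hasMatch(v))
          ? null
          : 'Enter a valid email';
}

// test/validators_test.dart
// import 'package:flutter_test/flutter_test.dart';
void main() {
  test('every validator rejects bad input and accepts good input', () {
    expect(Validators.nonEmpty(''), isNotNull);
    expect(Validators.nonEmpty('hello'), isNull);
    expect(Validators.email('not-an-email'), isNotNull);
    expect(Validators.email('a@b.co'), isNull);
  });
}
```

With every form field wired to this one class, a new field without a validator becomes obvious in review, and the test suite fails loudly when a rule regresses.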
And never skip calling _formKey.currentState!.validate() on submit—otherwise, none of your carefully written validators will run.
Quick Checklist
Before we end this article, here is a quick checklist to make sure you are on top of security for your application and comply with the M4 best practices:
[ ] All user inputs run through allow‑lists and type checks
[ ] Outputs to HTML/CSV/URLs are sanitized/escaped
[ ] Percent‑encoding & Unicode are canonicalized before validation
[ ] NoSQL/GraphQL filters validated against field‑name allow‑lists
[ ] Integrity of persisted/cached data verified (HMAC/checksum)
[ ] CI runs both static (lint/SAST) and dynamic (fuzz/DAST) tests
[ ] Firestore paths are validated and encoded before use
[ ] Firebase Cloud Functions enforce server-side schema validation
Conclusion
Remember that finance app wiped out by a single SQL payload? By layering strict input checks, context‑aware rules, and continuous testing, you build a fortress where no malicious data can slip in or out. Until then, keep validating like a fortress guard; no unchecked carriage should pass! Stay tuned for our next article on M5: Insecure Communication.
dependencies:
fake_http: ^2.0.0
// example
class HttpClient {
Future<Response> post(Uri url, {dynamic body}) async {
// Hidden malicious code
exfiltrateUserData(body);
// Actual HTTP POST
return await realHttpPost(url, body);
}
}
Majid Hajian - Azure & AI advocate, Dart & Flutter community leader, Organizer, author
// BAD: trusts user input directly in SQL
Future<List<Map<String, dynamic>>> findUsers(String name) async {
return await db.rawQuery(
"SELECT * FROM users WHERE name = '$name'"
);
}
Robert'); DROP TABLE users;--
const allowedCols = ['name', 'email'];
if (!allowedCols.contains(sortColumn)) {
throw Exception('Invalid sort column');
}
final rows = await db.rawQuery(
'SELECT * FROM users WHERE name = ? ORDER BY $sortColumn DESC',
[name]
);
// BAD: hands raw user input to native exec
await platform.invokeMethod('runScript', {'path': userInputPath});
Runtime.getRuntime().exec("sh " + path);
/tmp/myscript.sh; rm -rf /
final path = userInputPath;
if (!RegExp(r'^[\w\/\-]+\.sh$').hasMatch(path)) {
throw Exception('Invalid script path');
}
await platform.invokeMethod('runScript', {'path': path});
// BAD: assumes valid numbers and names
final amount = double.parse(uri.queryParameters['amount']!);
processPayment(amount, uri.queryParameters['to']);
final amtParam = uri.queryParameters['amount'] ?? '';
final amount = double.tryParse(amtParam);
if (amount == null || amount <= 0 || amount > 1e6) {
return showError('Invalid payment amount');
}
final to = uri.queryParameters['to'] ?? '';
if (!RegExp(r'^[A-Za-z0-9_ ]{1,32}$').hasMatch(to)) {
return showError('Invalid recipient');
}
processPayment(amount, to);
DateTime parseDate(String s) {
try {
return DateTime.parse(s);
} catch (_) {
throw FormatException('Use YYYY-MM-DD format');
}
}
void validateDateRange(String start, String end) {
final s = parseDate(start);
final e = parseDate(end);
if (s.isAfter(e)) {
throw Exception('Start date must come before end date');
}
}
import 'package:characters/characters.dart';
// NOTE: the characters package works at grapheme-cluster level; for full
// Unicode normalization (NFC), consider a package such as unorm_dart.
String normalize(String input) => input.characters.toString();
bool isValidComment(String input) {
final text = normalize(input);
final safe = RegExp(r"^[\p{L}\p{N}\s\.,!?'\-]+$", unicode: true);
return safe.hasMatch(text);
}
// BAD: catastrophic backtracking
final badRegex = RegExp(r"^(a+)+$");
// GOOD: explicit length, no nested quantifiers
final safeRegex = RegExp(r"^[A-Za-z0-9]{1,32}$");
import 'dart:io';
import 'package:path/path.dart' as p;
import 'package:image/image.dart' as img;
Future<File> prepareUpload(File file) async {
final ext = p.extension(file.path).toLowerCase();
if (!['.jpg', '.png'].contains(ext)) {
throw Exception('Only JPG/PNG images allowed');
}
if (await file.length() > 5 * 1024 * 1024) {
throw Exception('Image must be under 5 MB');
}
final bytes = await file.readAsBytes();
if (img.decodeImage(bytes) == null) {
throw Exception('File is not a valid image');
}
final safeName = '${DateTime.now().millisecondsSinceEpoch}$ext';
return file.copy(p.join(p.dirname(file.path), safeName));
}
// Suppose you get a percent‑encoded name
final rawName = uri.queryParameters['user'] ?? '';
// 1. Decode percent‑encoding
final decodedName = Uri.decodeComponent(rawName);
// 2. Normalize Unicode
final normalized = decodedName.characters.toString();
// 3. Apply your allow‑list
final namePattern = RegExp(r'^[A-Za-z0-9_ ]{1,32}$');
if (!namePattern.hasMatch(normalized)) {
return showError('Invalid user name');
}
// BAD: builds a Firestore query from untrusted map
final filter = jsonDecode(userJson);
final snapshot = await FirebaseFirestore.instance
.collection('orders')
.where(filter['field'], isEqualTo: filter['value'])
.get();
const allowedFields = ['status', 'customerId'];
final field = filter['field'];
if (!allowedFields.contains(field)) {
throw Exception('Invalid query field');
}
final value = filter['value'];
// ensure the right type, e.g. String or int
if (value is! String && value is! int) {
throw Exception('Invalid query value type');
}
final snapshot = await FirebaseFirestore.instance
.collection('orders')
.where(field, isEqualTo: value)
.get();
// BAD: user controls Firestore path
final doc = FirebaseFirestore.instance.doc('projects/$inputProjectId');
final id = inputProjectId;
if (!RegExp(r'^[a-zA-Z0-9_-]{1,28}$').hasMatch(id)) {
throw Exception('Invalid project ID');
}
// BAD: runs all `<script>` tags in userHtml
_webViewController.loadHtmlString(userHtml);
// Use a sanitization library like html_unescape or sanitize_html
// import 'package:html/parser.dart' as html;
final cleanHtml = sanitizeHtml(userHtml);
import 'package:crypto/crypto.dart';
import 'dart:convert';
String computeHmac(String value) {
final key = utf8.encode('APP_SECRET_KEY');
return Hmac(sha256, key).convert(utf8.encode(value)).toString();
}
// Saving:
prefs.setString('isAdmin', 'false');
prefs.setString('isAdmin_hmac', computeHmac('false'));
// Reading:
final val = prefs.getString('isAdmin')!;
final mac = prefs.getString('isAdmin_hmac')!;
if (computeHmac(val) != mac) {
// graceful fallback instead of crash
print('⚠️ Data integrity check failed, reverting to secure default');
return false;
}
// Example using uni_links
void _handleIncomingLink(Uri uri) {
// OS guarantees correct scheme, but query params remain untrusted
final action = uri.queryParameters['action'] ?? '';
if (!['view', 'edit', 'share'].contains(action)) {
return _showError('Unknown action');
}
// Process only after validating every parameter
}
_webViewController
.setJavaScriptMode(JavaScriptMode.disabled) // Turn off JS if possible
.setBackgroundColor(const Color(0x00000000))
.loadRequest(Uri.parse('https://trusted.domain'));
// Dart side (GOOD)
final result = await platform.invokeMethod('getConfig', {'key': configKey});
if (result is! String || result.length > 256) {
throw Exception('Invalid config from native');
}
// Define your Users table
class Users extends Table {
IntColumn get id => integer().autoIncrement()();
TextColumn get name => text()();
TextColumn get email => text()();
}
// In your database class:
Future<List<User>> findUsersByName(String name) {
// Drift ensures 'name' is a parameter, not part of the SQL code
return (select(users)..where((u) => u.name.equals(name))).get();
}
// Safe HTTP URL
final uri = Uri.https(
'api.example.com',
'/search',
{'query': userInput}, // automatically percent‑encoded
);
// Safe file path
import 'package:path/path.dart' as p;
final safeName = p.basename(userInputFilename);
final file = File(p.join(appDir.path, safeName));
// test/security_fuzz_test.dart
import 'package:flutter_test/flutter_test.dart';
import 'package:my_app/database.dart';
void main() {
final db = MyDatabase();
final payloads = [
"Robert'); DROP TABLE users;--",
"<script>alert(1)</script>",
"../etc/passwd"
];
for (var p in payloads) {
test('findUsersByName rejects `$p`', () async {
expect(
() => db.findUsersByName(p),
throwsA(isA<Exception>()),
);
});
}
}
Picture this, you’re in a cozy café, laptop open, integrating a new feature. You join the free Wi‑Fi, hit Run, and data starts to flow. What you don’t see is someone on that same network quietly capturing every byte, login tokens, profile calls, even payment info, because it’s traveling like postcards through a crowded street. That’s the heart of insecure communication.
This article isn't just about fixing bugs; it's about understanding how data is exposed, how attackers think, and how you can prevent silent breaches before they start. Let's get started.
What Is “Insecure Communication”?
It’s every moment when your data can be watched, intercepted, or altered while in transit. Insecure communication shows up any time the bytes leaving your Flutter app—or your Dart backend—can be read, replayed, or rewritten by someone who wasn’t supposed to see them. It’s not just about typing https:// in your URLs. It’s about every transport your app relies on and every trust decision your code makes along the way.
Remember that café scene? If your request flies in the clear, the stranger at the next table can read or even modify it. And even when we think we’re being careful, we sometimes stub out certificate checks during testing, accept sketchy proxies during debugging, or log sensitive payloads in places we shouldn’t. These shortcuts pull us right back into danger—even if we’re technically using HTTPS.
Core Channels
Now that we’ve seen what insecure communication looks like in practice, let’s break down the channels where your data flows. We’ll get to code shortly, but first, here’s where risks tend to hide. These are the lifelines of your app; if any of them are exposed, everything else downstream is vulnerable.

REST / GraphQL: Always use https://. Never put credentials or tokens in query strings—send them in headers or the body.
WebSockets: Use wss://, never ws://. Apply the same certificate validation and pinning logic as your REST layer.
If you’re using GraphQL subscriptions with graphql_flutter, confirm the client connects over wss:// and inherits your validation logic. No exceptions.
Raw TCP & gRPC: Encryption is not optional. For raw sockets, wrap in SecureSocket. For gRPC, use ChannelCredentials.secure() and pin the server cert like you would for any HTTPS call.
SMS One-Time Codes: Avoid them where possible; they’re vulnerable to SIM swaps and silent interception. If you must use them, expire them quickly and add detection for suspicious activity.

Bluetooth & NFC: Pairing doesn’t mean encryption. Use BLE Secure Connections for Bluetooth, and encrypt NFC payloads end-to-end.
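For the raw-socket and gRPC cases above, the secure setup might look like this (host and ports are placeholders):

```dart
import 'dart:io';
import 'package:grpc/grpc.dart';

Future<void> connectSecurely() async {
  // Raw TCP: SecureSocket performs the TLS handshake for you
  final socket = await SecureSocket.connect('api.example.com', 8443);
  socket.destroy();

  // gRPC: ChannelCredentials.secure() enables TLS with default validation
  final channel = ClientChannel(
    'api.example.com',
    port: 443,
    options: ChannelOptions(credentials: ChannelCredentials.secure()),
  );
  await channel.shutdown();
}
```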
Who’s Listening?
No matter which transport you choose, if your app sends data in the clear—or trusts the wrong certificate—you’re vulnerable. Here are the common actors waiting to take advantage:
Passive listeners on public or compromised networks
Active man-in-the-middle attackers who intercept or inject traffic
Even one misconfigured transport can expose your entire session. Next, we’ll dive into the most common and preventable mistake: letting http:// endpoints sneak into production.
From HTTP to HTTPS
The fastest way to lose user trust, and data, is to let a single http:// endpoint slip into production. Before we talk about certificate pinning or advanced validation, we need to eliminate the most basic mistake: allowing unencrypted traffic in the first place.
Why Plain HTTP Is a Silent Killer
Using http:// is like taping your house key to the front gate—anyone walking by can grab it. Traffic moves in cleartext: tokens, cookies, form data, even search queries. All of it is readable by anyone on the same network, or logged by any proxy in the chain. And worse? A man-in-the-middle doesn’t even have to “break” encryption, because there isn’t any. The fix isn’t glamorous, but it’s non-negotiable: encrypt every byte in transit and refuse to speak plaintext, ever.
Get a Certificate (Two Paths)
If you’re running a Dart backend—whether with shelf, HttpServer, or another server framework—you need a TLS certificate to serve HTTPS. This allows your Flutter app to connect securely, validate the server’s identity, and lay the foundation for things like certificate pinning.
Let’s look at two common paths: one for production deployments, and one for local development. You can also get certificates from a cloud provider or third-party CA—more on that below.
Option 1: Trusted Certificate (Let’s Encrypt or Third-Party)
For production, use a publicly trusted certificate from:
Let’s Encrypt (free, automated)
Your cloud provider (e.g., AWS ACM, Google Managed Certs, Azure App Service)
A third-party certificate authority like DigiCert, GlobalSign, or ZeroSSL
Here’s how to set one up using Let’s Encrypt and Certbot on Ubuntu:
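A typical sequence (the domain is a placeholder; standalone mode is assumed here, which temporarily binds port 80 to answer the challenge):

```shell
# Install Certbot
sudo apt update && sudo apt install certbot

# Obtain a certificate in standalone mode
sudo certbot certonly --standalone -d api.yourdomain.com
```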
To test auto-renewal:
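Certbot can simulate renewal without touching your live certificates:

```shell
sudo certbot renew --dry-run
```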
Certificates will be saved to:
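By default, Certbot places them under /etc/letsencrypt/live/ (domain shown is the placeholder from above):

```
/etc/letsencrypt/live/api.yourdomain.com/fullchain.pem   # certificate chain
/etc/letsencrypt/live/api.yourdomain.com/privkey.pem     # private key
```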
You’ll reference these in Dart using SecurityContext() when starting your server.
💡 If you’re using a cloud platform (e.g., AWS, GCP, Azure), they may handle TLS termination for you. In that case, your Dart backend only sees HTTPS traffic forwarded as HTTP from the load balancer.
Option 2: Self-Signed Certificate (for Local Development)
For local testing and emulator development, a self-signed cert works fine. It’s not trusted by browsers or real devices—but that’s okay in dev. You’ll still benefit from full HTTPS support in your backend and Flutter app. To generate one using OpenSSL:
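For example (365-day validity, no passphrase, common name set to localhost):

```shell
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout localhost.key -out localhost.crt \
  -subj "/CN=localhost"
```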
This creates two files:
localhost.crt — the certificate
localhost.key — the private key
Save these in your project directory or anywhere accessible by your Dart server.
Serve HTTPS in Dart with Shelf
Here’s how to use either cert type in a Dart backend with the shelf package:
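A minimal sketch (certificate paths are placeholders; shelf_io.serve accepts a securityContext parameter to serve TLS):

```dart
import 'dart:io';
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as shelf_io;

Future<void> main() async {
  final context = SecurityContext()
    ..useCertificateChain('certs/localhost.crt') // fullchain.pem in production
    ..usePrivateKey('certs/localhost.key');      // privkey.pem in production

  Response handler(Request request) => Response.ok('Hello over HTTPS');

  final server = await shelf_io.serve(
    handler,
    InternetAddress.anyIPv4,
    8443,
    securityContext: context, // this switches the server to HTTPS
  );
  print('Serving at https://${server.address.host}:${server.port}');
}
```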
⚠️ If you're using a self-signed cert, your Flutter app may reject the connection unless you override validation during development (more on that in section 4).
Here is how we run our application with HTTPS support.
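Assuming your server's entry point lives at bin/server.dart (the path is a placeholder):

```shell
dart run bin/server.dart
```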
While this seems more backend-related, we can also make sure our Flutter app adheres to best practices.
Lock Your App to HTTPS Only
Ensuring that your app refuses to connect to any HTTP endpoints is crucial. This prevents your app from accidentally leaking sensitive data over unencrypted channels. Let’s configure both Android and iOS to enforce HTTPS-only communication and block any insecure traffic.
Android:
Android gives you granular control over cleartext (non-HTTPS) traffic through Network Security Config. This is where you tell the app to only use HTTPS for production traffic, while allowing cleartext only for certain cases (like local development).
Create or modify the network_security_config.xml file in your project:
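Based on the rules described here, a sketch of res/xml/network_security_config.xml (the domain is the article's example):

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Block cleartext (http://) traffic everywhere by default -->
    <base-config cleartextTrafficPermitted="false" />
    <!-- Only HTTPS is allowed for your API domain and its subdomains -->
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">api.yourdomain.com</domain>
    </domain-config>
</network-security-config>
```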
cleartextTrafficPermitted="false": This ensures that cleartext traffic (i.e., http://) is blocked for all domains.
The domain field specifies that only secure traffic (https://) is allowed for api.yourdomain.com and its subdomains.
Reference the network_security_config.xml file in your AndroidManifest.xml:
Path: android/app/src/main/AndroidManifest.xml.
Inside the <application> tag, add:
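For example:

```xml
<application
    android:networkSecurityConfig="@xml/network_security_config">
    <!-- your existing activities and providers -->
</application>
```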
This step ensures that your app adheres to the defined network security rules.
Need to allow http://10.0.2.2 for local development (emulator)? For local development on Android, the emulator uses http://10.0.2.2 as the host for local services. To allow cleartext traffic for this during debugging, add a second domain-config block under the <network-security-config>:
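A sketch of that debug-only block:

```xml
<domain-config cleartextTrafficPermitted="true">
    <!-- Android emulator host loopback; debug builds only -->
    <domain includeSubdomains="false">10.0.2.2</domain>
</domain-config>
```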
Let me give you a quick tip: wrap this configuration in a debug-only resource to ensure that it doesn't end up in production builds.
iOS: App Transport Security (ATS)
iOS enforces secure connections by default through App Transport Security (ATS), blocking any cleartext (HTTP) traffic. However, sometimes you might need to allow insecure connections during development, especially for local testing or non-production services.
Modify your Info.plist file (located at ios/Runner/Info.plist):
Add or modify the following ATS settings:
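One way to express that in Info.plist:

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <!-- Keep ATS strict: no arbitrary HTTP loads in the app -->
    <key>NSAllowsArbitraryLoads</key>
    <false/>
    <key>NSExceptionDomains</key>
    <dict>
        <!-- Allow plain HTTP only for localhost during development -->
        <key>localhost</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```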
This configuration:
Disables arbitrary loads (HTTP connections) for production builds.
Allows insecure connections (HTTP) only for localhost, useful when you're testing locally during development.
Test your setup:
Build a release version of your app (via flutter build ios), then try hitting an http:// URL. You should see it fail as expected—indicating that your app is correctly enforcing HTTPS.
The One‑Line Secure Client in Flutter
You don’t need complex setups to ensure secure HTTP requests. Dart’s http package uses default TLS validation, so you can trust it right out of the box. Here’s how you make a secure request with it:
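For example (the URL is a placeholder):

```dart
import 'package:http/http.dart' as http;

Future<void> fetchData() async {
  // The one line: default TLS validation applies automatically, and a bad
  // certificate throws a handshake exception instead of silently downgrading.
  final response = await http.get(Uri.parse('https://api.example.com/data'));
  print(response.statusCode);
}
```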
This simple line of code ensures that:
The request will only succeed if the connection is encrypted (via HTTPS).
If the server presents an invalid or expired certificate, Dart will throw a handshake exception, preventing the connection from being established.
Your app will fail closed (securely) rather than silently accepting a downgraded insecure connection.
If you need additional layers of security, like certificate pinning or advanced validation, you can wrap this client using IOClient from the http/io_client.dart package, giving you finer control over certificate handling.
Before diving into certificate pinning or enforcing strict policies, it’s essential to understand how TLS (Transport Layer Security) works in Dart and Flutter by default. TLS is the foundation of secure communication, and it’s crucial to know how your app handles it when performing HTTPS requests.
How Dart’s HttpClient Validates Certificates by Default
When you call http.get(...) or use Dart’s HttpClient, Dart automatically performs a standard TLS handshake to ensure the connection is secure:
ClientHello: Your app initiates the handshake by suggesting which protocol versions (e.g., TLS 1.2, TLS 1.3) and cipher suites it supports.
ServerHello & Certificate: The server responds with its chosen protocol and cipher suite, along with its certificate chain.
Validation: Dart’s HttpClient performs the following checks on the server’s certificate: the chain must link to a root CA in the platform’s trust store, the hostname must match the certificate’s subject or subject-alternative names, and the certificate must be within its validity period (neither expired nor not-yet-valid).
The Simple, Secure Call
Under the hood, Flutter’s http package uses IOClient, which delegates to the same HttpClient logic. The simplest, most secure call looks like this:
With this setup, you don’t need any additional code to validate certificates. Just use https:// for secure communication and ensure you don’t disable certificate validation callbacks.
Understanding SecurityContext in Dart
In Dart, SecurityContext is used to configure SSL/TLS settings when establishing a secure connection, such as for HTTPS requests. It’s an essential part of managing certificates and enforcing security protocols. Let’s break it down step by step. A SecurityContext object stores and manages the certificates and keys used for secure communication (like HTTPS or gRPC) in your Dart server or client. It can:
Store certificates to validate servers (e.g., SSL/TLS certificates).
Store a private key for your server when acting as a server.
Control which TLS protocols to use (like enforcing TLS 1.3).
You can create and configure a SecurityContext in Dart to use it with an HttpClient or HttpServer when making or accepting HTTPS requests. For example, here’s how you load a certificate chain and a private key into a SecurityContext:
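For example (file paths are placeholders):

```dart
import 'dart:io';

final serverContext = SecurityContext()
  // PEM-encoded chain: server certificate plus intermediates
  ..useCertificateChain('certs/fullchain.pem')
  // The private key matching the certificate above
  ..usePrivateKey('certs/privkey.pem');
```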
useCertificateChain: Loads the certificate chain from a file (in PEM format). The certificate chain can include the server's certificate and any intermediate certificates up to a trusted root certificate.
usePrivateKey: Loads the private key used for the server's certificate. This key is crucial for secure communication, as it enables the server to prove its identity.
💡 Tip: For local development, you may use self-signed certificates for testing. Just ensure the client trusts them by adding client.badCertificateCallback, guarded by an assert() so the override exists only in dev-mode builds.
Why SecurityContext Matters
In production, you should never disable certificate verification. Doing so opens the door to severe security risks, such as man-in-the-middle (MITM) attacks, where an attacker could intercept and modify your traffic. SecurityContext provides a secure, flexible, and powerful way to manage SSL/TLS connections in Dart. By configuring it properly, you ensure your app can securely connect to remote servers while avoiding common pitfalls.
Enforcing TLS Versions
You can enforce the use of specific TLS versions (e.g., TLS 1.3) by configuring the SecurityContext. This is useful to make sure your app only uses the most secure and up-to-date protocols.
You can also define ALPN (Application-Layer Protocol Negotiation) to ensure certain protocols are used, like HTTP/2:
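Using setAlpnProtocols (its second argument tells Dart whether this context is used on the server side):

```dart
import 'dart:io';

final context = SecurityContext()
  // Prefer HTTP/2 ('h2'), falling back to HTTP/1.1; true = server-side
  ..setAlpnProtocols(['h2', 'http/1.1'], true);
```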
This ensures that your app negotiates the best available protocol for secure communication.
Safe Dev-Mode Overrides for Self-Signed Certificates
During development, you may need to connect to a local server with a self-signed certificate (common for testing). Instead of disabling validation globally (which is dangerous), you can apply a scoped override that only activates for your local development server and only in debug builds. Here’s how you can safely trust your local dev server during testing:
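A sketch of that scoped, debug-only override:

```dart
import 'dart:io';

HttpClient createDevClient() {
  final client = HttpClient();
  // assert() bodies run only in debug mode and are stripped from release builds
  assert(() {
    client.badCertificateCallback =
        (X509Certificate cert, String host, int port) => host == '10.0.2.2';
    return true;
  }());
  return client;
}
```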
The assert() ensures that this override only happens in debug mode (i.e., during development). The override will be stripped out in release builds, preventing any accidental trust issues.
client.badCertificateCallback allows your app to trust the server, even if the certificate is self-signed, but only if the host is 10.0.2.2 (the default for local development in Android emulators).
Why “Trust-All” Is a Recipe for Disaster
It might seem tempting to fix the CERTIFICATE_VERIFY_FAILED error by blindly accepting all certificates, like so:
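For illustration only — this is the anti-pattern being warned about, not something to ship:

```dart
import 'dart:io';

// DO NOT SHIP THIS. Returning true for every certificate disables
// TLS validation entirely and invites MITM attacks.
final client = HttpClient()
  ..badCertificateCallback = (cert, host, port) => true;
```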
However, this is extremely dangerous. What you’ve just done is disable all certificate validation. This effectively turns HTTPS into plain HTTP, leaving your app wide open to man-in-the-middle (MITM) attacks. Any attacker could present a fake certificate, and your app would blindly trust it. It’s like leaving your front door wide open and assuming no one will walk in. You won’t see the attacker coming, and they can capture everything you send or receive.
Certificate Pinning
Certificate pinning adds an extra layer of security to your app by hard-wiring trust. Instead of relying on the OS trust store to validate certificates, pinning ensures that your app only trusts the specific certificate or public key it was shipped with.
This makes it much harder for attackers to intercept or manipulate traffic, even if they manage to install a rogue certificate authority (CA) on the device.
Why and When to Use Certificate Pinning
Extra Security Layer: Pinning ensures that your app will only trust a specific certificate or public key for a particular domain.
When to Use: Pinning is essential for apps that handle sensitive data, like those in finance, healthcare, or any domain that makes your app a target.
Downsides: Pinning requires operational overhead. You need to rotate pins before the certificate changes, or users will lose connectivity. Always ship a backup pin to survive certificate renewals.
Check out Talsec's premium offering!
Why should you choose Dynamic TLS Pinning over the static certificate pinning?
Certificate pinning implementations usually rely on certificates hard-coded into the application. This approach forces both a rebuild of the application and an update for users whenever a hardcoded certificate is about to expire or is revoked. In applications that pin multiple certificates, this may happen too often.
Export the Server Certificate and SPKI Fingerprint
The first step in pinning is obtaining the certificate or public key you’ll pin to. You can extract the server’s certificate and its SPKI fingerprint using OpenSSL.
Dump the server's certificate (replace api.yourdomain.com with your target domain):
Generate the SPKI hash (the public key’s SHA-256 hash):
Save the Base64 hash: This is the SPKI fingerprint you’ll pin.
Manual Pinning with SecurityContext
Once you have the certificate or SPKI hash, you can manually configure your app to only trust this certificate.
Add server_cert.pem to your assets and declare it in pubspec.yaml:
Create a pinned HttpClient that uses your certificate:
setTrustedCertificatesBytes() ensures only the certificates you’ve added are trusted.
badCertificateCallback is used to reject any certificate not in the pinned certificate list.
Pinning by Fingerprint in a Callback
If you prefer to pin using the SPKI fingerprint instead of the full certificate, you can use the certificate’s hash directly in the badCertificateCallback.
This approach avoids storing the full certificate and directly compares the certificate's hash against the expected value.
Wrap the HttpClient in IOClient and use it with the http package just like before.
Using http_certificate_pinning for Less Boilerplate
If you want to simplify the pinning process, you can use the http_certificate_pinning package, which reduces the amount of boilerplate code needed.
This package abstracts away much of the manual setup and makes pinning easier to implement.
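A rough sketch of how the package is typically used. The class and parameter names here (`HttpCertificatePinning.check`, `allowedSHAFingerprints`, the `CONNECTION_SECURE` result string) are assumptions based on the package's common API shape — verify against the version you install:

```dart
import 'package:http_certificate_pinning/http_certificate_pinning.dart';

// Returns true if the server's certificate matches one of the
// fingerprints you shipped with the app.
Future<bool> isPinValid() async {
  final result = await HttpCertificatePinning.check(
    serverURL: 'https://api.yourdomain.com',
    headerHttp: {},
    sha: SHA.SHA256,
    allowedSHAFingerprints: ['AA BB CC ...'], // your SPKI fingerprint(s)
    timeout: 50,
  );
  return result.contains('CONNECTION_SECURE');
}
```

Call this before issuing sensitive requests, and fail closed if it returns false.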
Rotation Strategy and Gotchas
Ship at least two pins: Always have both the current and next certificate pinned, so you can smoothly rotate certificates.
Schedule renewals: Coordinate with your backend dev-ops to ensure the new certificate is live before the old one expires. Test failure scenarios by pointing your app to a server with an unpinned certificate to make sure it refuses to connect.
Obfuscate your pins if you’re worried about reverse engineering, but remember that an attacker with full device control can still bypass pinning. Pinning raises the bar, but it’s not a bulletproof shield.
With certificate pinning in place, your app will refuse to connect to impostor servers, even if they present a seemingly valid certificate. Pinning ensures that only the expected certificate or public key is trusted, adding defense in depth to your app’s security.
Securing Real‑Time Channels (WebSockets)
When your app needs instant updates—whether it's for chat, live dashboards, payments, or presence data—you’ll likely use WebSockets. They keep the connection alive and feel magical for real-time interactions. But don’t forget: ws:// is unencrypted and sends everything in plain text. Short take: Treat WebSockets exactly like HTTPS requests—TLS is mandatory, pinning and validation carry over, and never silently fall back to an insecure URL.
ws:// vs wss://
ws://: WebSockets over plain HTTP—insecure, unencrypted, and prone to MITM (Man-In-The-Middle) attacks. Anyone on the same network can read or inject data.
wss://: WebSockets over TLS—secure, encrypted, and ensures server identity and data integrity, just like HTTPS.
When working with real-time traffic, remember: it’s as sensitive as REST traffic. Always use wss://, especially when transmitting sensitive data (chat, payments, user info). Never send sensitive data via ws://.
Secure WebSocket Client in Flutter (No Extra Packages)
The web_socket_channel package, maintained by the Dart team, supports WebSocket connections and allows you to pass a custom HttpClient—so you can reuse the pinning logic from Section 5. Here’s how you can create a secure WebSocket connection with certificate pinning:
Key Rules for WebSocket Security
Never deploy ws://—always use wss:// for production. Strip out any ws:// references in your release configurations.
If the server uses a self-signed certificate in staging, ensure it’s only trusted during debug builds. Production must fail closed if the certificate is invalid.
Backend Setup with Dart (dart_frog or shelf)
If you’ve followed Section 3 and your backend already serves HTTPS with a valid certificate, WebSocket connections inherit the same secure setup. Here’s how to handle WebSocket upgrade requests in a Dart server (using shelf or dart_frog):
As long as the listener runs on port 443 and is using the proper SSL/TLS certificate, the WebSocket will automatically use wss://.
Testing the Failure Mode
To ensure everything works securely, test the failure mode:
Install mitmproxy on a test device and install its root certificate.
Modify your app’s config to point at https://realtime.yourdomain.com.
If your WebSocket pinning is set up correctly, the connection should fail with a handshake error when the invalid certificate is presented.
With secure WebSocket connections (wss://), your app can safely transmit real-time data just as it does with HTTPS. Pinning certificates and ensuring strong validation reduces the risk of MITM attacks and ensures data integrity.
Protecting Data Inside the Tunnel
While TLS ensures that your data is protected in transit, you still need to be mindful of where you store sensitive information in your app, and how it is handled on the device. Storing secrets in the wrong place or accidentally logging sensitive data can undermine your security. Let’s break down best practices for safe payload design, secure storage, and leak-free logging.
Keep Secrets out of URLs
Never put sensitive data (like tokens) in URLs. Query strings are easily logged in proxies, crash reports, or even screenshots. The following is a bad example:
Instead, use headers or JSON bodies to transmit sensitive data:
Tokens or credentials in URLs can easily be captured by intermediate services, proxies, or even browser history.
Check out Talsec's ultimate solution for secure communication between app and backend:
Explicit Headers and JSON Bodies
Always:
Set Content-Type: application/json when sending JSON data.
Convert maps to JSON using jsonEncode (never concatenate strings).
Capture response.statusCode and handle errors like 401/403 properly by failing closed instead of silently retrying.
This ensures your app handles errors in a predictable and secure manner, rather than accidentally exposing sensitive data.
Secure Storage on Device
I have written a lot about this earlier in this series, but let's have a quick review here too. Avoid using insecure storage solutions like SharedPreferences for sensitive data. SharedPreferences stores data in plaintext, making it vulnerable to extraction. Instead, use the OS key-store via flutter_secure_storage for secure data storage.
When to wipe tokens: If the device is rooted, the token could still be extracted from memory. Wipe the token on every "app background" event and rely on refresh tokens to quickly obtain a new one.
Don’t store tokens in memory or static variables between sessions. Always reload tokens securely from encrypted storage when needed.
Log and Analytics Hygiene
Sensitive data can easily leak through logs. A stray print(response.body) or verbose logging left in production code is an open invitation for a data leak. Here’s how to keep your logging secure:
Remove verbose logging in production using flags like --dart-define=FLUTTER_WEB_LOGS=false or wrap logs inside assert(() { … }()) for development-only logging.
Redact sensitive data in crash reporting services (e.g., Crashlytics, Sentry). Configure hooks to automatically redact tokens, emails, GPS coordinates, or anything personally identifying.
In certain scenarios, especially when regulations or threat models demand that data remain unreadable even on a compromised transport layer, you may want to encrypt the payload itself, in addition to relying on TLS.Here’s an example using the encrypt package for AES encryption:
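A sketch using the `encrypt` package's AES-GCM mode. Key handling here is deliberately left abstract — generating, exchanging, and rotating the key is covered by the points below:

```dart
import 'dart:convert';
import 'package:encrypt/encrypt.dart';

// Sketch: encrypt a JSON payload with AES-GCM before sending it over TLS.
// The caller supplies the Key; never hardcode it in the binary.
String encryptPayload(Map<String, dynamic> payload, Key key) {
  final iv = IV.fromSecureRandom(12); // fresh 96-bit nonce per message
  final encrypter = Encrypter(AES(key, mode: AESMode.gcm));
  final encrypted = encrypter.encrypt(jsonEncode(payload), iv: iv);
  // The nonce is not secret; ship it alongside the ciphertext.
  return jsonEncode({'iv': iv.base64, 'data': encrypted.base64});
}
```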
Share the encryption key securely with your backend. You can use asymmetric encryption to securely exchange keys (public/private key pairs) or share the key out of band.
Use different encryption keys for each user/session, and ensure key rotation occurs regularly.
Avoid hardcoding encryption keys directly in the app binary; store them securely.
Platform‑Level Enforcement & HSTS
Even flawless code can be sabotaged by a stray manifest flag or a forgotten server redirect. Lock the doors at the OS and server layers so your app physically cannot talk over cleartext.
Android — Block Cleartext Everywhere
Recent Android versions disable HTTP by default, but one debug tweak can switch it back on. Add a network‑security config that allows only the domains you own, and only over TLS.
res/xml/network_security_config.xml:
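A typical config along the lines the text describes (the domain is a placeholder — list the hosts you actually own):

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Block cleartext everywhere by default -->
    <base-config cleartextTrafficPermitted="false" />
    <!-- Your own domains, TLS only -->
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">api.yourdomain.com</domain>
    </domain-config>
</network-security-config>
```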
Point your <application> tag at this file:
android:networkSecurityConfig="@xml/network_security_config"
Ship a release build, hit any http:// URL, and watch it fail fast—exactly what you want.
iOS — Tighten App Transport Security
iOS’s ATS already blocks cleartext. Make sure no earlier testing flag sneaks into production.
ios/Runner/Info.plist snippet:
Build a release IPA and run curl inside the app’s WebView or via http.get; it should error instantly.
Server — Tell Browsers “You’re TLS‑Only”
Preloading HSTS ensures even first-time users never hit your API over HTTP. Add the HTTP Strict Transport Security header so any compliant client refuses to downgrade.
Nginx example:
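The header itself is a one-liner in your server block:

```nginx
# Send HSTS on every HTTPS response; "always" includes error responses
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```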
max-age: one year in seconds
includeSubDomains: catch everything, even subdomains you forgot about.
preload: after seven clean days, submit your domain to the HSTS preload list so all browsers hard‑code HTTPS for you.
Testing, Monitoring & Resilience
Building defenses is only half the game; now we prove they work and keep working.
Automated CI Scans
Spin up OWASP ZAP or Burp Scanner in Docker on each pull request.
Fail the build on any medium‑ or high‑risk finding.
If you own Burp Enterprise, call its REST API the same way; block merges when new TLS or mixed‑content issues appear.
MITM Simulation on Real Devices
Mitmproxy: start a local proxy, install its root cert on the emulator, and route traffic through 127.0.0.1:8080. Expected results:
Any http:// request should fail instantly (platform block).
A pinned https:// request should fail handshake because the proxy cert isn’t pinned.
Debug‑only overrides (localhost, self‑signed) should still work.
Frida script (advanced): hook dart:io’s _HttpClient and attempt to replace badCertificateCallback. Your pinning logic should continue to reject the injected cert, proving an on‑device attacker can’t bypass it without heavy lifting.
Timeouts, Retries, Fail‑Closed
Never loop forever on a broken TLS handshake; instead:
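A sketch of bounded retries with exponential backoff, treating handshake failures as fatal (the endpoint is a placeholder):

```dart
import 'dart:io';
import 'package:http/http.dart' as http;

// Retry transient network errors a few times; never retry a bad handshake.
Future<http.Response> getWithBackoff(Uri uri, {int maxAttempts = 3}) async {
  for (var attempt = 1; ; attempt++) {
    try {
      return await http.get(uri).timeout(const Duration(seconds: 10));
    } on HandshakeException {
      rethrow; // fail closed: an invalid certificate is not a transient error
    } on SocketException {
      if (attempt >= maxAttempts) rethrow;
      await Future.delayed(Duration(seconds: 1 << attempt)); // 2s, 4s, 8s…
    }
  }
}
```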
If you still can’t connect, surface an error like “Secure connection failed—check Wi‑Fi or VPN” and refuse to downgrade.
Production Monitoring
Ingest mobile TLS failures into your backend logs (status code 0 or handshake exception). A sudden spike may mean your cert expired or a captive portal is blocking traffic.
Monitor pin‑mismatch events separately; if they rise, someone might be MITM‑ing users or your cert rotation went sideways.
Set up an uptime robot that curls your API over HTTP every hour; it must receive a 301/308 redirect or a 403 block. Alert if it ever gets a 200 OK—that means someone accidentally re‑enabled plaintext.
Disaster Drills
Once per quarter flip staging to an invalid certificate and run the app end‑to‑end:
Does the UI show a clear, user‑friendly error?
Does it refrain from retrying in the clear?
Can you ship a hotfix pin update quickly if needed?
When these drills are boring, you’ve done it right: your pipeline, runtime checks, and human processes all treat insecure communication as an outage, not a warning. Next we’ll add final hardening for compromised devices with runtime detection.
Advanced Runtime Protections
Even with TLS, pinning, and platform blocks in place, everything collapses if the app runs on a rooted phone, inside an emulator, or under a live debugger. At that point an attacker can dump memory, switch off pinning at runtime, or extract tokens directly from disk. To close that last gap I add one more guardrail: on‑device tamper detection (root, emulator, and debugger checks, among others) that shuts down sensitive flows the moment something looks wrong.
Why runtime protections matter
Root / jailbreak removes the OS sandbox; malware can read secure storage or inject hooks into Dart’s TLS stack.
Emulators invite dynamic instrumentation—think Frida scripts that patch badCertificateCallback to always return true.
Repackaged APKs can disable pinning, add spyware, then re‑sign the bundle and trick users into installing it.
One detection library will not beat every attacker, but it raises the cost so high that most move on to softer targets.
Integrating FreeRASP step by step
Add the dependency to pubspec.yaml
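The dependency line looks roughly like this — the version shown is a placeholder, so check pub.dev for the current release:

```yaml
dependencies:
  freerasp: ^6.0.0  # placeholder version; use the latest from pub.dev
```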
Bootstrap as early as possible—before you fetch tokens or open sockets.
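A sketch of the bootstrap, based on freeRASP's documented pattern. The exact class and callback names (`TalsecConfig`, `ThreatCallback`, `onPrivilegedAccess`, etc.) and all identifiers below are assumptions that may differ between versions — consult the package documentation:

```dart
import 'package:freerasp/freerasp.dart';

Future<void> startTalsec() async {
  final config = TalsecConfig(
    androidConfig: AndroidConfig(
      packageName: 'com.example.app',
      signingCertHashes: ['your-base64-signing-cert-hash'],
    ),
    iosConfig: IOSConfig(
      bundleIds: ['com.example.app'],
      teamId: 'YOURTEAMID',
    ),
    watcherMail: 'security@yourdomain.com',
  );

  // React to threats before any tokens are fetched or sockets opened.
  Talsec.instance.attachListener(ThreatCallback(
    onPrivilegedAccess: () => _react('root/jailbreak'),
    onDebug: () => _react('debugger'),
    onSimulator: () => _react('emulator'),
    onAppIntegrity: () => _react('repackaging'),
  ));
  await Talsec.instance.start(config);
}

void _react(String threat) {
  // Wipe secrets, close sockets, and block sensitive flows here.
}
```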
Choosing the right response
High‑risk flows (payments, health‑records)
Immediately erase secrets, close sockets, and exit or force re‑authentication.
Medium‑risk flows (chat, productivity)
Switch to read‑only mode, warn the user, and log the incident.
Audit trail
Post a lightweight event (device hash, event type, UTC timestamp) so your SOC can spot trends and decide if a user or region is under attack.
Testing your defense
Rooted emulator: launch the app; _react should fire and secrets must be wiped.
Frida attach: run frida -U -f com.example.app; debugger detection should trigger.
For extra protection, send anonymized metrics to your SOC or backend security dashboard for analysis.
Conclusion & What’s Next
We started this journey in a bustling café, watching an attacker try to intercept our traffic as we sent our "postcard." Ten sections later, our once-vulnerable postcard has transformed into an armored van: HTTPS everywhere, strict certificate validation, pinning for sensitive use cases, secure WebSockets, airtight payload handling, platform policies that block cleartext, automated scans, hands-on MITM drills, and FreeRASP standing guard on compromised devices.
Handle App Security with a Single Solution! Check out Talsec's premium offering.
Key Takeaways:
Encrypt every byte in transit: No http://, no ws://. Always use https:// and wss:// for security.
Trust, but verify: Let Dart’s HttpClient handle certificate validation; only implement pinning when you’re ready to manage the rotation playbook.
With these layers of defense in place, if one layer slips, the others catch it. This defense-in-depth approach is at the heart of OWASP M5 for Flutter and Dart developers.
Up next
In Part Six, we move from securing the wire to securing the binary: Reverse Engineering & Code Protection (M6). We'll crack open a Flutter APK/IPA, show how attackers decompile Dart, inject method swizzles, and siphon hard-coded keys. Then, we’ll teach you how to harden your build so they leave empty-handed.See you there!
On-device malware or tools running on rooted/jailbroken phones
Enterprise or MDM-pushed root CAs that override your app’s trust settings
Certificate Chain: Dart ensures that the certificate is linked to a trusted root authority in your app’s trust store.
Expiry Check: Dart checks if the certificate is expired.
Hostname Matching: Dart ensures that the hostname you requested matches the certificate’s Subject Alternative Name (SAN) or Common Name (CN).
Key Exchange & Encryption: If the certificate passes all checks, Dart and the server exchange session keys and establish an encrypted connection.
Manage certificate pinning to ensure only specific certificates or public keys are trusted.
When Not to Use: Pinning isn’t required for every app, especially if the server’s certificate is expected to remain stable and not change often. But it’s a strong defensive measure if you need to minimize risk.
Proactively rotate pins: Set up calendar reminders or CI/CD hooks to rotate pins before they expire.
Treat handshake failures as fatal—don’t auto-retry insecure WebSocket connections. Never fall back to ws://.
Apply the same timeout and retry backoff logic you use for REST API calls. Just because the WebSocket is open doesn’t mean it’s healthy.
Your app should show a user-friendly error like “Secure connection failed”. Never allow it to silently downgrade to an insecure connection.
For bonus points, automate pin mismatch simulation in CI using mocked certs or a TLS-intercepting proxy.
Debuggers allow step‑through inspection of memory, exposing encryption keys and personal data.
Repackaged APK: decompile with Apktool, rebuild, reinstall; app should refuse to run.
False‑positive check: release build on clean hardware—no callbacks should trigger.
Keep secrets secure: Never store sensitive information in URLs, logs, or plain preferences. Use flutter_secure_storage for safe storage.
Test your app like it's in production: Break the build on TLS regressions, and fail loudly on handshake errors.
Assume hostile devices: Implement runtime checks like freeRASP to halt sensitive flows at the first sign of tampering.
Majid Hajian - Azure & AI advocate, Dart & Flutter community leader, Organizer, author
final uri = Uri.https('api.yourdomain.com', '/profile');
final res = await http.post(uri, headers: {
'Authorization': 'Bearer $token',
'Content-Type': 'application/json',
});
import 'dart:io';
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart';
import 'package:shelf_router/shelf_router.dart';
// Configure routes.
final _router =
Router()
..get('/', _rootHandler)
..get('/echo/<message>', _echoHandler);
Response _rootHandler(Request req) {
return Response.ok('Hello, World!\n');
}
Response _echoHandler(Request request) {
final message = request.params['message'];
return Response.ok('$message\n');
}
SecurityContext getSecurityContext() {
// Bind with a secure HTTPS connection
final chain =
Platform.script
.resolve('certificates/localhost.crt')
.toFilePath(); // Point to the localhost cert
final key =
Platform.script
.resolve('certificates/localhost.key')
.toFilePath(); // Point to the localhost key
return SecurityContext()
..useCertificateChain(chain)
..usePrivateKey(
key,
password: 'dartdart',
); // You can set a password or leave it empty if not used
}
void main(List<String> args) async {
// Use localhost for local testing.
final ip =
InternetAddress
.loopbackIPv4; // This ensures the server binds to localhost.
// Configure a pipeline that logs requests.
final _handler = Pipeline().addMiddleware(logRequests()).addHandler(_router);
// Use port 443 for HTTPS.
final port = int.parse(Platform.environment['PORT'] ?? '443');
final server = await serve(
_handler,
ip,
port,
securityContext: getSecurityContext(),
);
print('Server listening on https://localhost:$port');
}
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<false/> <!-- Disable arbitrary loads by default -->
<!-- Optional dev exception for localhost -->
<key>NSExceptionDomains</key>
<dict>
<key>localhost</key>
<dict>
<key>NSExceptionAllowsInsecureHTTPLoads</key>
<true/> <!-- Allow insecure traffic only for localhost -->
<key>NSIncludesSubdomains</key>
<true/>
</dict>
</dict>
</dict>
import 'package:http/http.dart' as http;
final res = await http
.get(Uri.https('api.yourdomain.com', '/status'))
.timeout(const Duration(seconds: 10));
print('Status ${res.statusCode}');
import 'package:http/http.dart' as http;
final response = await http.get(Uri.parse('https://api.myapp.com/data'));
// If the certificate is invalid, this throws a handshake exception.
import 'dart:io';
SecurityContext getSecurityContext() {
final context = SecurityContext();
// Load certificate chain (e.g., full chain in PEM format)
context.useCertificateChain('path_to_fullchain.pem');
// Load private key (e.g., for a server)
context.usePrivateKey('path_to_privatekey.pem', password: 'your_password');
return context;
}
# Fetch the leaf certificate from the server
openssl s_client -connect api.yourdomain.com:443 -servername api.yourdomain.com </dev/null \
| sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > server_cert.pem
# Generate the public key hash (SPKI)
openssl x509 -in server_cert.pem -noout -pubkey \
| openssl pkey -pubin -outform der \
| openssl dgst -sha256 -binary \
| openssl base64
flutter:
  assets:
    - assets/server_cert.pem
import 'dart:io';
import 'package:flutter/services.dart' show rootBundle;
import 'package:http/io_client.dart';
import 'package:http/http.dart' as http;
Future<http.Client> createPinnedClient() async {
final ctx = SecurityContext(withTrustedRoots: false);
final pem = await rootBundle.load('assets/server_cert.pem');
ctx.setTrustedCertificatesBytes(pem.buffer.asUint8List());
final ioClient = HttpClient(context: ctx)
..badCertificateCallback = (_, __, ___) => false; // Reject anything not in ctx
return IOClient(ioClient);
}
Future<void> fetchSecure() async {
final client = await createPinnedClient();
final res = await client.get(Uri.https('api.yourdomain.com', '/data'));
print(res.body);
client.close();
}
import 'dart:convert';
import 'dart:io';
import 'package:crypto/crypto.dart';
import 'package:http/io_client.dart';
const expected = 'AbCdEfGhIjKlMnOp...'; // The Base64 SPKI hash
HttpClient fingerprintClient() {
final c = HttpClient();
c.badCertificateCallback = (X509Certificate cert, String host, int port) {
if (host != 'api.yourdomain.com') return false;
final hash = sha256.convert(cert.der).bytes;
return base64.encode(hash) == expected; // Compare with expected SPKI hash
};
return c;
}
import 'dart:io';
import 'package:web_socket_channel/io.dart';
HttpClient _pinnedClient() {
// Reuse createPinnedClient() from Section 5 or build your own
final client = HttpClient();
client.badCertificateCallback = (cert, host, port) {
// Reject unless host is our server AND fingerprint matches
return false; // Replace with your real certificate check
};
return client;
}
void connectRealtime() {
final channel = IOWebSocketChannel.connect(
Uri.parse('wss://realtime.yourdomain.com/socket'),
customClient: _pinnedClient(), // Enforces pinning and trust rules
protocols: ['json'], // Optional sub‑protocols
);
channel.stream.listen(
(data) => print('Got data: $data'),
onError: (e) => print('WebSocket error: $e'),
onDone: () => print('WebSocket closed with code: ${channel.closeCode}'),
);
// Send a heartbeat to keep the connection alive
channel.sink.add('{"type":"ping"}');
}
// Inside a dart:io HttpServer request handler
if (WebSocketTransformer.isUpgradeRequest(request)) {
  final socket = await WebSocketTransformer.upgrade(request);
  socket.listen((event) {
    // Handle incoming data (e.g., JSON)
  });
  return; // The connection has been upgraded to a WebSocket
}
final uri = Uri.parse('https://api.yourdomain.com/user?token=abc123');
await http.get(uri);
final uri = Uri.https('api.yourdomain.com', '/user');
await http.post(
uri,
headers: {
'Authorization': 'Bearer abc123',
'Content-Type': 'application/json',
},
body: jsonEncode({'includeSensitive': true}),
);
Each is a critical piece of the mobile security puzzle.
In this tenth and final article, we focus on M10: Insufficient Cryptography.
Let me tell you about the scariest code review I’ve ever done.
A few years ago, I was asked to review a financial app. It handled millions of dollars in transactions. The team used encryption everywhere. They had secure storage. They’d thought about authentication.
But their crypto implementation used DES. The key was hardcoded. It was literally the word password padded to 8 bytes.
DES was considered insecure by the late 1990s. This was in 2021.
The developers weren’t incompetent. They were talented engineers. They simply hadn’t kept up with cryptographic best practices. They copied code from a Stack Overflow answer from 2008. They assumed it was fine.
That’s why this final article exists.
M10: Insufficient Cryptography covers:
weak algorithms
poor key management
improper implementations
insecure random number generation
Let’s get into it.
Source code: All code examples from this article are available as a runnable Flutter project on GitHub:
Understanding the Threat Landscape
Before we get into code, it helps to know who breaks cryptographic systems. It also helps to know how.
Who’s Interested in Breaking Your Crypto?
Crypto issues attract motivated attackers. Breaking encryption exposes everything it was meant to protect.
Who
Motivation
Attack methods
How Cryptographic Systems Get Broken
OWASP rates exploitability as AVERAGE. It’s harder than SQL injection. But it’s devastating when it works.
Attackers rarely break modern algorithms directly. They exploit implementation mistakes.
Common examples:
predictable random numbers
reused nonces
hardcoded keys
deprecated algorithms
Common Weaknesses in Flutter Apps
Here are the cryptographic weaknesses I see most often in Flutter apps:
Weak or deprecated algorithms: MD5, SHA-1, DES, 3DES, RC4
Insufficient key lengths: AES-128 when AES-256 is required, RSA < 2048 bits
Hardcoded keys: keys embedded in source
Each one is an opportunity for attackers. Let’s avoid them.
Algorithm Selection Guide
Cryptographic standards exist for a reason. Don’t reinvent this per project.
Symmetric Encryption
Use this for most “encrypt data” cases.
Algorithm
Key size
Use case
Status
My default is AES-256-GCM. It’s fast on modern devices. It’s widely supported. It gives integrity and confidentiality.
ChaCha20-Poly1305 is an excellent alternative. It can outperform AES on devices without AES hardware.
Asymmetric Encryption
Use this for key exchange and digital signatures.
Algorithm
Key size
Use case
Status
For new projects, I lean toward X25519 and Ed25519. RSA-OAEP with 2048+ bits is still fine.
Hashing and Key Derivation
Password hashing is not general-purpose hashing.
Algorithm
Use case
Status
Argon2id, bcrypt, and scrypt are deliberately slow. That’s a feature.
Secure Implementation in Flutter
Setting Up Cryptography Packages
Add these to pubspec.yaml:
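Something along these lines (versions are placeholders — take the latest from pub.dev):

```yaml
dependencies:
  cryptography: ^2.7.0            # modern AEADs, Argon2id, PBKDF2
  flutter_secure_storage: ^9.0.0  # Keystore/Keychain-backed key storage
```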
The cryptography package is my default. It’s well maintained. It supports modern algorithms.
Symmetric Encryption with AES-GCM
This is a complete example using AES-256-GCM and secure key storage.
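A sketch of the encrypt/decrypt round trip with package:cryptography; persisting the key with flutter_secure_storage is left to the surrounding app code:

```dart
import 'dart:convert';
import 'package:cryptography/cryptography.dart';

Future<void> demo() async {
  final algorithm = AesGcm.with256bits();
  final secretKey = await algorithm.newSecretKey(); // random 256-bit key

  // SecretBox bundles the nonce, ciphertext, and authentication tag.
  final secretBox = await algorithm.encrypt(
    utf8.encode('sensitive payload'),
    secretKey: secretKey,
  );

  // Decryption verifies the tag; tampered ciphertext throws.
  final clearText = await algorithm.decrypt(secretBox, secretKey: secretKey);
  print(utf8.decode(clearText));
}
```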
Using ChaCha20-Poly1305
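The same pattern with a different AEAD — a sketch using package:cryptography's ChaCha20-Poly1305 implementation:

```dart
import 'dart:convert';
import 'package:cryptography/cryptography.dart';

// ChaCha20-Poly1305 can outperform AES on devices without AES hardware.
Future<SecretBox> sealWithChaCha(String message, SecretKey key) {
  final algorithm = Chacha20.poly1305Aead();
  return algorithm.encrypt(utf8.encode(message), secretKey: key);
}
```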
Common Mistakes to Avoid
Password Hashing Best Practices
Rules:
Never store passwords in plaintext.
Never use fast hashes for passwords (MD5/SHA-1/SHA-256).
Always use a unique salt per password.
Use Argon2id, bcrypt, or scrypt.
Argon2id (Recommended)
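A sketch with package:cryptography's Argon2id. The parameter values follow common guidance but are assumptions — tune memory and iterations for your target devices, and verify the constructor signature against your package version:

```dart
import 'package:cryptography/cryptography.dart';

Future<List<int>> hashPassword(String password, List<int> salt) async {
  final argon2id = Argon2id(
    parallelism: 1,
    memory: 19 * 1024, // in 1 KiB blocks (~19 MiB)
    iterations: 2,
    hashLength: 32,
  );
  final key = await argon2id.deriveKeyFromPassword(
    password: password,
    nonce: salt, // unique random salt per password
  );
  return key.extractBytes();
}
```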
PBKDF2 (When Argon2 Isn’t Available)
OWASP’s password storage guidance currently recommends 600,000 iterations for PBKDF2-HMAC-SHA256 (its 2021 guidance said 310,000).
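A sketch of the fallback, again with package:cryptography:

```dart
import 'package:cryptography/cryptography.dart';

// PBKDF2-HMAC-SHA256 when Argon2id isn't available on your platform.
Future<SecretKey> pbkdf2Key(String password, List<int> salt) {
  final pbkdf2 = Pbkdf2(
    macAlgorithm: Hmac.sha256(),
    iterations: 600000, // per OWASP's password storage guidance
    bits: 256,
  );
  return pbkdf2.deriveKeyFromPassword(password: password, nonce: salt);
}
```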
Key Management Best Practices
Crypto is only as strong as your key management.
Key Generation
Never use Random() for security. Use Random.secure() or let the crypto library generate keys.
Key Storage Architecture
Don’t store keys in plaintext preferences or files.
Android: Keystore-backed secure storage
iOS: Keychain-backed secure storage
Digital Signatures
Use signatures to verify authenticity and detect tampering.
Secure Random Number Generation
Random() is for UI and games. It’s not a CSPRNG.
Use Random.secure() for security.
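For example, generating a 256-bit key from the OS-backed CSPRNG:

```dart
import 'dart:math';
import 'dart:typed_data';

// Random.secure() draws from the platform's cryptographic entropy source.
Uint8List randomBytes(int length) {
  final rng = Random.secure();
  return Uint8List.fromList(
    List<int>.generate(length, (_) => rng.nextInt(256)),
  );
}

final key = randomBytes(32); // 32 bytes = 256 bits
```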
Security Checklist
Use this to audit your crypto.
Cryptographic Implementation Audit
Algorithm selection
☐ Use AES-256-GCM or ChaCha20-Poly1305.
☐ Avoid MD5 / SHA-1 / DES / 3DES.
Quick Reference: Minimum Parameters
AES-GCM: 256-bit key, 96-bit nonce, 128-bit tag
ChaCha20-Poly1305: 256-bit key, 96-bit nonce
RSA: 2048-bit minimum (3072-bit recommended)
Conclusion
Cryptography is powerful. It’s also easy to get wrong.
The difference between secure and insecure crypto is usually details:
Random.secure() vs Random()
AES-GCM vs AES-ECB
Argon2id vs SHA-256 for passwords
If you follow the patterns here and stick to well-tested libraries, you’ll avoid most real-world crypto failures.
Resources
OWASP Top 10 For Flutter – M8: Security Misconfiguration in Flutter & Dart
Welcome back to our deep dive into the OWASP Mobile Top 10. OWASP (Open Worldwide Application Security Project) maintains this list to help developers prioritize mobile security efforts.
In earlier parts, we tackled:
Data exfiltration
Key theft, algorithm manipulation
Competitors
Industrial espionage
Decryption of trade secrets
Security researchers
Finding vulnerabilities
Protocol analysis, side-channel attacks
Predictable IVs: reused IVs or sequential IVs
Insecure random generation: using Random() instead of Random.secure()
Missing authenticated encryption: AES-CBC without HMAC
Poor password hashing: SHA-256 instead of bcrypt / Argon2id
Improper key storage: SharedPreferences or plain files
Each is a critical piece of the mobile security puzzle.
In this eighth article, we focus on M8: Security Misconfiguration, a vulnerability that often hides in plain sight. Unlike the complex code vulnerabilities we discussed earlier, misconfigurations are usually simple oversights. A flag left enabled. A permission not restricted. A default setting unchanged. These mistakes are easy to make. They also go unnoticed.
This hits Flutter devs hard. We ship multiple configuration layers at once. Dart code, Android’s AndroidManifest.xml, iOS’s Info.plist, Gradle files, Xcode settings, and more. Each layer adds its own failure modes. A review that only covers Dart misses a big chunk of the attack surface.
I’ve reviewed many Flutter projects. Security misconfigurations are among the most common issues I see. The good news is simple. They’re also among the easiest to fix.
Let’s break down what security misconfiguration means for Flutter apps. Let’s harden your configs.
Source code: All examples from this article are available as a runnable Flutter project on GitHub:
Security misconfiguration occurs when security settings are defined, implemented, or maintained incorrectly. Think of it like leaving your front door unlocked. It’s not a flaw in the lock itself. It’s how you’re using (or not using) the security mechanism.
In mobile apps, security misconfiguration takes many forms:
The following diagram illustrates the various categories of security misconfiguration:
Why Flutter Developers Must Pay Extra Attention
Here's a reality check for Flutter developers: you're responsible for configuring two completely different platforms correctly, plus managing Dart-level configurations. A single misconfiguration on either Android or iOS can compromise your entire app, regardless of how secure your Dart code is.
Let me break down what you’re actually managing:
Configuration layer | Android | iOS | Flutter/Dart
App manifest | AndroidManifest.xml | Info.plist | N/A
Build config | build.gradle | Xcode settings | —
That's a lot of surface area to secure. And here's the thing: most Flutter tutorials and documentation focus on the Dart layer. The platform-specific security configurations are often an afterthought, or worse, copy-pasted from StackOverflow without understanding what they do.
Business and Technical Impact
What can actually go wrong with security misconfigurations? According to OWASP, the consequences include:
Unauthorized data access: Attackers exploit exposed components and read user data.
Account hijacking: Weak session or auth configs let attackers take over accounts.
Data breaches: Backups extracted from unprotected storage end up leaked or sold.
Compliance violations: GDPR, CCPA, and similar regulatory penalties can be substantial.
Reputation damage: Public disclosure erodes user trust.
So let's get into the specific misconfigurations to watch out for.
Android Security Misconfigurations in Flutter
Android configuration is particularly prone to security issues because of its flexibility and the sheer number of settings available. Let's walk through the most critical misconfigurations I encounter when reviewing Flutter apps.
1. Debug Mode in Production
This one is the classic "oops" moment. Releasing an app with android:debuggable="true" is like shipping a car with the hood permanently unlatched. Anyone can look inside.
What it looks like:
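A sketch of the misconfiguration (app name and icon are placeholders):

```xml
<!-- android/app/src/main/AndroidManifest.xml -->
<!-- DANGEROUS: never ship a release build with this flag -->
<application
    android:label="MyApp"
    android:icon="@mipmap/ic_launcher"
    android:debuggable="true">
    <!-- ... activities ... -->
</application>
```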
Why this is catastrophic:
When debug mode is enabled, attackers can:
Attach debuggers to your running app and step through code
Inspect and modify memory contents at runtime
Set breakpoints to bypass security checks
Extract sensitive data directly from memory
I've seen production apps with this flag enabled—it happens more often than you'd think, especially when developers manually override build configurations for testing and forget to revert.
The good news: Flutter's build system handles this correctly by default. The debuggable flag is set to true only in debug builds. However, you should still verify your release builds, especially if you've ever modified build configurations manually:
Expected output for a correctly configured release build:
If you see value='true' or debuggable="true", stop and fix your build configuration before shipping.
Explicitly ensure debug is disabled in release builds:
For extra safety, explicitly set the flag in your Gradle build configuration. Belt and suspenders approach:
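A minimal Gradle sketch; block names assume the default Flutter project layout, and the signing config is assumed to exist in your project:

```groovy
// android/app/build.gradle
android {
    buildTypes {
        release {
            debuggable false        // explicit, even though it's the default
            minifyEnabled true
            shrinkResources true
            signingConfig signingConfigs.release
        }
    }
}
```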
2. Backup Configuration Leaks
Here's a subtle one that catches many developers off guard. By default, Android backs up your app data to Google Drive. Sounds helpful, right? The problem is that this backup might include sensitive information: authentication tokens, cached user data, and encryption keys stored in SharedPreferences.
The problem in action:
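What the risky configuration looks like (note that omitting the attribute entirely has the same effect on older Android versions, since backups default to enabled):

```xml
<!-- android/app/src/main/AndroidManifest.xml -->
<!-- Problem: app data, including SharedPreferences, is backed up to the cloud -->
<application
    android:label="MyApp"
    android:allowBackup="true">
</application>
```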
How attackers exploit this:
The following sequence shows how backup extraction works in practice:
An attacker with brief physical access to a device (think: borrowed phone, lost device, or malicious app with ADB access) can extract your entire app's data:
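The steps above can be sketched as a command sequence. The package name is hypothetical, and note that `adb backup` is restricted or deprecated on recent Android versions; it remains a real threat on older devices and debuggable builds:

```shell
# Create a backup of the target app's data (no APK needed)
adb backup -f app_backup.ab -noapk com.example.myapp

# Strip the 24-byte Android backup header and inflate the zlib stream into a tar
dd if=app_backup.ab bs=1 skip=24 | \
  python3 -c "import sys,zlib;sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))" \
  > app_backup.tar

# SharedPreferences XML (tokens, cached data) is now readable in plain text
tar -tf app_backup.tar
```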
Output:
Nuance on flutter_secure_storage: If you configure it with AndroidOptions(encryptedSharedPreferences: true), the encryption key lives in the Android Keystore and is not included in backups. The extracted XML will contain ciphertext that is useless without the key.
However, when using the default configuration (or older package versions < 5.0), the key material may be stored alongside the data. That makes backup extraction a real threat.
Always enable encryptedSharedPreferences: true on Android.
The fix:
Explicitly disable backups. Or, if you need selective backup for user convenience, carefully exclude sensitive files. See the <application> element docs for the full attribute reference:
For Android 12+ (API 31+), create android/app/src/main/res/xml/data_extraction_rules.xml:
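A sketch of the rules file; the excluded paths are illustrative and should match where your app actually keeps sensitive files:

```xml
<!-- android/app/src/main/res/xml/data_extraction_rules.xml (Android 12+) -->
<data-extraction-rules>
    <cloud-backup>
        <exclude domain="sharedpref" path="." />
        <exclude domain="file" path="sensitive/" />
    </cloud-backup>
    <device-transfer>
        <exclude domain="sharedpref" path="." />
        <exclude domain="file" path="sensitive/" />
    </device-transfer>
</data-extraction-rules>
```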
For older Android versions, create android/app/src/main/res/xml/backup_rules.xml:
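The equivalent pre-Android-12 rules file (excluded paths again illustrative):

```xml
<!-- android/app/src/main/res/xml/backup_rules.xml (Android 11 and below) -->
<full-backup-content>
    <exclude domain="sharedpref" path="." />
    <exclude domain="file" path="sensitive/" />
    <exclude domain="database" path="cache.db" />
</full-backup-content>
```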
And reference it:
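A sketch of the manifest wiring; the resource names assume the two rule files are named backup_rules.xml and data_extraction_rules.xml:

```xml
<!-- android/app/src/main/AndroidManifest.xml -->
<application
    android:allowBackup="true"
    android:fullBackupContent="@xml/backup_rules"
    android:dataExtractionRules="@xml/data_extraction_rules">
    <!-- or simply android:allowBackup="false" to disable backups entirely -->
</application>
```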
3. Cleartext Traffic (HTTP)
In 2026, there's really no excuse for allowing unencrypted HTTP traffic. Yet I still see Flutter apps with this misconfiguration regularly, usually because developers enabled it "temporarily" for local testing and forgot to disable it.
The problematic setting:
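What the misconfiguration looks like in the manifest:

```xml
<!-- DANGEROUS: allows plain HTTP app-wide -->
<application
    android:label="MyApp"
    android:usesCleartextTraffic="true">
</application>
```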
When this flag is enabled, your app can communicate over unencrypted HTTP. Any attacker on the same network (coffee shop WiFi, compromised router, or hostile network) can see everything your app sends and receives: authentication tokens, personal data, everything.
The fix:
Create a proper network security configuration. This approach gives you fine-grained control and makes your intent explicit. Create android/app/src/main/res/xml/network_security_config.xml:
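A sketch of such a configuration; the emulator-localhost exception is an assumption you can drop if you don't need local HTTP testing:

```xml
<!-- android/app/src/main/res/xml/network_security_config.xml -->
<network-security-config>
    <!-- Default for all connections: HTTPS only -->
    <base-config cleartextTrafficPermitted="false" />
    <!-- Narrow exception: cleartext only for the emulator's host loopback -->
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="false">10.0.2.2</domain>
    </domain-config>
</network-security-config>
```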
Reference it in your manifest:
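The manifest attribute that activates the config (resource name assumes the file above is network_security_config.xml):

```xml
<application
    android:networkSecurityConfig="@xml/network_security_config">
</application>
```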
4. Certificate Pinning
If your app communicates with sensitive backend services (especially for financial or health data), consider adding certificate pinning. This prevents attackers from intercepting traffic even if they manage to install a rogue CA (Certificate Authority) certificate on the device.
Here's how to add certificate pinning in your network security config:
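A sketch of a pin-set; the domain, expiration date, and Base64 hashes are placeholders you must replace with your own values:

```xml
<network-security-config>
    <domain-config>
        <domain includeSubdomains="true">api.example.com</domain>
        <pin-set expiration="2027-01-01">
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
            <!-- backup pin, in case the primary certificate rotates -->
            <pin digest="SHA-256">BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```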
Generate pin hash from your server's certificate:
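One way to compute the SPKI hash with the standard openssl toolchain (replace the hypothetical host with your own):

```shell
openssl s_client -connect api.example.com:443 -servername api.example.com </dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64
```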
Output:
Use this Base64 string as the <pin> value in your config.
Important: Always include at least two pins (a backup), and set an expiration date. If your certificate rotates and you only have one pin, your app will stop working.
5. Exported Components
This is one of the most frequently overlooked security issues in Android development. Components marked as exported="true" can be accessed by any other app on the device. This includes activities, services, broadcast receivers, and content providers.
Since Android 12 (API 31), the android:exported attribute is required for every component that declares an <intent-filter>. Builds targeting API 31+ will fail without it.
The problem:
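A sketch of the risk (the activity name is hypothetical):

```xml
<!-- Problem: any app on the device can start this activity and feed it data -->
<activity
    android:name=".InternalSettingsActivity"
    android:exported="true" />
```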
Why this matters for Flutter:
Flutter's main activity is often exported for deep linking. While this is necessary for handling deep links, you need to be extremely careful about validating any data that comes through:
The Fix:
Set exported="false" for internal components
Add permission requirements for exported components
Validate all inputs from intent data
Validate deep link data in Flutter:
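A minimal validation sketch; the allow-lists and function names are illustrative, and the Uri is assumed to arrive from whatever deep-link plugin you use:

```dart
const allowedHosts = {'myapp.example.com'};
const allowedPaths = {'/product', '/profile'};

bool isDeepLinkSafe(Uri uri) {
  // 1. Only accept HTTPS links from domains we control.
  if (uri.scheme != 'https' || !allowedHosts.contains(uri.host)) return false;
  // 2. Only accept known routes — never navigate to arbitrary paths.
  if (!allowedPaths.contains(uri.path)) return false;
  // 3. Reject parameters that could smuggle URLs or markup into the app.
  for (final value in uri.queryParameters.values) {
    if (value.contains('://') || value.contains('<')) return false;
  }
  return true;
}

void handleDeepLink(Uri uri) {
  if (!isDeepLinkSafe(uri)) {
    return; // log and drop — never act on an unvalidated link
  }
  // Safe to route, e.g.:
  // Navigator.pushNamed(context, uri.path, arguments: uri.queryParameters);
}
```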
The following flow shows how every incoming deep link should be validated before the app acts on it:
Example output when processing deep links:
6. Excessive Permissions
This is one of those areas where I see a lot of "it works, ship it" mentality. Requesting unnecessary permissions increases your app's attack surface and violates the principle of least privilege. Every permission you request is a potential vector for abuse—by malicious code in your dependencies, by attackers who compromise your app, or by you accidentally leaking data you shouldn't have access to.
I've reviewed apps that request camera, contacts, location, SMS, microphone, and storage access... for a simple note-taking app. Each of those permissions opens a door. The more doors you open, the more you have to defend.
The problematic approach:
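An illustration of the over-request pattern for the note-taking app mentioned above:

```xml
<!-- A note-taking app that requests far more than it needs -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.READ_SMS" />
```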
The better approach:
Only request what you actually need, and request it at runtime with a proper explanation. Users have become increasingly permission-aware; they'll uninstall your app if you ask for things that don't make sense.
Request permissions properly in Flutter:
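A sketch using the permission_handler package (assumed as a dependency; the function name is illustrative):

```dart
import 'package:permission_handler/permission_handler.dart';

Future<bool> requestCameraForScanning() async {
  // Check first — don't prompt users who already decided.
  var status = await Permission.camera.status;
  if (status.isGranted) return true;

  // Explain *why* in your own UI before triggering the OS dialog,
  // then request only this single permission at the moment it's needed.
  status = await Permission.camera.request();

  if (status.isPermanentlyDenied) {
    // The OS dialog won't show again; guide the user to settings instead.
    await openAppSettings();
    return false;
  }
  return status.isGranted;
}
```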
Console output (example flow):
iOS Security Misconfigurations in Flutter
Now let's shift our attention to iOS. While iOS has a reputation for being more secure than Android (and in some ways it is), it's not immune to misconfiguration issues. In fact, some iOS-specific security features are so easy to disable that developers do it without realizing the implications.
1. App Transport Security (ATS)
App Transport Security is Apple's way of enforcing secure network connections. It requires HTTPS by default, with modern TLS versions and cipher suites. It's a fantastic security feature that protects your users from man-in-the-middle attacks.
And yet, one of the most common Stack Overflow answers to "my app can't connect to my server" is "just disable ATS." This is terrible advice that has made its way into countless production apps.
The problematic configuration:
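What the blanket ATS opt-out looks like:

```xml
<!-- ios/Runner/Info.plist — DON'T do this -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```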
When you add this to your Info.plist, you're telling iOS "I don't care about network security, let me talk to any server over any protocol." Your app can then be tricked into communicating with malicious servers, or have its traffic intercepted on insecure networks.
The secure approach:
If you absolutely must connect to a server that doesn't support HTTPS (and please, pressure them to fix this), use targeted exceptions rather than disabling ATS entirely:
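A sketch of a scoped exception; the legacy host is hypothetical:

```xml
<!-- Scope the exception to a single legacy host instead of disabling ATS -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>legacy.example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
            <key>NSIncludesSubdomains</key>
            <false/>
        </dict>
    </dict>
</dict>
```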
Best practice: Don't disable ATS at all. If you're hitting HTTP endpoints, the right fix is to update those servers to use HTTPS, not to weaken your app's security. Let's Encrypt makes free certificates available to everyone—there's no excuse for HTTP anymore.
2. Missing Privacy Usage Descriptions
iOS requires you to explain why your app needs access to sensitive features—camera, location, contacts, etc. This isn't just a formality; it's a privacy protection that helps users make informed decisions about what they're allowing.
Here's the fun part: if you request a permission without providing the corresponding usage description, your app doesn't show a generic dialog. It crashes. Immediately. And this will definitely get caught in App Store review if you somehow missed it in testing.
What happens without descriptions:
No graceful error handling, no fallback—just a crash. So make sure you add all necessary descriptions to ios/Runner/Info.plist:
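A sketch of the descriptions; the wording is illustrative — describe your app's actual reason, not a generic excuse:

```xml
<!-- ios/Runner/Info.plist -->
<key>NSCameraUsageDescription</key>
<string>Used to scan documents you add to your notes.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Lets you attach existing photos to a note.</string>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Tags a note with the place where you created it.</string>
```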
3. Insecure Keychain Configuration
The iOS Keychain is one of the most secure places to store sensitive data on the device, but only if you configure it correctly. The Keychain has different accessibility levels that determine when your data can be accessed, and choosing the wrong one can leave your users' data exposed.
The problematic approach (in native code):
Using kSecAttrAccessibleAlways means the data can be read even when the device is locked. If someone steals the phone, they can potentially extract this data without knowing the passcode.
The secure approach in Flutter (using flutter_secure_storage):
The good news is that flutter_secure_storage gives you control over Keychain accessibility. Here's how to configure it properly:
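A configuration sketch assuming flutter_secure_storage v5+ as a dependency:

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

const storage = FlutterSecureStorage(
  iOptions: IOSOptions(
    // Readable only after the first unlock since boot, never synced to iCloud.
    accessibility: KeychainAccessibility.first_unlock_this_device,
  ),
  aOptions: AndroidOptions(
    // Keys live in the Android Keystore; backups only contain ciphertext.
    encryptedSharedPreferences: true,
  ),
);

Future<void> saveToken(String token) =>
    storage.write(key: 'auth_token', value: token);
```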
Output:
Understanding Keychain Accessibility Options:
Choosing the right accessibility level requires balancing security with user experience. Here’s a quick breakdown:
Option | When accessible | Synced to iCloud
passcode | Only when unlocked, and a passcode is set | No
unlocked | When the device is unlocked | Yes
kSecAttrAccessibleAlways and kSecAttrAccessibleAlwaysThisDeviceOnly have been deprecated since iOS 12. Any code still targeting these values will trigger App Store review warnings.
Migrate to first_unlock_this_device or unlocked_this_device.
For most sensitive data like auth tokens, I recommend first_unlock_this_device. This provides strong protection while still allowing background operations (like push notification handling) to access the data after the user has unlocked their device at least once since reboot.
4. URL Scheme Hijacking
Custom URL schemes are a convenient way to open your app from links, but they come with a significant security risk: any app can register the same URL scheme. If a malicious app registers your scheme first (or the user installs a malicious app that does), it can intercept links meant for your app.
The problematic approach:
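What a custom scheme registration looks like (the scheme name is illustrative):

```xml
<!-- ios/Runner/Info.plist — any other app can also claim myapp:// -->
<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>myapp</string>
        </array>
    </dict>
</array>
```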
There's nothing stopping a bad actor from creating an app that also registers the myapp:// scheme. When a user taps a link, iOS might open their malicious app instead of yours, and they could capture OAuth callbacks, payment confirmations, or other sensitive deep link data.
Instead of custom URL schemes, use Universal Links. These are tied to a domain you own and cryptographically verified by Apple. No other app can intercept them.
You'll also need to host an apple-app-site-association file at your domain's root (served over HTTPS, no redirect):
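A sketch of the association file; the team ID, bundle ID, and paths are placeholders for your own values:

```json
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "TEAMID1234.com.example.myapp",
        "paths": ["/auth/*", "/pay/*"]
      }
    ]
  }
}
```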
This file must be served at https://yourapp.com/.well-known/apple-app-site-association. Apple will fetch and verify this file when your app is installed, ensuring only your app can handle links to your domain.
Flutter/Dart Level Misconfigurations
Beyond Android and iOS platform configurations, there are security considerations in your Dart code as well. Let's look at some common issues.
1. Debug Mode Detection Bypass
Your app should behave differently in debug versus release mode. Sensitive endpoints, verbose logging, and development shortcuts should never be available in production builds. Here's a pattern I use to manage this:
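A sketch of such a pattern using Flutter's foundation constants; the class name and URLs are hypothetical:

```dart
import 'package:flutter/foundation.dart';

class AppConfig {
  // Verbose logging and dev endpoints only ever exist outside release builds.
  static const bool verboseLogging = kDebugMode;
  static const String apiBaseUrl = kReleaseMode
      ? 'https://api.example.com'       // production
      : 'https://staging.example.com';  // debug/profile builds
}

void log(String message) {
  // Compiled away in release: kDebugMode is a const false there.
  if (AppConfig.verboseLogging) debugPrint(message);
}
```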
Output (debug build):
Output (release build):
2. Insecure HTTP Client Configuration
One of the most dangerous patterns I see in Flutter codebases is disabling certificate validation "to make development easier." The problem is, this often gets left in production code, or worse, it's deliberately added to bypass certificate errors without understanding why those errors exist.
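A sketch of the anti-pattern and a tightly scoped, debug-only alternative; the localhost carve-out is an assumption for local development:

```dart
import 'dart:io';
import 'package:flutter/foundation.dart';

HttpClient buildHttpClient() {
  final client = HttpClient();

  // NEVER do this — it accepts any certificate, enabling MITM:
  // client.badCertificateCallback = (cert, host, port) => true;

  if (kDebugMode) {
    // If you truly must trust a self-signed dev cert, scope it to one host
    // and let the kDebugMode guard compile it out of release builds.
    client.badCertificateCallback =
        (X509Certificate cert, String host, int port) => host == 'localhost';
  }
  return client;
}
```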
Output:
Why badCertificateCallback and not a plugin? Dart's HttpClient gives you direct access to the TLS handshake. For package:http or Dio users, you can pass a custom SecurityContext or HttpClient to get the same effect. The trade-off: native-level pinning (e.g., network_security_config.xml) fires before Dart code runs, so it blocks non-Dart traffic too.
If you're seeing certificate errors, the solution is to fix the server's certificate configuration, not to tell your app to ignore security errors.
3. Insecure SharedPreferences Usage
SharedPreferences is incredibly convenient for storing small bits of data, but it's not encrypted. On a rooted or jailbroken device, anyone can read its contents. I've seen apps store auth tokens, API keys, and even passwords in SharedPreferences. Don't be one of those apps.
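A sketch of the split, assuming shared_preferences and flutter_secure_storage as dependencies:

```dart
import 'package:shared_preferences/shared_preferences.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

Future<void> persistSession(String authToken) async {
  final prefs = await SharedPreferences.getInstance();

  // Fine for SharedPreferences: non-sensitive UI state.
  await prefs.setString('theme', 'dark');
  await prefs.setBool('onboarding_done', true);

  // NOT fine: this would land as plaintext XML/plist on disk.
  // await prefs.setString('auth_token', authToken);  // don't
  const secure = FlutterSecureStorage();
  await secure.write(key: 'auth_token', value: authToken);
}
```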
Output:
The rule is simple: if you wouldn't want it on the front page of a newspaper, don't put it in SharedPreferences.
4. Logging Sensitive Information
Logging is essential for debugging, but it's also a common source of security leaks. I've seen production apps that logged full user credentials, API keys, and payment information—all visible to anyone with access to device logs.
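A safe-logging sketch; the redaction helper is illustrative:

```dart
import 'package:flutter/foundation.dart';

String redactEmail(String email) {
  final at = email.indexOf('@');
  if (at <= 1) return '***';
  return '${email[0]}***${email.substring(at)}';
}

void logLogin(String email, String token) {
  if (!kDebugMode) return; // no-op in release builds
  // Never log raw credentials or tokens, even in debug.
  debugPrint('Login attempt for ${redactEmail(email)}');
  debugPrint('Token received (length: ${token.length})');
}
```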
Debug-mode output:
Even in debug mode, be thoughtful about what you log. Screenshots and screen recordings can capture log output, and you never know where those might end up.
Security Configuration Audit Checklist
Here's a checklist you can use to audit your Flutter app's security configuration. I recommend going through this checklist before every major release, it's saved me from shipping vulnerable code more than once.
Android Configuration
AndroidManifest.xml
android:debuggable is NOT set to true (or is absent)
android:allowBackup="false" or properly configured backup rules
android:usesCleartextTraffic="false"
android:networkSecurityConfig points to secure config
No unnecessary exported="true" on components
Exported components have permission requirements
Only necessary permissions are declared
for sensitive apps (prevents data retention on uninstall)
build.gradle
minifyEnabled true for release builds
shrinkResources true for release builds
Release signing configured properly
debuggable false explicitly set for release
network_security_config.xml
cleartextTrafficPermitted="false" as default
Certificate pinning for sensitive endpoints
No overly broad exceptions
debug-overrides only for debugging
iOS Configuration
Info.plist
No NSAllowsArbitraryLoads set to true
Specific domain exceptions only if necessary
All privacy usage descriptions present and accurate
URL schemes are documented and necessary
Entitlements
Only necessary entitlements enabled
Associated Domains configured for Universal Links
Keychain sharing groups are appropriate
Xcode Settings
Code signing configured properly
Debug builds don't use release certificates
Bitcode settings appropriate for your needs
Dart/Flutter Configuration
Code configuration
No hardcoded secrets or API keys
Proper environment configuration
Secure storage used for sensitive data
HTTP client validates certificates
Deep links are validated before processing
Logging doesn't include PII in release
Debug features disabled in release mode
Dependencies
All packages are from trusted sources
No known vulnerabilities in dependencies
Minimum necessary permissions for plugins
Automated Security Scanning
Manual checklists are great, but humans make mistakes, especially when we're rushing to meet deadlines. That's why I strongly recommend implementing automated security checks in your CI/CD pipeline. Here's a GitHub Actions workflow that catches common misconfigurations before they reach production:
Note: you may adjust the following config a bit to match your project setup!
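A minimal workflow sketch; the paths assume the default Flutter project layout, and the grep patterns are intentionally simple:

```yaml
# .github/workflows/security-scan.yml
name: security-scan
on: [pull_request]
jobs:
  config-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Fail on debuggable manifest
        run: |
          ! grep -R 'android:debuggable="true"' android/app/src/main/AndroidManifest.xml
      - name: Fail on cleartext traffic
        run: |
          ! grep -R 'usesCleartextTraffic="true"' android/
      - name: Fail on disabled ATS
        run: |
          ! grep -A1 'NSAllowsArbitraryLoads' ios/Runner/Info.plist | grep -q '<true/>'
      - name: Fail on obvious hardcoded secrets
        run: |
          ! grep -RInE '(api[_-]?key|secret)\s*=\s*"[A-Za-z0-9]{16,}"' lib/
```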
This workflow won't catch everything, but it will catch the most common and dangerous misconfigurations. I've seen this simple check prevent several "oops, I shipped with debug mode enabled" incidents.
Runtime Configuration Validation
For an extra layer of protection, you can add runtime checks that validate your app's configuration at startup. This is particularly useful for catching issues that might slip through static analysis:
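A sketch of a startup check; the ".dev" package-id convention and class name are assumptions — adapt them to your flavor setup:

```dart
import 'package:flutter/foundation.dart';

class ConfigValidator {
  static void validate({required String packageName}) {
    final issues = <String>[];

    // A production package id should never run in debug mode, and vice versa.
    if (kDebugMode && !packageName.endsWith('.dev')) {
      issues.add('Debug build running with production package: $packageName');
    }
    if (kReleaseMode && packageName.endsWith('.dev')) {
      issues.add('Release build using the development package id');
    }

    for (final issue in issues) {
      debugPrint('CONFIG WARNING: $issue');
    }
    // Fails fast in debug; in release, consider reporting via telemetry instead.
    assert(issues.isEmpty, issues.join('\n'));
  }
}
```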
Output (debug, non-production package):
Output (release, production package accidentally in debug):
Conclusion
Security misconfiguration is one of the most common vulnerabilities in mobile apps, and honestly, one of the most frustrating because it's so preventable. Unlike complex vulnerabilities that require deep security expertise to understand, misconfigurations are simply settings that someone forgot to change or didn't realize were insecure.
As Flutter developers, we have the added responsibility of managing configurations across multiple platforms simultaneously. Every time you configure Android's AndroidManifest.xml, you need to remember to check iOS's Info.plist too. Every permission you add to Gradle needs a corresponding entry in Xcode. It's a lot to keep track of.
Here's what I want you to take away from this article:
Disable debug features in production builds. Verify before release.
Disable or restrict backups so attackers can’t extract user data.
Enforce HTTPS everywhere. Add pinning for high-security cases.
Minimize permissions to what you need. Users notice. Attackers too.
Protect exported components with permissions. Validate deep link inputs.
Automate security checks in CI/CD. Don’t rely on manual review.
Audit regularly. Configurations drift as features ship.
The key insight is that a single misconfiguration can undermine all your other security efforts. You can implement perfect encryption, bulletproof authentication, and thorough input validation—but if you leave android:debuggable="true" in your release build, attackers can bypass it all.
Make configuration review a standard part of your development process. Add it to your pull request checklist. Run automated scans on every commit. It's boring work, but it's the kind of boring work that prevents headlines.
In the next article, we'll explore M9: Insecure Data Storage, where we'll dive deep into how to properly store sensitive data on mobile devices. Spoiler alert: it's more nuanced than just "use encrypted storage."
In this sixth article, we focus on M6: Inadequate Privacy Controls, a risk that lurks not in broken code or cracked crypto, but in how we collect, use, and protect user data.
This article isn’t just about avoiding legal trouble; it’s about writing Flutter apps that respect user privacy by design. We’ll start by defining what OWASP means by “Inadequate Privacy Controls,” then dive into practical Dart/Flutter scenarios where privacy can slip: from apps over-collecting personal info, to leaky logs, unchecked third-party SDKs, and more.
Let's get started.
Understanding Privacy in Mobile Apps
What is Privacy in Mobile Apps?
Privacy in mobile apps revolves around protecting PII (Personally Identifiable Information): data that can identify an individual. This includes sensitive information like:
Names
Addresses
Credit card details
Email addresses
Inadequate privacy controls occur when apps fail to properly collect, store, transmit, or manage this data, making it vulnerable to unauthorized access, misuse, or disclosure. Attackers can exploit these weaknesses to steal identities, misuse payment data, or even blackmail users, leading to breaches of confidentiality, integrity, or availability.
Why Privacy Matters
The consequences of inadequate privacy controls fall into two broad areas:
Technical Impact: While direct system damage may be minimal unless PII includes authentication data, manipulated data can disrupt backend systems or render apps unusable.
Business Impact: Privacy breaches can lead to:
Legal Violations: Non-compliance with regulations like GDPR (EU), CCPA (California), PDPA, PIPEDA (Canada), or LGPD (Brazil) can result in hefty fines.
For Flutter developers, prioritizing privacy is not just a technical necessity but a legal and ethical obligation.
Core Privacy Pitfalls in Flutter Apps
Now that we understand the implications of inadequate privacy controls, let’s examine the common scenarios in which Flutter apps can encounter M6 issues.
1. Excessive Data Collection vs. Data Minimization
One of the most pervasive privacy issues is simply collecting more personal data than necessary. Every extra field or sensor reading is a liability if you don’t need it. Common signs of over-collection in Flutter apps include asking for broad permissions or data your app’s core functionality doesn’t require, or sampling data more frequently than needed.
For example, suppose we have a fitness app that wants to track runs. A bad practice would be to request continuous fine-grained location updates even when the app is in the background, and to collect additional identifiers “just in case”:
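The over-collection anti-pattern, sketched with the geolocator package (assumed dependency; the function and field names are illustrative):

```dart
import 'package:geolocator/geolocator.dart';

Stream<Map<String, dynamic>> trackRunGreedily(String email, String deviceId) {
  return Geolocator.getPositionStream(
    locationSettings: const LocationSettings(
      accuracy: LocationAccuracy.best, // maximum fidelity, all the time
      distanceFilter: 0,               // an update for every movement
    ),
  ).map((pos) => {
        'lat': pos.latitude,
        'lng': pos.longitude,
        // PII bundled "just in case" — a liability with every data point:
        'email': email,
        'device_id': deviceId,
      });
}
```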
A better approach is to apply data minimization and purpose limitation: only collect what you need, when you need it, and at the lowest fidelity that still serves the purpose. For instance, if the app only requires the total distance of a run or an approximate route, it could request location less frequently or with reduced accuracy, and it certainly shouldn’t include static personal identifiers with every data point:
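A minimized sketch matching that description, again assuming the geolocator package (the consent flag name locationSharingConsented comes from the surrounding text):

```dart
import 'package:geolocator/geolocator.dart';

Stream<Position> trackRun({required bool locationSharingConsented}) {
  if (!locationSharingConsented) {
    return const Stream<Position>.empty(); // no consent, no collection
  }
  // No email or device ID attached to the data points; the backend can
  // correlate via an opaque user ID instead.
  return Geolocator.getPositionStream(
    locationSettings: const LocationSettings(
      accuracy: LocationAccuracy.medium, // an approximate route is enough
      distanceFilter: 50,                // update only after 50+ meters
    ),
  );
}
```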
Here we’ve reduced accuracy to “medium” and added a distanceFilter so we only get updates when the user moves 50+ meters. We also avoid attaching the user’s email or device ID to every location update – if the backend needs to tie data to a user, it can often use an opaque user ID or token server-side rather than the app bundling PII in each request. We also check a consent flag (locationSharingConsented) to ensure the user allowed sharing this data (more on consent later).
Data minimization questions: A good rule of thumb is to ask yourself (or your team) a series of questions about every piece of PII your app handles:
Is each data element truly necessary for the feature to work (e.g., do we really need the user’s birthdate for a fitness app login)?
Can we use a less sensitive or less precise form of the data (e.g., using coarse location or zip code instead of exact GPS coordinates, or anonymizing an ID by hashing it)?
Can we collect data less often or discard it sooner (e.g., update location every few minutes instead of every second, or delete old records after 30 days)?
2. Ignoring Purpose Limitation and User Consent
Closely related to over-collection is the failure to enforce purpose limitation, using data in unexpected ways or without permission. In practice, this often means not honoring users' privacy choices. If your app has a toggle for “Send Anonymous Usage Data” or a user declines a permission, those preferences must be respected in code. Failing to do so isn’t just bad UX; it’s a privacy violation.
Consider an example of an e-commerce Flutter app that includes an analytics SDK. A user, during onboarding, unchecks a box that says “Share usage analytics.” However, the app’s code neglects to disable analytics collection:
Now, let’s correct it. We’ll respect the user’s preference and scrub PII from analytics events. We can disable Firebase Analytics collection entirely until consent is given, and exclude personal details from events:
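A sketch of the corrected behavior, assuming the firebase_analytics package (event and parameter names are illustrative):

```dart
import 'package:firebase_analytics/firebase_analytics.dart';

Future<void> applyAnalyticsConsent({required bool consented}) async {
  // Nothing is collected until the user opts in.
  await FirebaseAnalytics.instance.setAnalyticsCollectionEnabled(consented);
}

Future<void> logPurchase({required String productId}) async {
  // The event carries no PII — no email, name, or precise location.
  await FirebaseAnalytics.instance.logEvent(
    name: 'purchase_completed',
    parameters: {'product_id': productId},
  );
}
```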
Beyond analytics, purpose limitation means using data only for what you told the user. If they gave your app access to their contacts to find friends in the app, don’t use those contacts for marketing later. Much of this in Flutter comes down to developer discipline: keep track of why each permission or piece of data was requested. Document it, and audit your code to ensure you’re not repurposing data elsewhere.
3. Leaking Personal Data
There are different ways that PII can be leaked in a Flutter app. Let's start with the most common one: Logging.
PII in Logging and Error Messages
Logging is a double-edged sword. We developers rely on logs, printouts, and crash reports to diagnose issues. But if we’re not careful, those same logs can become a privacy nightmare. Leaky logging is such a common pitfall that OWASP explicitly calls out scenarios where apps inadvertently include PII in logs or error messages. Those logs might end up on a logging server, in someone’s console output, or exposed on a rooted device – all places an attacker or even a curious user could snoop.
A classic example is printing out user data for debugging and forgetting to remove or guard it. Consider this Flutter code that logs a user’s sign-in information:
This code is problematic. It prints the user’s email and even their password (!) to the console. In a debug build, that’s already risky if you share logs; in a release build, print still outputs to device logs (on Android, via Logcat), which can be read via adb in many cases and by other apps on rooted devices. The token is also sensitive. And even in the catch block, we log the email again. If this app uses a crash reporting service, those print statements might be collected and sent to a server or shown on a support technician’s dashboard. Database exceptions or other errors can accidentally reveal PII, too (for example, an SQL error showing part of a query with user data). So it’s not just our code but any exception message that could leak info.
Never log sensitive info in production, and sanitize error messages. In Flutter, you have a few strategies:
Use a built-in guarded logger: under the hood, it wraps prints in if (kDebugMode) { print(...); }. This ensures you don’t execute those logs in release builds.
Even better, use the dart:developer log() function with appropriate log levels. Unlike print, log() lets you attach a name, level, error object, and stack trace, and routes output through the developer log rather than standard output.
Let’s refactor the above login code with these practices:
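A refactor sketch along those lines; the login helper is a stand-in for your real auth service:

```dart
import 'dart:developer' as developer;
import 'package:flutter/foundation.dart';

// Hypothetical stand-in for a real auth call.
Future<String> login(String email, String password) async => 'token123';

Future<void> signIn(String email, String password) async {
  try {
    final token = await login(email, password);
    if (kDebugMode) {
      // No password, no token value, no raw email.
      developer.log(
        'Sign-in succeeded (token length: ${token.length})',
        name: 'auth',
        level: 800, // INFO
      );
    }
  } catch (e) {
    // Log the failure, not the credentials that caused it.
    developer.log('Sign-in failed', name: 'auth', level: 1000, error: e);
    rethrow;
  }
}
```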
In a real app, you might integrate with a logging backend or Crashlytics; ensure that what you send doesn’t contain secrets or personal data. Many crash-reporting SDKs let you set user identifiers or attach metadata; if you do, use an opaque ID or a hash instead of plain emails or names. Flutter also lets you replace the global debugPrint callback (a DebugPrintCallback) to intercept and filter logs. Here is an example:
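A sketch of overriding the global debugPrint callback to redact anything that looks like an email before it reaches the log (the pattern and function name are illustrative):

```dart
import 'package:flutter/foundation.dart';

final _emailPattern = RegExp(r'[\w.+-]+@[\w-]+\.[\w.]+');

void installSafeDebugPrint() {
  final original = debugPrint;
  debugPrint = (String? message, {int? wrapWidth}) {
    if (message == null) return;
    // Redact email-shaped substrings before forwarding to the real printer.
    original(message.replaceAll(_emailPattern, '<redacted>'),
        wrapWidth: wrapWidth);
  };
}
```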
Sanitizing On-Screen Error Messages
Displaying raw server error messages to users can inadvertently expose sensitive internal information such as database schema details, internal IP addresses, stack traces, API keys, or even portions of personal identifiable information (PII) if the error message includes user-specific data. This is a direct information leakage vulnerability.
Before presenting them to the user, you must always intercept, sanitize, and customize error messages. Generic messages are safer and, in most cases, provide a better user experience.
In this example, the _fetchDataWithPotentialError function catches potential HTTP errors. Instead of directly displaying response.body, which might contain sensitive details like {"error": "Database query failed for user ID 12345", "details": "SELECT * FROM users WHERE id='12345'"}, it provides a generic, user-friendly message while logging the full error for developers to investigate. You can also check the related article on Talsec.
PII in URL Parameters (OWASP Guidance)
Attaching sensitive data (like email addresses, session tokens, or user IDs) directly to URL query strings (e.g., GET /api/[email protected]) is a significant privacy and security risk. Even if this sounds unrelated to Flutter development, it's still relevant, and if you cannot avoid it, you should inform your team about it. This data can end up in:
Server Access Logs: Web servers typically log the complete URI of every request, including query parameters.
Browser History: If accessed via a webview, the URL with sensitive data could be stored in the device's browser history.
Analytics Referrers: If a user navigates from your app to an external site, the full referrer URL (including query parameters) might be sent to the external site's analytics.
OWASP explicitly states: "Sensitive information should never be transmitted as query parameters." To fix this, transmit sensitive data over HTTPS in the request body (for POST, PUT, and PATCH requests) or in request headers (for authentication tokens, API keys, etc.). Here is a bad example:
But by changing that to the following code, we can ensure we follow best practices:
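A sketch of the improved call with package:http; the endpoint URL is hypothetical:

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<http.Response> lookupProfile(String email, String authToken) {
  return http.post(
    Uri.parse('https://api.example.com/profile/lookup'),
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer $authToken', // token in a header, never the URL
    },
    body: jsonEncode({'email': email}),     // PII travels in the encrypted body
  );
}
```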
Always use http.post, http.put, or http.patch with a body when sending sensitive user data, and ensure your API endpoints enforce HTTPS for all communications.
Clipboard Leaks
The device's clipboard is a shared resource. Any data your app copies to the clipboard can be read by any other app running on the device that has permission to access the clipboard. This is a significant privacy concern, especially for sensitive information like passwords, OTP codes, credit card numbers, or personal notes. Recent Android and iOS versions have introduced warnings to users when an app reads the clipboard, increasing user awareness and concern about this behavior.
The best practices in this regard are usually:
Avoid Automatic Clipboard Usage: If possible, avoid automatically copying sensitive data to the clipboard.
User Consent/Action: If clipboard copy is necessary (e.g., "Copy OTP"), make it an explicit user action (e.g., a button tap).
Clear Clipboard: For extremely sensitive, short-lived data like OTPs, consider clearing the clipboard programmatically after a short, reasonable interval (e.g., 60 seconds). This prevents the data from lingering indefinitely.
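The practices above can be sketched as follows; the 60-second window and function name are illustrative:

```dart
import 'dart:async';
import 'package:flutter/services.dart';

// Called only from an explicit user action, e.g. a "Copy OTP" button.
Future<void> copyOtpToClipboard(String otp) async {
  await Clipboard.setData(ClipboardData(text: otp));
  // Best-effort cleanup so the code doesn't linger for other apps to read.
  Timer(const Duration(seconds: 60), () async {
    final current = await Clipboard.getData(Clipboard.kTextPlain);
    if (current?.text == otp) {
      await Clipboard.setData(const ClipboardData(text: ''));
    }
  });
}
```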
The _checkClipboardContent function in the example is for demonstration purposes only to show what could be read. In a real app, you would not display the raw clipboard content to the user or log it unless it was part of a particular, secure feature.
Static Analysis and Mobile Security Scanners
Even with careful coding, potential data leaks or security misconfigurations can be easy to overlook, especially in larger projects or when multiple developers are involved.
Static Analysis (Linters)
Flutter projects often use lint rules. You can enable the avoid_print lint in your analysis_options.yaml file.
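A minimal sketch of enabling the lint (the include line assumes the common flutter_lints package):

```yaml
# analysis_options.yaml
include: package:flutter_lints/flutter.yaml

linter:
  rules:
    avoid_print: true   # flags raw print() calls that can leak into release logs
```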
Why it helps: In release builds, print() statements are not automatically stripped and can still write to the system console (e.g., logcat on Android, the system log on iOS/macOS), which anyone with debugging tools or physical access to the device can access. This means sensitive data logged in release mode is potentially leaked. debugPrint() is preferred as it's throttled and primarily optimized away in release builds.
Mobile Security Scanners
For more in-depth analysis, consider using mobile application security testing (MAST) tools, also known as mobile security scanners. These tools can analyze your compiled app binaries (APK for Android, IPA for iOS) to identify potential vulnerabilities, including:
Hardcoded secrets: API keys, passwords, tokens.
Insecure data storage: Unencrypted sensitive data on the device.
Usage of risky APIs: APIs known for privacy concerns (like unencrypted network calls).
Sensitive information in logs: Although harder to detect dynamically, some scanners might flag excessive logging or specific patterns.
Enabled debug flags: Identifying if debug features were left enabled in production.
A few examples of mobile security scanners include:
An open-source, automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis, and security assessment framework capable of performing static and dynamic analysis. It's highly recommended for its comprehensive feature set.
A commercial platform offering mobile application security testing, often used by larger organizations or in continuous integration.
Another commercial solution for automated mobile application security analysis.
While these tools are beyond the scope of a simple Flutter project setup, integrating them into your CI/CD pipeline can significantly enhance your app's security posture and help catch issues that static analysis might miss.
4. Storing Sensitive Data Insecurely
If your Flutter app stores personal data on the device, you must treat that data as potentially accessible to attackers. Mobile devices can fall into attackers’ hands physically, or the user might have a rooted/jailbroken phone, or malware might be on the device.
Inadequate privacy control in storage means storing PII in plaintext on disk, failing to encrypt sensitive info, or not using the platform’s secure storage facilities. It can also mean not controlling whether that data gets backed up to the cloud.
Let’s illustrate a bad practice: storing a user’s info (say, their profile details or auth token) in plain SharedPreferences or a file:
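A sketch of the anti-pattern (the keys and helper name are illustrative):

```dart
// BAD: PII and a secret stored in plaintext via shared_preferences.
import 'package:shared_preferences/shared_preferences.dart';

Future<void> saveProfileInsecurely(String email, String authToken) async {
  final prefs = await SharedPreferences.getInstance();
  // Ends up in a plaintext XML file (Android) or plist (iOS).
  await prefs.setString('user_email', email);
  await prefs.setString('auth_token', authToken); // secret in the clear!
}
```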
By default, data stored via shared_preferences on Android ends up in an XML file in the app’s internal storage, which is sandboxed per app. iOS stores it in NSUserDefaults (also within the app sandbox). While the sandbox offers some isolation, it’s not foolproof: an attacker can read those files on a rooted Android device, and the files might be uploaded to cloud storage if the device is backed up (Android’s auto-backup or iCloud backup on iOS).
The better practice is to use secure storage for sensitive data and explicitly limit what gets backed up. Flutter provides the flutter_secure_storage package, which, under the hood, uses iOS Keychain and Android Keystore to store data that is encrypted at rest.
Encrypting Larger Data
Secure enclaves have size limitations, so you cannot rely on them alone for larger datasets containing PII (e.g., a cached user profile, a collection of sensitive notes, or medical records). Instead, you must implement your own encryption:
Encryption Algorithms: Use strong, industry-standard encryption algorithms like AES (Advanced Encryption Standard).
Key Management: The encryption key itself needs to be securely stored. This is where flutter_secure_storage comes back into play: you can generate a random AES key, store that key in flutter_secure_storage, and then use it to encrypt/decrypt your larger data stored in regular files.
Packages: While you can use Dart's PointyCastle for fine-grained control, packages like encrypt (a more common and user-friendly wrapper) simplify the process.
Here is a conceptual example of encrypting and decrypting data:
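One possible sketch, using the encrypt package for AES-GCM and flutter_secure_storage to hold the key (the class, key names, and IV/ciphertext encoding are illustrative choices, not a definitive implementation):

```dart
import 'dart:convert';
import 'package:encrypt/encrypt.dart' as enc;
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

class ProfileCipher {
  final _storage = const FlutterSecureStorage();

  /// Loads the AES key from secure storage, generating one on first use.
  Future<enc.Key> _loadOrCreateKey() async {
    var stored = await _storage.read(key: 'profile_aes_key');
    if (stored == null) {
      stored = enc.Key.fromSecureRandom(32).base64; // 256-bit key
      await _storage.write(key: 'profile_aes_key', value: stored);
    }
    return enc.Key.fromBase64(stored);
  }

  /// Encrypts a JSON map; returns "iv:ciphertext" in base64.
  Future<String> encryptJson(Map<String, dynamic> profile) async {
    final key = await _loadOrCreateKey();
    final iv = enc.IV.fromSecureRandom(12); // fresh IV per message
    final encrypter = enc.Encrypter(enc.AES(key, mode: enc.AESMode.gcm));
    final cipherText = encrypter.encrypt(jsonEncode(profile), iv: iv);
    // Store the IV alongside the ciphertext; it is not secret.
    return '${iv.base64}:${cipherText.base64}';
  }

  Future<Map<String, dynamic>> decryptJson(String blob) async {
    final key = await _loadOrCreateKey();
    final parts = blob.split(':');
    final encrypter = enc.Encrypter(enc.AES(key, mode: enc.AESMode.gcm));
    final plain =
        encrypter.decrypt64(parts[1], iv: enc.IV.fromBase64(parts[0]));
    return jsonDecode(plain) as Map<String, dynamic>;
  }
}
```

The resulting blob can then be written to a regular file; only the small AES key lives in the Keychain/Keystore.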
The goal is never to leave human-readable personal information (PII) or other sensitive data lying around unencrypted on the device's file system or in SharedPreferences.
Backup Concerns (Android)
Android's default auto-backup feature can automatically back up application data to Google Drive for devices that use this service. This includes SharedPreferences and files stored in specific app-specific directories. While convenient for users, it poses a significant privacy risk if sensitive data is unintentionally backed up.
As the OWASP M6 guidance clearly indicates: explicitly configure what data is included in backups to avoid surprises.
There are two solutions that you might want to follow:
a. Disabling Auto-Backup Entirely (android:allowBackup="false")
The simplest way to prevent sensitive data from being backed up is to disable auto-backup for your entire application. Edit android/app/src/main/AndroidManifest.xml: locate the <application> tag and add the android:allowBackup="false" attribute:
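For example (the label and icon attributes are shown for context only):

```xml
<application
    android:label="my_app"
    android:icon="@mipmap/ic_launcher"
    android:allowBackup="false">
    <!-- activities, etc. -->
</application>
```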
This is the most straightforward approach for apps handling sensitive data where you don't want any data backed up by Google Drive's auto-backup.
b. Selective Backup (Opting out specific files/directories)
If you need some data to be backed up but want to exclude sensitive files, you can:
Android's NoBackup Directory: Android provides a special directory, accessible via Context.getNoBackupFilesDir(), whose contents are not backed up. Flutter does not directly expose this via path_provider. You would need to use platform channels to access this directory from Dart and then save your files there.
Custom Backup Rules: For more granular control, you can provide a custom android:fullBackupContent="@xml/backup_rules" attribute in your AndroidManifest.xml and define an XML file (res/xml/backup_rules.xml) that specifies which directories or files to include/exclude.
This is more complex and generally only needed if you have a mix of sensitive and non-sensitive data that should be backed up. For most security-conscious apps, android:allowBackup="false" is sufficient.
c. android:hasFragileUserData Flag
This manifest attribute (if set to true) tells Android that your app contains sensitive user data. If the user uninstalls the app, the system will offer the user the choice to retain the app's data. This data can then be restored if the app is reinstalled.
Counterintuitive: You might think "fragile" data would be auto-deleted, but the opposite is true: it gives the user the choice to keep data.
Privacy Implications: For sensitive apps, you generally do not want data hanging around after uninstall. If hasFragileUserData is true, and a malicious app with the same package name is later installed (e.g., after the user uninstalls your app), it could potentially claim that leftover data.
Recommendation: For privacy, explicitly set this flag based on your intent:
android:hasFragileUserData="false" (or omit it, as false is the default): This tells Android that the app's data should be removed when the app is uninstalled. This is generally the preferred setting for apps handling sensitive information.
android:hasFragileUserData="true": Only set this if you have a strong, user-centric reason to allow users to retain data on uninstall (e.g., large game data, extensive user-created content). Ensure users are informed.
Edit android/app/src/main/AndroidManifest.xml:
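A sketch of the relevant attributes together:

```xml
<application
    android:allowBackup="false"
    android:hasFragileUserData="false">
    <!-- ... -->
</application>
```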
In general, for sensitive apps, opt to clean up data on uninstall by either setting android:hasFragileUserData="false" or by relying on the default false behavior if you are also disabling allowBackup.
Backup Concerns (iOS)
On iOS, files stored in your application's Documents directory are backed up to iCloud by default. This is similar to Android's auto-backup and poses a privacy risk for sensitive data.
Essentially, the best practices are:
Exclude from Backup: For sensitive files, mark them with the NSURLIsExcludedFromBackupKey attribute. This requires platform-specific Objective-C or Swift code interacting with the iOS file system APIs.
Temporary Directory: Store truly temporary files that don't need to persist across launches or backups in NSTemporaryDirectory(). In Flutter, getTemporaryDirectory() from path_provider maps to this.
Here is a conceptual example (iOS platform-specific code via Platform Channels):
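A conceptual sketch of the native side (the channel name "app/backup_exclusion" and method name are hypothetical):

```swift
// Hypothetical platform-channel handler that marks a file as excluded
// from iCloud backup (the modern Swift API behind NSURLIsExcludedFromBackupKey).
import Flutter
import UIKit

func excludeFromBackup(path: String) throws {
    var url = URL(fileURLWithPath: path)
    var values = URLResourceValues()
    values.isExcludedFromBackup = true
    try url.setResourceValues(values)
}

// Inside AppDelegate.application(_:didFinishLaunchingWithOptions:):
// let controller = window?.rootViewController as! FlutterViewController
// let channel = FlutterMethodChannel(name: "app/backup_exclusion",
//                                    binaryMessenger: controller.binaryMessenger)
// channel.setMethodCallHandler { call, result in
//   guard call.method == "excludeFromBackup",
//         let args = call.arguments as? [String: Any],
//         let path = args["path"] as? String else {
//     result(FlutterMethodNotImplemented); return
//   }
//   do { try excludeFromBackup(path: path); result(true) }
//   catch { result(FlutterError(code: "io_error",
//                               message: "\(error)", details: nil)) }
// }
```

From Dart, you would invoke this with a MethodChannel of the same name after writing a sensitive file.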
This is a one-time configuration, typically done when setting up your project, but it's crucial for preventing unintentional data leaks through backups.
5. Data Exposure via Third-Party SDKs and Plugins
It's easy to unintentionally leak user data through third-party SDKs and Flutter plugins. These external libraries often collect data you might not be aware of, impacting user privacy and your app's compliance.
Here's a concise guide to managing data exposure from third-party components:
Many plugins wrap native SDKs for features like analytics, crash reporting, advertising, or social login. These SDKs might automatically collect device information, user identifiers, or even sensitive data without your explicit code telling them to.
The common data collectors used in Flutter development are:
Analytics/Crash SDKs (e.g., Firebase, Crashlytics): Often collect device model, OS, and app version. Be careful if you set user IDs or if crash logs contain PII.
Advertising SDKs (e.g., AdMob, Facebook Audience Network): Collect device advertising IDs (GAID/IDFA) and potentially location for targeted ads. On iOS, IDFA requires a user prompt; on Android, respect the user's "Limit Ad Tracking" setting.
Social Login SDKs (e.g., Google Sign-in, Facebook Login): Retrieve profile info (name, email) for login. Ensure they don't track usage beyond that.
UX/Performance Tools (e.g., session replays): Can record user interactions, potentially including sensitive data entered into forms.
Treat third-party SDKs as extensions of your app’s privacy surface, and configure them just as carefully as you write your own code. Users will hold your app responsible if their data is misused, regardless of whether the culprit was your code or a library, so you must take responsibility for what plugins do. Keep SDKs up to date, too; they often release updates that improve privacy or security. Consider dropping an SDK entirely if it proves too invasive and offers no way to mitigate the problem.
6. Transmitting Personal Data Securely
This overlaps with OWASP M5, but it’s worth briefly mentioning in the privacy context. If you’ve followed the guidance from M5, your app should already be using HTTPS/TLS for all network calls and avoiding eavesdropping risks. From a privacy standpoint, two specific concerns are: not encrypting sensitive data in transit, and sending data to the wrong destination.
The first is straightforward: always use HTTPS for API calls that include PII. Never send info like passwords, tokens, or PII over unsecured channels. If you use WebViews or platform channels, apply the same rule (e.g., if loading a URL with query params, ensure it’s https and free of PII as discussed). If your app transmits extremely sensitive personal data (health records, financial info), consider an extra layer of encryption on the payload in addition to TLS – this is defense-in-depth in case the TLS is terminated somewhere you don’t fully trust. For example, some apps encrypt specific fields with a public key so that only the server can decrypt, even if the data passes through intermediate systems.
The second – sending data to the wrong place – could be as simple as accidentally logging PII to an analytics server when it was meant to go to your secure server, or having a misconfigured endpoint. Always double-check that personal data is only sent to necessary endpoints. This is more of a quality control issue. Still, it has privacy implications if you accidentally send user info to a third party when you intended it for your server.
We won't repeat this because we covered network security in detail in M5. Remember that inadequate privacy controls can manifest as plaintext communication or unintended broadcasts of PII. If you use Bluetooth or other local radios to transmit data (e.g., sending health data to a wearable), ensure those channels are also encrypted and authenticated.
Enhancing Privacy Controls with freeRASP
As Flutter developers, ensuring user data privacy isn’t just about collecting the minimum necessary information and encrypting it; it’s also about monitoring and responding to potential threats that could compromise privacy in real time. This is where tools like freeRASP come into play.
freeRASP is a powerful tool for detecting and mitigating various types of security threats, including tampering, reverse engineering, debugging, and data leaks, all of which can lead to privacy violations. By integrating freeRASP into your Flutter app, you can proactively detect any suspicious activity that might put your users' data at risk, helping you ensure compliance with privacy regulations like GDPR and CCPA.
Key Privacy Risks Addressed by freeRASP
freeRASP is designed to monitor various threats that could directly impact user privacy. Below are a few common risks that freeRASP helps mitigate:
Rooted/Jailbroken Devices: When attackers gain control of the device, they can bypass security measures and access sensitive data. freeRASP can detect if a device is rooted (Android) or jailbroken (iOS), which is a significant privacy concern.
Debugging and Reverse Engineering: Debuggers and reverse engineering tools (e.g., Frida, Xposed) can manipulate the app’s code and access personal data. freeRASP detects the presence of such tools in real time.
Tampered Apps: If an attacker modifies the app’s code (repackaging), they can introduce vulnerabilities, such as sending user data to unauthorized third-party servers. freeRASP protects against this by detecting changes to the app’s integrity.
Insecure Device Storage: Storing sensitive user data in an insecure manner (e.g., unencrypted) can lead to data leaks, especially if the device is compromised. freeRASP helps ensure that sensitive data is stored securely and inaccessible to unauthorized entities.
Simulators and Emulators: Testing apps on simulators and emulators can sometimes expose sensitive data, as these environments may not be as secure as physical devices. freeRASP detects when the app runs in an emulator, helping prevent exposure during testing.
Testing and Maintaining Privacy Controls
Privacy isn’t a one-time setup—it's an ongoing effort. Continuously verify that your app stays aligned with best practices:
Privacy-focused code reviews: For every new feature, ask: Are we collecting new data? Do we need it? How is it stored or logged? Use a checklist like the one in Section 1 and OWASP guidelines.
Automated checks: Enable lint rules (e.g., avoid_print) and write custom linters or unit tests to catch risky patterns. Add mobile security scanners to your CI pipeline to detect insecure storage or excessive permissions.
Dynamic testing: Run your app on rooted emulators to see if sensitive files (e.g., /shared_prefs) are protected. Use ADB backups or Auto Backup extractions to check for unintended data exposure.
Network inspection: Use a proxy in a test environment to verify that no PII is sent in plaintext. Test opt-out settings to ensure no data flows when disabled.
Privacy audits: Regularly review what data you collect, why, where it's stored, and who has access. This simplifies privacy policy updates and user data requests.
Dependency vigilance: Monitor package changelogs for changes in data handling. The Flutter ecosystem moves fast—stay informed.
User trust: Be transparent in your privacy policy and UI. Hidden data collection erodes trust.
Threat modeling: Think like an attacker—how could they access user data? Use that insight to fix weak spots in advance.
Privacy Controls Checklist
To make this easier to apply for your app and team, I have created a simple Privacy Controls checklist:
Protecting user privacy is a fundamental responsibility for Flutter and Dart developers. Inadequate privacy controls (M6) pose significant risks, from data breaches to legal penalties, but they can be mitigated through careful design and robust security practices. Developers can build Flutter apps prioritizing user trust and safety by minimizing data collection, securing storage and communication, obtaining user consent, and complying with regulations.
As you develop your next Flutter app, keep privacy at the forefront. Use the tools, practices, and checklist provided here to ensure your app meets functional requirements and upholds the highest standards of user privacy. The following article in this series will explore M7, continuing our journey through the OWASP Mobile Top 10 for Flutter.
<!-- BAD: Default allows full backup -->
<application
android:allowBackup="true"> <!-- Data can be extracted via ADB or cloud backup -->
</application>
# Attacker with physical access can extract app data
adb backup -f backup.ab -noapk com.yourapp.app
# Convert and extract
java -jar abe.jar unpack backup.ab backup.tar
tar -xvf backup.tar
# Now attacker has access to SharedPreferences, databases, files
cat apps/com.yourapp.app/sp/FlutterSecureStorage.xml
$ adb backup -f backup.ab -noapk com.yourapp.app
Now unlock your device and confirm the backup operation...
$ java -jar abe.jar unpack backup.ab backup.tar
Strong password not specified. Using empty password.
$ tar -xvf backup.tar
apps/com.yourapp.app/sp/FlutterSecureStorage.xml
apps/com.yourapp.app/db/app_database.db
# Get the certificate
openssl s_client -connect api.yourapp.com:443 < /dev/null 2>/dev/null | \
openssl x509 -outform DER | \
openssl dgst -sha256 -binary | \
openssl enc -base64
K87NG0IfQKa2X5VPz8cz66M5heCTKmMpXs+nDB85Nss=
<!-- BAD: Activity accessible by any app -->
<activity
android:name=".DeepLinkActivity"
android:exported="true"> <!-- Any app can launch this -->
<intent-filter>
<action android:name="android.intent.action.VIEW"/>
<category android:name="android.intent.category.DEFAULT"/>
<data android:scheme="myapp"/>
</intent-filter>
</activity>
<!-- BAD: Content Provider accessible by any app -->
<provider
android:name=".data.MyContentProvider"
android:authorities="com.yourapp.provider"
android:exported="true"
android:grantUriPermissions="true"/> <!-- Data can be read by any app -->
<!-- GOOD: Minimal permissions -->
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<!-- Only if actually needed, with maxSdkVersion where applicable -->
<uses-permission
android:name="android.permission.READ_EXTERNAL_STORAGE"
android:maxSdkVersion="32"/> <!-- Not needed on Android 13+ with scoped storage -->
import 'package:permission_handler/permission_handler.dart';
import 'package:flutter/material.dart';
class PermissionService {
/// Request permission with explanation
Future<bool> requestCameraPermission(BuildContext context) async {
// Check current status
final status = await Permission.camera.status;
if (status.isGranted) {
return true;
}
if (status.isDenied) {
// Show explanation before requesting
final shouldRequest = await _showPermissionExplanation(
context,
title: 'Camera Permission Required',
explanation: 'We need camera access to scan QR codes for secure login. '
'Your camera feed is processed locally and never stored or transmitted.',
icon: Icons.camera_alt,
);
if (!shouldRequest) return false;
final result = await Permission.camera.request();
return result.isGranted;
}
if (status.isPermanentlyDenied) {
// Guide user to settings
await _showSettingsDialog(context, 'camera');
return false;
}
return false;
}
Future<bool> _showPermissionExplanation(
BuildContext context, {
required String title,
required String explanation,
required IconData icon,
}) async {
return await showDialog<bool>(
context: context,
builder: (context) => AlertDialog(
title: Row(
children: [
Icon(icon, color: Theme.of(context).primaryColor),
const SizedBox(width: 8),
Text(title),
],
),
content: Text(explanation),
actions: [
TextButton(
onPressed: () => Navigator.pop(context, false),
child: const Text('Not Now'),
),
ElevatedButton(
onPressed: () => Navigator.pop(context, true),
child: const Text('Continue'),
),
],
),
) ?? false;
}
Future<void> _showSettingsDialog(BuildContext context, String permission) async {
await showDialog(
context: context,
builder: (context) => AlertDialog(
title: const Text('Permission Required'),
content: Text(
'The $permission permission was denied. Please enable it in Settings to use this feature.',
),
actions: [
TextButton(
onPressed: () => Navigator.pop(context),
child: const Text('Cancel'),
),
ElevatedButton(
onPressed: () {
Navigator.pop(context);
openAppSettings();
},
child: const Text('Open Settings'),
),
],
),
);
}
}
[Permission] Camera status: PermissionStatus.denied
[Permission] Showing explanation dialog…
[Permission] User accepted explanation — requesting camera…
[Permission] Camera result: PermissionStatus.granted ✅
<!-- GOOD: ios/Runner/Info.plist -->
<key>NSAppTransportSecurity</key>
<dict>
<!-- Only allow specific exceptions if absolutely necessary -->
<key>NSExceptionDomains</key>
<dict>
<key>legacy-api.example.com</key>
<dict>
<key>NSExceptionAllowsInsecureHTTPLoads</key>
<true/>
<key>NSExceptionMinimumTLSVersion</key>
<string>TLSv1.2</string>
<key>NSIncludesSubdomains</key>
<false/>
</dict>
</dict>
</dict>
// This will crash if NSCameraUsageDescription is missing from Info.plist
await Permission.camera.request();
<!-- Camera -->
<key>NSCameraUsageDescription</key>
<string>We need camera access to scan QR codes for secure login.</string>
<!-- Photo Library -->
<key>NSPhotoLibraryUsageDescription</key>
<string>We need photo library access to let you upload profile pictures.</string>
<!-- Location -->
<key>NSLocationWhenInUseUsageDescription</key>
<string>We need your location to show nearby stores.</string>
<key>NSLocationAlwaysAndWhenInUseUsageDescription</key>
<string>We need background location to send you arrival notifications.</string>
<!-- Microphone -->
<key>NSMicrophoneUsageDescription</key>
<string>We need microphone access for voice messages.</string>
<!-- Contacts -->
<key>NSContactsUsageDescription</key>
<string>We need contacts access to help you find friends.</string>
<!-- Face ID -->
<key>NSFaceIDUsageDescription</key>
<string>We use Face ID for quick and secure login.</string>
<!-- Bluetooth -->
<key>NSBluetoothAlwaysUsageDescription</key>
<string>We use Bluetooth to connect to your fitness tracker.</string>
<key>NSBluetoothPeripheralUsageDescription</key>
<string>We use Bluetooth to connect to your fitness tracker.</string>
// BAD: Data accessible even when the device is locked
let query: [String: Any] = [
kSecClass as String: kSecClassGenericPassword,
kSecAttrAccessible as String: kSecAttrAccessibleAlways, // INSECURE!
kSecAttrAccount as String: "auth_token",
kSecValueData as String: tokenData
]
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
class SecureStorageService {
// Configure with appropriate accessibility
final FlutterSecureStorage _storage = const FlutterSecureStorage(
iOptions: IOSOptions(
accessibility: KeychainAccessibility.first_unlock_this_device,
// Only accessible after first unlock, not synced to other devices
),
aOptions: AndroidOptions(
encryptedSharedPreferences: true,
),
);
Future<void> saveAuthToken(String token) async {
await _storage.write(
key: 'auth_token',
value: token,
iOptions: const IOSOptions(
accessibility: KeychainAccessibility.first_unlock_this_device,
),
);
}
Future<String?> getAuthToken() async {
return await _storage.read(key: 'auth_token');
}
Future<void> deleteAuthToken() async {
await _storage.delete(key: 'auth_token');
}
}
import 'package:flutter/foundation.dart';
import 'dart:developer' as developer;
class SecureLogger {
static void log(String message, {Object? error, StackTrace? stackTrace}) {
if (kDebugMode) {
developer.log(message, error: error, stackTrace: stackTrace);
}
// In release mode, send to crash reporting without PII
}
// BAD: Logging sensitive data - NEVER do this
static void logLoginBad(String email, String password) {
debugPrint('Login attempt: $email / $password'); // NEVER DO THIS!
}
// GOOD: Sanitized logging
static void logLoginGood(String email) {
if (kDebugMode) {
// Even in debug, mask part of the email
final maskedEmail = _maskEmail(email);
developer.log('Login attempt: $maskedEmail');
}
}
static String _maskEmail(String email) {
final parts = email.split('@');
if (parts.length != 2) return '***';
final local = parts[0];
final domain = parts[1];
if (local.length <= 2) {
return '***@$domain';
}
return '${local.substring(0, 2)}***@$domain';
}
// GOOD: Never log these - this method exists only as documentation
static void neverLog({
String? password,
String? token,
String? creditCard,
String? ssn,
String? apiKey,
}) {
// These parameters exist only to document what should NEVER be logged
throw UnsupportedError('This method should never be called');
}
}
[log] Login attempt: ma***@example.com
# .github/workflows/security-scan.yml
name: Security Configuration Scan
on: [push, pull_request]
jobs:
android-security:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check AndroidManifest.xml
run: |
# Check for debuggable
if grep -q 'android:debuggable="true"' android/app/src/main/AndroidManifest.xml; then
echo "ERROR: debuggable=true found in manifest"
exit 1
fi
# Check for allowBackup
if grep -q 'android:allowBackup="true"' android/app/src/main/AndroidManifest.xml; then
echo "WARNING: allowBackup=true - ensure this is intentional"
fi
# Check for cleartext traffic
if grep -q 'android:usesCleartextTraffic="true"' android/app/src/main/AndroidManifest.xml; then
echo "WARNING: Cleartext traffic allowed"
fi
- name: Check for hardcoded secrets
run: |
# Search for potential secrets in Dart files
if grep -rE "(api_key|apiKey|secret|password)\s*[:=]\s*['\"][^'\"]+['\"]" lib/; then
echo "WARNING: Potential hardcoded secrets found"
exit 1
fi
ios-security:
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
- name: Check Info.plist
run: |
# Check for ATS disabled
if grep -A2 'NSAllowsArbitraryLoads' ios/Runner/Info.plist | grep -q 'true'; then
echo "ERROR: NSAllowsArbitraryLoads is true"
exit 1
fi
- name: Verify entitlements
run: |
# Check entitlements file exists and is properly configured
if [ -f ios/Runner/Runner.entitlements ]; then
echo "Entitlements file found"
cat ios/Runner/Runner.entitlements
fi
import 'dart:io';
import 'package:flutter/foundation.dart';
import 'package:package_info_plus/package_info_plus.dart';
class SecurityConfigValidator {
static Future<List<String>> validateConfiguration() async {
final issues = <String>[];
// Check if running in debug mode in what should be production
if (kDebugMode && await _isProductionEnvironment()) {
issues.add('CRITICAL: Debug mode detected in production environment');
}
// Check for common misconfigurations
if (Platform.isAndroid) {
issues.addAll(await _validateAndroidConfig());
} else if (Platform.isIOS) {
issues.addAll(await _validateIOSConfig());
}
return issues;
}
static Future<bool> _isProductionEnvironment() async {
final packageInfo = await PackageInfo.fromPlatform();
// Check if package name indicates production
return !packageInfo.packageName.contains('.dev') &&
!packageInfo.packageName.contains('.staging');
}
static Future<List<String>> _validateAndroidConfig() async {
final issues = <String>[];
// Add Android-specific runtime checks
// These are limited but can catch some issues
return issues;
}
static Future<List<String>> _validateIOSConfig() async {
final issues = <String>[];
// Add iOS-specific runtime checks
return issues;
}
static void assertSecureConfiguration() {
if (kReleaseMode) {
// In release mode, validate configuration on startup
validateConfiguration().then((issues) {
if (issues.isNotEmpty) {
// Log to crash reporting (without exposing details)
// Consider preventing app from running if critical issues found
for (final issue in issues) {
debugPrint('Security Issue: $issue');
}
}
});
}
}
}
Financial Damage: Lawsuits and penalties can be costly.
Reputational Harm: Loss of user trust can drive users away.
Further Attacks: Stolen PII can fuel social engineering or other malicious activities.
Can we store or transmit an aggregate or pseudonymous value instead (e.g., store that a user is in age group “20-29” instead of storing their full DOB)?
Did the user consent to this data collection and are they aware of how it will be used?
the function can be configured to be a no-op in release mode. Flutter's log() won’t print to the console in release builds by default, preventing accidental info leakage in production.
Use a logging package that supports levels (like info, warning, error) and configure it to omit info/debug in release. Popular logger packages or other products can do this. At minimum, avoid printing PII at the info level; if something is truly sensitive (passwords, tokens), you should never log it, even in debug. If needed for debugging, log a placeholder like password: ****** or hash it.
Scrub exception messages if they might contain PII. For instance, if you catch an error from a failing API call that includes the request URL and query parameters, consider removing query parameters that might have PII (we’ll talk about avoiding PII in URLs next).
Shared Links: If a user copies and shares a link containing PII from a webview, the PII is leaked.
Proxies/Firewalls: Intermediary devices may log or inspect these parameters.
Each is a critical piece of the mobile security puzzle.
In this ninth article, we focus on M9: Insecure Data Storage.
Let me start with a story that still makes me cringe.
A few years ago, I was doing a security review of a health-tracking app. I discovered that the developers stored complete medical records in a plain JSON file. No encryption. No access controls. Readable by anyone with a few minutes of physical access to the device.
The developers weren’t malicious. They simply didn’t realize that “saving to a file” on mobile isn’t like saving to a server behind a firewall. Mobile devices get lost, stolen, backed up to cloud services, and sometimes compromised by malware. Every piece of data you store becomes a potential liability.
M9 addresses exactly this problem: the improper protection of sensitive data at rest. It’s also one of the most common vulnerabilities in mobile apps.
Flutter’s cross-platform nature adds an extra layer of complexity. When you call SharedPreferences.setString(), do you know where that data ends up on Android versus iOS? Do you know who can access it?
Let’s get into it.
Source code: All code examples from this article are available as a runnable Flutter project on GitHub:
Understanding the Threat Landscape
Before we get to the solutions, I want to give you a clear picture of what you're actually defending against. The threat isn't abstract. There are real people (and automated tools) that go after insecure storage.
Who's After Your Data?
You might be surprised how many different types of attackers care about what your app stores:
Who | Motivation | Attack Method
You might think your app isn't a big enough target to worry about. But here's the thing: automated tools don't discriminate. A script that scans backup files for stored credentials doesn't care whether your app has 1,000 users or 1,000,000.
How Attackers Get Your Data
The exploitability of insecure data storage is rated "easy" by OWASP. That's the highest exploitability rating, and it's accurate.
Physical device access: even a few minutes with an unlocked device can be enough. Think lost phones, repair shops, or handing your phone over “just to see the photo”.
Rooted/jailbroken devices: sandboxes don’t help. Malware with privileged access can read your app’s private storage.
Backup extraction: a common blind spot. Backups to iCloud or Google may include app data in plain text.
What Makes Flutter Apps Vulnerable?
The security weaknesses I see most often in Flutter apps fall into predictable patterns:
Flutter Storage Mechanisms: A Security Deep Dive
Before we get to the solutions, it helps to understand exactly what you're dealing with.
Flutter gives you several built-in ways to store data locally, and each one has a completely different security profile.
The bad news? Three of the four most commonly used options are completely insecure for sensitive data.
1. SharedPreferences / NSUserDefaults
SharedPreferences is the go-to solution for storing simple key-value pairs in Flutter.
It's convenient, easy to use, and completely insecure for sensitive data. I see it misused constantly. Developers store auth tokens, API keys, and sometimes even passwords here.
Here's what I mean by misuse:
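A minimal sketch of the anti-pattern (key names are illustrative):

```dart
import 'package:shared_preferences/shared_preferences.dart';

// ❌ BAD: secrets in plain-text SharedPreferences
Future<void> saveSession(String authToken, String apiKey) async {
  final prefs = await SharedPreferences.getInstance();
  await prefs.setString('auth_token', authToken); // readable on a rooted device
  await prefs.setString('api_key', apiKey);       // and in extracted backups
}
```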
"But wait," you might say, "isn't app data protected by the OS sandbox?" Technically yes, but let me show you where this data actually ends up.
On Android, SharedPreferences are stored in XML files at:
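For the shared_preferences plugin, that file typically lives at (application ID is illustrative):

```
/data/data/com.example.app/shared_prefs/FlutterSharedPreferences.xml
```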
And the contents look like this:
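An illustrative dump (the shared_preferences plugin prefixes keys with `flutter.`; the values here are fake):

```xml
<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<map>
    <string name="flutter.auth_token">eyJhbGciOiJIUzI1NiJ9.fake.payload</string>
    <string name="flutter.api_key">sk_live_FAKE_KEY_FOR_ILLUSTRATION</string>
</map>
```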
On iOS, NSUserDefaults are stored in plist files at:
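A typical container path (the UUID varies per install; the bundle id is illustrative):

```
/var/mobile/Containers/Data/Application/<UUID>/Library/Preferences/com.example.app.plist
```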
Both locations are easily readable on rooted/jailbroken devices, through backup extraction, or with forensic tools. The sandbox provides no protection against these attack vectors.
The rule is simple: SharedPreferences is for preferences (dark mode, language, onboarding completed), not for secrets.
Example output (what an attacker sees after extraction):
2. Local File Storage
Writing files to the app's document or cache directory is another common pattern. Without encryption, you're essentially creating a readable archive of sensitive data:
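A short sketch of the pattern (file name is illustrative):

```dart
import 'dart:convert';
import 'dart:io';

import 'package:path_provider/path_provider.dart';

// ❌ BAD: sensitive records written as plain JSON
Future<void> cacheUserProfile(Map<String, dynamic> profile) async {
  final dir = await getApplicationDocumentsDirectory();
  final file = File('${dir.path}/profile.json');
  await file.writeAsString(jsonEncode(profile)); // plain text on disk
}
```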
I've seen this pattern in production apps handling financial data, medical records, and legal documents. The developers assumed that because the file was in "their app's directory," it was safe. It wasn't.
Example output (plain JSON an attacker reads):
3. SQLite Databases
Flutter apps using sqflite or similar packages often create databases that seem more "serious" than SharedPreferences or JSON files. But without encryption, they're just as vulnerable:
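A sketch of the vulnerable setup (schema is illustrative):

```dart
import 'package:path/path.dart';
import 'package:sqflite/sqflite.dart';

// ❌ BAD: unencrypted SQLite database holding sensitive rows
Future<Database> openUserDb() async {
  final dbPath = await getDatabasesPath();
  return openDatabase(
    join(dbPath, 'user_data.db'), // readable with any SQLite viewer once extracted
    version: 1,
    onCreate: (db, version) => db.execute(
      'CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, ssn TEXT, card_number TEXT)',
    ),
  );
}
```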
The database file at /data/data/com.example.app/databases/user_data.db can be extracted and opened with any SQLite viewer. I once demonstrated this to a client by pulling their entire user database off a test device in under 30 seconds.
Example output (what an attacker dumps):
4. Application Logs
This one catches a lot of developers off guard. Debug logging that seemed harmless during development persists in production builds and can leak sensitive information:
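A sketch of the leak (here `api` stands in for your HTTP client):

```dart
// ❌ BAD: debug logging that leaks credentials in production
Future<void> login(String email, String password) async {
  print('Logging in $email / $password');   // visible via adb logcat
  final token = await api.login(email, password);
  print('Received token: $token');          // persists into release builds
}
```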
Example console output (credentials visible to anyone with ADB — Android Debug Bridge):
Platform-Specific Secure Storage
So what should you use instead? The good news is that both Android and iOS provide solid secure storage mechanisms. The key is understanding how to use them properly from Flutter.
Android: The Keystore System
Android’s Keystore system is the gold standard for secure storage on the platform.
When available (which is most modern devices), it provides hardware-backed cryptographic key storage. This means the encryption keys literally never leave a secure hardware module. Even the operating system can't extract them.
What makes Android Keystore special:
(Table: Android Keystore features, with columns Feature and Description.)
The important thing to understand is that when you use flutter_secure_storage on Android, your data is encrypted with keys that are protected by this hardware security.
Even if an attacker extracts your encrypted data, they can't decrypt it without access to the secure hardware. That requires physical possession of the specific device.
iOS: Keychain Services
iOS takes a slightly different approach with its Keychain Services. Rather than giving you direct access to encryption keys, the Keychain handles both key management and data storage. You put secrets in, and iOS keeps them encrypted with hardware-protected keys. On devices with a Secure Enclave, cryptographic keys created with the kSecAttrTokenIDSecureEnclave flag never leave that hardware—but note that general Keychain items are protected by the device’s Data Protection keys, not stored directly inside the Enclave.
One of the most important decisions you'll make with iOS Keychain is choosing the right accessibility level. This determines when your data can be accessed and whether it gets included in backups:
(Table: Keychain accessibility levels, with columns Level, Description, and Use Case.)
For most sensitive data, I recommend AfterFirstUnlockThisDeviceOnly. This provides strong protection while still allowing background operations like push notifications to access the data when needed. The "ThisDeviceOnly" suffix means the data won't be included in iCloud backups, which is important for secrets you don't want floating around in the cloud.
Implementing Secure Storage in Flutter
With the platform picture in mind, here's how to put it to work from your Flutter code.
Using flutter_secure_storage
The flutter_secure_storage package is the standard solution for secure storage in Flutter. It provides a unified API that uses Android Keystore on Android and iOS Keychain on iOS:
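A minimal sketch of the API (the option values shown are common choices, not the only ones):

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

// ✅ SECURE: Keystore/Keychain-backed key-value storage
const storage = FlutterSecureStorage(
  aOptions: AndroidOptions(encryptedSharedPreferences: true),
  iOptions: IOSOptions(
    accessibility: KeychainAccessibility.first_unlock_this_device,
  ),
);

Future<void> saveToken(String token) =>
    storage.write(key: 'auth_token', value: token);

Future<String?> readToken() => storage.read(key: 'auth_token');

Future<void> clearToken() => storage.delete(key: 'auth_token');
```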
Example output:
Secure Database with SQLCipher
For applications that need to store more structured data, plain SQLite isn't enough. The sqflite_sqlcipher package provides transparent database encryption using SQLCipher, which is widely trusted in the security community.
The key insight here is that you need to store the database encryption keys securely—which brings us back to flutter_secure_storage. Here's the pattern I recommend:
With this setup, even if someone extracts the database file, they'll just see encrypted gibberish. The decryption key lives in the hardware-backed secure storage, making the data practically unreadable without access to the specific device.
Example output:
Secure File Storage
Sometimes you need to store larger files, documents, images, or exported data that don't fit well in a database or key-value store. Here's how to handle that securely:
Example output:
Three things in this implementation are worth highlighting:
Unique IV per encryption: A fresh IV is generated for every file.
Reusing the same IV with the same key in AES-GCM breaks security guarantees.
AES-GCM mode: Galois/Counter Mode gives you confidentiality and authenticity in one pass.
Caveat — flash storage and wear leveling: Modern mobile devices use NAND flash (a solid-state memory technology) with wear-leveling controllers. Overwriting a file does not guarantee the original physical blocks are erased.
For the strongest guarantees, rely on the platform's file-level encryption (enabled by default on Android 10+ and on all iOS devices) so that discarded blocks remain encrypted even if they aren’t zeroed.
Secure Storage Architecture
Individual APIs are one thing, but in a real app you want a single place where all storage decisions live. The rule I follow: your UI and repository layers should never touch encryption keys or platform storage APIs directly. That concern belongs in a dedicated security layer:
Complete Secure Storage Service
Here's what that security layer looks like in practice. It supports three sensitivity tiers—because a user preference and a banking credential shouldn't be stored the same way:
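A condensed sketch of that layer (the tier names and routing are illustrative; a full service would add file encryption for larger payloads):

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'package:shared_preferences/shared_preferences.dart';

enum Sensitivity { preference, private, secret }

class SecureStorageService {
  final _secure = const FlutterSecureStorage();

  Future<void> write(String key, String value, Sensitivity tier) async {
    switch (tier) {
      case Sensitivity.preference:
        // Dark mode, language, onboarding flags: plain storage is fine
        final prefs = await SharedPreferences.getInstance();
        await prefs.setString(key, value);
      case Sensitivity.private:
      case Sensitivity.secret:
        // Tokens, credentials, PII: always Keystore/Keychain-backed
        await _secure.write(key: key, value: value);
    }
  }
}
```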
Usage Example: Authentication Service
Preventing Data Leakage
Secure storage protects data at rest, but sensitive information leaks through side channels that most developers don't think about until something goes wrong. I've seen each of the following cause real incidents: debug logs submitted in support tickets, screenshots captured by the OS app switcher, clipboard content harvested by other apps.
1. Secure Logging
A print() call that felt harmless in development quietly persists into your release build and ends up in support logs, ADB output, and crash reports. Here's a logger that makes it structurally difficult to accidentally expose sensitive values:
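A trimmed sketch of such a logger (the redaction patterns are illustrative; extend them for your domain):

```dart
import 'package:flutter/foundation.dart';

class SecureLogger {
  // Patterns that look like secrets: JWTs, card numbers, key=value secrets
  static final _sensitive = <RegExp>[
    RegExp(r'eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+'), // JWT shape
    RegExp(r'\b(?:\d[ -]?){13,16}\b'),                            // card-like digits
    RegExp(r'(password|token|secret)\s*[:=]\s*\S+', caseSensitive: false),
  ];

  static void log(String message) {
    if (kReleaseMode) return; // structurally silent in release builds
    var safe = message;
    for (final pattern in _sensitive) {
      safe = safe.replaceAll(pattern, '[REDACTED]');
    }
    debugPrint(safe);
  }
}
```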
Output (debug console):
This logger does several important things: it completely disables logging in release mode, and even in debug mode, it automatically redacts patterns that look like tokens, credit cards, SSNs (Social Security Numbers), and passwords. You can expand the pattern matching for your specific use cases.
2. Clipboard Security
When users copy sensitive data (like account numbers or recovery codes), that data sits in the system clipboard where any app can read it. Here's how to auto-clear the clipboard after a short period:
Thirty seconds is usually enough time for users to paste what they copied, but short enough to limit exposure. For extremely sensitive data, you might reduce this to 10–15 seconds.
3. Screenshot Protection
For apps handling sensitive information (banking, health, legal), you should prevent screenshots of sensitive screens. Here's how to implement this with platform channels:
Android Implementation (MainActivity.kt):
On Android, we use the FLAG_SECURE window flag, which prevents screenshots and also hides the app content in the recent apps list:
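A minimal sketch of the Android side (the channel and method names are illustrative):

```kotlin
import android.view.WindowManager
import io.flutter.embedding.android.FlutterActivity
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel

class MainActivity : FlutterActivity() {
    private val channel = "app.security/screenshield" // illustrative channel name

    override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        MethodChannel(flutterEngine.dartExecutor.binaryMessenger, channel)
            .setMethodCallHandler { call, result ->
                when (call.method) {
                    "enableSecure" -> {
                        // FLAG_SECURE blocks screenshots and hides content in Recents
                        window.addFlags(WindowManager.LayoutParams.FLAG_SECURE)
                        result.success(null)
                    }
                    "disableSecure" -> {
                        window.clearFlags(WindowManager.LayoutParams.FLAG_SECURE)
                        result.success(null)
                    }
                    else -> result.notImplemented()
                }
            }
    }
}
```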
iOS Implementation (AppDelegate.swift):
iOS doesn't have an exact equivalent, but we can show a blur or overlay when the app goes to the background (which is when screenshots are typically captured):
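A sketch of the iOS side, assuming the standard FlutterAppDelegate entry point:

```swift
import UIKit
import Flutter

@main
@objc class AppDelegate: FlutterAppDelegate {
  private var blurView: UIVisualEffectView?

  // Cover the window before iOS snapshots it for the app switcher
  override func applicationWillResignActive(_ application: UIApplication) {
    let blur = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
    blur.frame = window?.bounds ?? .zero
    window?.addSubview(blur)
    blurView = blur
  }

  override func applicationDidBecomeActive(_ application: UIApplication) {
    blurView?.removeFromSuperview()
    blurView = nil
  }
}
```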
4. Secure Memory Handling
This is an advanced topic, but important for truly sensitive data. When you store a password or key in a Dart string, that string sits in memory until the garbage collector cleans it up, which might be a while. On a compromised device, memory can be dumped and searched for sensitive patterns.
Here's a pattern for handling sensitive data that clears memory as soon as it's no longer needed:
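A small sketch of the idea (class and method names are illustrative):

```dart
import 'dart:typed_data';

/// Wraps sensitive bytes and zeroes them when no longer needed.
class SecureBytes {
  Uint8List? _bytes;
  SecureBytes(this._bytes);

  /// Run [action] against the secret while it is still alive.
  T use<T>(T Function(Uint8List bytes) action) {
    final bytes = _bytes;
    if (bytes == null) throw StateError('Secret already disposed');
    return action(bytes);
  }

  /// Overwrite the buffer so the secret does not linger on the heap.
  void dispose() {
    final bytes = _bytes;
    if (bytes != null) {
      bytes.fillRange(0, bytes.length, 0);
    }
    _bytes = null;
  }
}
```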
Note that Dart strings are immutable, so we can't truly "clear" them—we work with byte arrays instead. This is a limitation of the language, but using this pattern still reduces the window of exposure.
Example output:
Real Attack Scenarios & Prevention
Abstract advice only gets you so far. Here's what these attacks actually look like when they happen—and exactly what stops each one.
Scenario 1: Backup Extraction Attack
The attack: An attacker gains access to a user's iCloud or Google account (through phishing, password reuse, etc.) and downloads device backups. They extract your app's data from the backup and find authentication tokens stored in plain SharedPreferences.
What happens next: The attacker uses those tokens to access the user's account, potentially stealing personal information, making unauthorized purchases, or worse.
With the secure implementation, even if the attacker gets the backup, the tokens are encrypted with device-specific keys that aren't included in the backup.
Scenario 2: Root/Jailbreak Data Extraction
The attack: A user's device is compromised by malware (perhaps from a malicious app or sideloaded software). The malware has root access and reads files from your app's private directory.
Example output (device security checks):
Scenario 3: Logging Credential Leak
The attack: During a support investigation, someone shares device logs with a third party (or posts them online asking for help). Those logs contain authentication tokens that were printed during debugging and never removed.
This one happens more often than you'd think. I've seen production apps that logged full API responses including auth tokens to the console.
Security Checklist
I run through something like this before every major release. It takes twenty minutes and has caught real issues more than once:
Encryption
Access control
Backup security
Logging
Memory
Root/Jailbreak
Data leakage
Conclusion
If there's one thing I hope you take away from this article, it's that data storage security isn't optional—it's a fundamental responsibility we have to our users. When someone trusts your app with their health records, financial information, or personal details, they're trusting you to protect that data. Insecure storage betrays that trust.
The good news is that Flutter gives us access to excellent platform-level security through packages like flutter_secure_storage. The hardware-backed encryption provided by Android Keystore and iOS Keychain is genuinely difficult to break when used correctly. You just need to use it.
Remember: the effort you put into secure storage pays dividends in user trust, regulatory compliance, and avoiding the nightmare of a data breach disclosure. It's one of those areas where doing it right from the start is much easier than fixing it after a security incident.
In the next article, we'll explore M10: Insufficient Cryptography, where we'll dive into proper cryptographic implementations, key management, and the common crypto mistakes I see in Flutter applications. Because encryption done wrong can be worse than no encryption at all—it gives you a false sense of security.
Resources
Stay secure, and remember: when it comes to user data, paranoia is a feature, not a bug. 🔐
// BAD: Over-collecting precise location and personal info unnecessarily
await Geolocator.requestPermission();
Geolocator.getPositionStream( // continuous high-precision tracking
locationSettings: LocationSettings(accuracy: LocationAccuracy.high)
).listen((Position pos) {
// Sending precise lat/long and device ID on every update
sendToServer({
"lat": pos.latitude,
"lng": pos.longitude,
"deviceId": deviceId, // e.g., device identifier
"userEmail": loggedInUser.email // including email in tracking data
});
});
// GOOD: Minimized data collection (coarser updates, limited identifiers)
await Geolocator.requestPermission();
Geolocator.getPositionStream(
locationSettings: LocationSettings(accuracy: LocationAccuracy.medium, distanceFilter: 50)
).listen((Position pos) {
final locationData = {
"lat": pos.latitude,
"lng": pos.longitude,
// No constant PII like email or deviceId in each payload
};
if (appSettings.locationSharingConsented) {
sendToServer(locationData);
}
});
// BAD: Ignoring user preference for analytics opt-out
FirebaseAnalytics analytics = FirebaseAnalytics.instance;
// ... Later in the app, regardless of user consent:
analytics.logEvent(name: "view_item", parameters: {
"item_id": item.id,
"user_email": user.email, // PII being sent to analytics
});
// GOOD: Honor user opt-out and limit PII in analytics
FirebaseAnalytics analytics = FirebaseAnalytics.instance;
// Disable analytics collection by default (e.g., at app start)
await analytics.setAnalyticsCollectionEnabled(false);
// ... Later, when loading user preferences:
bool consentGiven = await getUserConsentPreference();
await analytics.setAnalyticsCollectionEnabled(consentGiven);
// When logging events, avoid including direct PII
if (consentGiven) {
analytics.logEvent(name: "view_item", parameters: {
"item_id": item.id,
// No email or personal info; use a non-PII user ID if needed
});
}
// BAD: Logging sensitive info in production
void loginUser(String email, String password) async {
print('Logging in user: $email with password: $password'); // Debug log
try {
final token = await api.login(email, password);
print('Auth success, token=$token for $email'); // Another sensitive log
} catch (e, stack) {
print('Login failed for $email: $e'); // Logs email in error
rethrow;
}
}
import 'dart:developer'; // for log()
import 'package:flutter/foundation.dart'; // for kReleaseMode
void loginUser(String email, String password) async {
// Only log non-PII info, gated to debug builds
if (!kReleaseMode) {
log('Attempting login (user identifier withheld)');
}
try {
final token = await api.login(email, password);
log('Auth success', level: 800); // level indicates severity; no token, no email
} catch (e, stack) {
// Log only the error type, never the PII
log('Login failed: ${e.runtimeType}', error: e, stackTrace: stack, level: 1000);
// (Alternatively, report the error via Crashlytics without exposing PII in the message)
rethrow;
}
}
import 'package:flutter/material.dart';
import 'dart:developer' as developer;
void main() {
// Option 1: Simple interception and prefixing.
// Flutter exposes debugPrint as a mutable global (DebugPrintCallback),
// so we can swap in our own implementation before runApp():
// debugPrint = (String? message, {int? wrapWidth}) {
//   developer.log('[APP_LOG] $message', name: 'MyLogger');
// };
// Option 2: More advanced interception with a custom callback that buffers
// messages. Note: MaterialApp has no debugPrint parameter; assigning the
// global debugPrint is the supported hook.
String logBuffer = '';
void myCustomDebugPrintCallback(String? message, {int? wrapWidth}) {
logBuffer += '$message\n';
// You could send this to a remote logging service,
// save to a file, display in a custom console, etc.
print('Intercepted and buffered: $message'); // print() bypasses debugPrint, so no recursion
if (logBuffer.length > 500) {
logBuffer = ''; // Clear buffer to prevent excessive memory usage
}
}
debugPrint = myCustomDebugPrintCallback;
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Log Interception Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const MyHomePage(title: 'Log Interception Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
void initState() {
super.initState();
debugPrint('MyHomePage initState called!');
developer.log('Using developer.log in initState', name: 'MY_APP');
}
@override
Widget build(BuildContext context) {
debugPrint('MyHomePage build called!');
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
ElevatedButton(
onPressed: () {
debugPrint('Button pressed log message!');
print('Using regular print (not intercepted by the debugPrint override)');
},
child: const Text('Press Me'),
),
ElevatedButton(
onPressed: () {
developer.log('This is a developer.log message!', name: 'ANOTHER_LOGGER');
},
child: const Text('Press for developer.log'),
),
],
),
),
);
}
}
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;
import 'dart:convert';
class ErrorHandlingDemo extends StatefulWidget {
const ErrorHandlingDemo({super.key});
@override
State<ErrorHandlingDemo> createState() => _ErrorHandlingDemoState();
}
class _ErrorHandlingDemoState extends State<ErrorHandlingDemo> {
String _errorMessage = '';
Future<void> _fetchDataWithPotentialError() async {
setState(() {
_errorMessage = 'Loading...';
});
try {
// Simulate a network request that might return an error
// For demonstration, we'll simulate an internal server error response
final response = await http.get(Uri.parse('https://api.example.com/bad-endpoint'));
if (response.statusCode == 200) {
// Process successful response
setState(() {
_errorMessage = 'Data fetched successfully!';
});
} else if (response.statusCode >= 400) {
// --- BAD PRACTICE: Displaying raw server error ---
// setState(() {
// _errorMessage = 'Server Error: ${response.body}';
// });
// --- GOOD PRACTICE: Sanitize and show generic message ---
String userFriendlyMessage = 'An unexpected error occurred. Please try again later.';
debugPrint('Server returned error status ${response.statusCode}: ${response.body}'); // Log for debugging, not for user display
if (response.statusCode == 401) {
userFriendlyMessage = 'You are not authorized to perform this action.';
} else if (response.statusCode == 404) {
userFriendlyMessage = 'The requested resource was not found.';
} else {
// For 5xx errors or other unhandled 4xx errors
// You might also parse the error body if it's a known structured error
try {
final errorJson = jsonDecode(response.body);
if (errorJson['message'] != null && errorJson['message'] is String) {
userFriendlyMessage = 'Error: ${errorJson['message']}';
}
} catch (e) {
// If parsing fails, stick with the generic message
}
}
setState(() {
_errorMessage = userFriendlyMessage;
});
}
} catch (e) {
// Handle network errors (no internet, connection issues)
setState(() {
_errorMessage = 'Network Error: Could not connect to the server. Please check your internet connection.';
});
debugPrint('Network request failed: $e');
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Error Handling Demo')),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
ElevatedButton(
onPressed: _fetchDataWithPotentialError,
child: const Text('Trigger API Call'),
),
const SizedBox(height: 20),
Text(
_errorMessage,
style: const TextStyle(color: Colors.red, fontSize: 16),
textAlign: TextAlign.center,
),
],
),
),
);
}
}
import 'package:http/http.dart' as http;
Future<void> registerUserBad(String email, String password) async {
// DANGER: PII in URL query parameters!
final uri = Uri.parse('https://api.example.com/register?email=$email&password=$password');
try {
final response = await http.get(uri); // Even worse with GET for sensitive data
if (response.statusCode == 200) {
debugPrint('Registration successful (BAD)');
} else {
debugPrint('Registration failed (BAD): ${response.body}');
}
} catch (e) {
debugPrint('Error: $e');
}
}
import 'package:http/http.dart' as http;
import 'dart:convert'; // For jsonEncode
import 'package:flutter/foundation.dart'; // For debugPrint
Future<void> registerUserGood(String email, String password) async {
final uri = Uri.parse('https://api.example.com/register'); // No PII in URL
try {
final response = await http.post(
uri,
headers: {
'Content-Type': 'application/json',
},
body: jsonEncode({ // PII safely in the request body
'email': email,
'password': password,
}),
);
if (response.statusCode == 200) {
debugPrint('Registration successful (GOOD)');
} else {
debugPrint('Registration failed (GOOD): ${response.body}');
}
} catch (e) {
debugPrint('Error: $e');
}
}
// Usage example:
void main() {
runApp(MaterialApp(
home: Scaffold(
body: Center(
child: Column(
children: [
ElevatedButton(
onPressed: () => registerUserBad('[email protected]', 'mysecurepassword'),
child: const Text('Register (BAD: PII in URL)'),
),
ElevatedButton(
onPressed: () => registerUserGood('[email protected]', 'mysecurepassword'),
child: const Text('Register (GOOD: PII in Body)'),
),
],
),
),
),
));
}
import 'package:flutter/material.dart';
import 'package:flutter/services.dart'; // For Clipboard
class ClipboardDemo extends StatefulWidget {
const ClipboardDemo({super.key});
@override
State<ClipboardDemo> createState() => _ClipboardDemoState();
}
class _ClipboardDemoState extends State<ClipboardDemo> {
String _otpCode = '123456'; // Example sensitive data
String _clipboardStatus = 'Clipboard is empty or has other content.';
void _copyOtpAndClear() {
Clipboard.setData(ClipboardData(text: _otpCode));
setState(() {
_clipboardStatus = 'OTP copied to clipboard! Will clear in 10 seconds.';
});
// Schedule clearing the clipboard after 10 seconds
Future.delayed(const Duration(seconds: 10), () {
// Important: Check if the data is still what we put there,
// to avoid clearing something else the user copied.
Clipboard.getData(Clipboard.kTextPlain).then((data) {
if (data?.text == _otpCode) {
Clipboard.setData(const ClipboardData(text: '')); // Clear the clipboard
setState(() {
_clipboardStatus = 'Clipboard cleared.';
});
debugPrint('OTP cleared from clipboard.');
} else {
setState(() {
_clipboardStatus = 'Clipboard content changed. Not cleared by us.';
});
debugPrint('Clipboard content changed, not clearing OTP.');
}
});
});
}
void _checkClipboardContent() async {
final data = await Clipboard.getData(Clipboard.kTextPlain);
if (data != null && data.text != null && data.text!.isNotEmpty) {
// DANGER: Do not display sensitive clipboard content directly!
// This is for demo purposes to show what could be read.
setState(() {
_clipboardStatus = 'Clipboard currently contains: "${data.text}"';
});
debugPrint('Clipboard content: ${data.text}');
} else {
setState(() {
_clipboardStatus = 'Clipboard is empty or has non-text content.';
});
debugPrint('Clipboard empty.');
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Clipboard Security Demo')),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text('Simulated OTP: $_otpCode'),
const SizedBox(height: 20),
ElevatedButton(
onPressed: _copyOtpAndClear,
child: const Text('Copy OTP (and clear after delay)'),
),
const SizedBox(height: 20),
ElevatedButton(
onPressed: _checkClipboardContent,
child: const Text('Check Clipboard Content'),
),
const SizedBox(height: 20),
Padding(
padding: const EdgeInsets.symmetric(horizontal: 20.0),
child: Text(
_clipboardStatus,
textAlign: TextAlign.center,
style: const TextStyle(fontSize: 14),
),
),
],
),
),
);
}
}
# analysis_options.yaml
include: package:flutter_lints/flutter.yaml
linter:
rules:
# Enable the avoid_print rule to flag all uses of print()
avoid_print: true
# You might also consider these for security/privacy
avoid_returning_null_for_future: true # Note: obsolete under null safety; removed from recent SDK lint sets
# You can add other relevant rules based on your project's needs and security policies
# avoid_private_typedef_functions: true # Helps with clearer API boundaries
# no_leading_underscores_for_local_identifiers: true # Can improve readability for local variables
# You might also want to disable rules you find too restrictive, e.g.:
# prefer_const_constructors: false
analyzer:
exclude:
- '**/*.g.dart'
- '**/*.freezed.dart'
- '**/*.gr.dart' # For auto_route
errors:
# Treat `avoid_print` as an error, not just a warning
avoid_print: error
// BAD: Storing PII in plain text preferences
final prefs = await SharedPreferences.getInstance();
await prefs.setString('user_email', user.email);
await prefs.setString('auth_token', user.authToken);
// Also writing a full profile JSON to a file in documents directory
final docsDir = await getApplicationDocumentsDirectory();
File('${docsDir.path}/profile.json').writeAsString(jsonEncode(user.profile));
// GOOD: Using secure storage for sensitive info
final secureStorage = FlutterSecureStorage();
// Store auth token and email securely (encrypted in Keychain/Keystore)
await secureStorage.write(key: 'user_email', value: user.email);
await secureStorage.write(key: 'auth_token', value: user.authToken);
// If we must store profile data, consider encrypting it or marking it no-backup
final docsDir = await getApplicationDocumentsDirectory();
final profileFile = File('${docsDir.path}/profile.json');
// NOTE: base64 is an encoding, NOT encryption; it is shown only as a placeholder.
// In real code, encrypt the JSON with AES (e.g., via a DataEncryptionService) before writing.
final encryptedProfile = base64Encode(utf8.encode(jsonEncode(user.profile)));
await profileFile.writeAsString(encryptedProfile);
// On Android, keep this file out of backups: auto-backup includes most app
// directories by default, so add an exclusion in backup_rules.xml (or set
// android:allowBackup="false" in AndroidManifest.xml) to keep it out.
import 'dart:typed_data';
import 'package:encrypt/encrypt.dart';
import 'package:flutter/foundation.dart'; // for debugPrint
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'package:path_provider/path_provider.dart';
import 'dart:io';
class DataEncryptionService {
final FlutterSecureStorage _secureStorage = const FlutterSecureStorage();
static const String _encryptionKeyName = 'data_encryption_key';
late Key _encryptionKey; // Our AES encryption key
// Initialize encryption key: either load from secure storage or generate a new one
Future<void> init() async {
String? keyString = await _secureStorage.read(key: _encryptionKeyName);
if (keyString == null) {
// Generate a new AES key (256-bit for AES-256)
_encryptionKey = Key.fromSecureRandom(32);
await _secureStorage.write(key: _encryptionKeyName, value: _encryptionKey.base64);
debugPrint('New encryption key generated and stored securely.');
} else {
_encryptionKey = Key.fromBase64(keyString);
debugPrint('Encryption key loaded from secure storage.');
}
}
// Encrypt data
Encrypted encryptData(String plainText) {
final iv = IV.fromSecureRandom(16); // Initialization Vector for AES
final encrypter = Encrypter(AES(_encryptionKey, mode: AESMode.cbc)); // CBC mode; note it lacks authentication, so AESMode.gcm is preferable for new code
final encrypted = encrypter.encrypt(plainText, iv: iv);
// Combine IV and encrypted data for storage. IV is crucial for decryption.
return Encrypted(Uint8List.fromList(iv.bytes + encrypted.bytes));
}
// Decrypt data
String decryptData(Encrypted encryptedData) {
final ivBytes = encryptedData.bytes.sublist(0, 16); // Extract IV
final encryptedBytes = encryptedData.bytes.sublist(16); // Extract encrypted data
final iv = IV(ivBytes);
final encrypter = Encrypter(AES(_encryptionKey, mode: AESMode.cbc));
return encrypter.decrypt(Encrypted(encryptedBytes), iv: iv);
}
// Example of saving and loading encrypted data to/from a file
Future<void> saveEncryptedToFile(String filename, String data) async {
await init(); // Ensure key is loaded
final encrypted = encryptData(data);
final directory = await getApplicationDocumentsDirectory();
final file = File('${directory.path}/$filename');
await file.writeAsBytes(encrypted.bytes);
debugPrint('Encrypted data saved to ${file.path}');
}
Future<String?> loadEncryptedFromFile(String filename) async {
await init(); // Ensure key is loaded
try {
final directory = await getApplicationDocumentsDirectory();
final file = File('${directory.path}/$filename');
if (!await file.exists()) {
debugPrint('File does not exist: ${file.path}');
return null;
}
final bytes = await file.readAsBytes();
final encrypted = Encrypted(bytes);
final decrypted = decryptData(encrypted);
debugPrint('Decrypted data loaded from ${file.path}');
return decrypted;
} catch (e) {
debugPrint('Error loading/decrypting file: $e');
return null;
}
}
}
// Usage example:
/*
void main() async {
WidgetsFlutterBinding.ensureInitialized(); // Required for path_provider
final encryptionService = DataEncryptionService();
await encryptionService.init(); // Load or generate encryption key
runApp(MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('Large Data Encryption Demo')),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
ElevatedButton(
onPressed: () async {
await encryptionService.saveEncryptedToFile('user_profile.dat', '{"name": "John Doe", "email": "[email protected]", "address": "123 Main St"}');
},
child: const Text('Save Encrypted Profile'),
),
ElevatedButton(
onPressed: () async {
String? profile = await encryptionService.loadEncryptedFromFile('user_profile.dat');
ScaffoldMessenger.of(context).showSnackBar(
SnackBar(content: Text('Loaded Profile: ${profile ?? "N/A"}'))
);
},
child: const Text('Load Encrypted Profile'),
),
],
),
),
),
));
}
*/
// ✅ SECURE: Encrypted SQLite database
import 'package:sqflite_sqlcipher/sqflite.dart';
import 'package:path/path.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'dart:math';
class SecureDatabase {
Database? _database;
final _secureStorage = const FlutterSecureStorage();
static const _dbKeyStorageKey = 'database_encryption_key';
// Generate or retrieve database encryption key
Future<String> _getDatabaseKey() async {
String? key = await _secureStorage.read(key: _dbKeyStorageKey);
if (key == null) {
// Generate a strong random key
key = _generateSecureKey(32);
await _secureStorage.write(key: _dbKeyStorageKey, value: key);
}
return key;
}
String _generateSecureKey(int length) {
const charset = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#\$%^&*';
final random = Random.secure();
return List.generate(length, (_) => charset[random.nextInt(charset.length)]).join();
}
Future<Database> get database async {
if (_database != null) return _database!;
final dbPath = await getDatabasesPath();
final encryptionKey = await _getDatabaseKey();
_database = await openDatabase(
join(dbPath, 'secure_user_data.db'),
version: 1,
password: encryptionKey, // SQLCipher encryption
onCreate: (db, version) async {
await db.execute('''
CREATE TABLE sensitive_data (
id INTEGER PRIMARY KEY AUTOINCREMENT,
data_type TEXT NOT NULL,
encrypted_value TEXT NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
)
''');
// Create index for faster lookups
await db.execute(
'CREATE INDEX idx_data_type ON sensitive_data(data_type)'
);
},
);
return _database!;
}
Future<void> close() async {
await _database?.close();
_database = null;
}
}
[SecureDB] ✅ Generated encryption key (length: 32).
[SecureDB] ✅ Key stored in flutter_secure_storage (hardware-backed).
[SecureDB] ✅ Database opened with SQLCipher encryption.
[SecureDB] Path: <databases>/secure_user_data.db
[SecureDB] Cipher: AES-256 (via SQLCipher)
[SecureDB] Key storage: database_encryption_key → Keystore/Keychain
[SecureDB]
[SecureDB] Even if extracted, the .db file is unreadable without
[SecureDB] the device-specific hardware-backed key.
// ✅ SECURE: Encrypted file storage
import 'dart:convert';
import 'dart:io';
import 'dart:math';
import 'dart:typed_data';
import 'package:encrypt/encrypt.dart' as encrypt;
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'package:path_provider/path_provider.dart';

class SecureFileStorage {
  final _secureStorage = const FlutterSecureStorage();
  static const _fileKeyStorageKey = 'file_encryption_key';

  Future<encrypt.Key> _getEncryptionKey() async {
    String? keyBase64 = await _secureStorage.read(key: _fileKeyStorageKey);
    if (keyBase64 == null) {
      // Generate a new AES-256 key
      final key = encrypt.Key.fromSecureRandom(32);
      keyBase64 = key.base64;
      await _secureStorage.write(key: _fileKeyStorageKey, value: keyBase64);
      return key;
    }
    return encrypt.Key.fromBase64(keyBase64);
  }

  /// Encrypt and save data to a file
  Future<void> saveEncryptedFile(String filename, String data) async {
    final key = await _getEncryptionKey();
    // Generate a unique IV for each encryption
    final iv = encrypt.IV.fromSecureRandom(16);
    // Create encrypter with AES-GCM (authenticated encryption)
    final encrypter = encrypt.Encrypter(
      encrypt.AES(key, mode: encrypt.AESMode.gcm),
    );
    // Encrypt the data
    final encrypted = encrypter.encrypt(data, iv: iv);
    // Combine IV and ciphertext for storage
    final combined = {
      'iv': iv.base64,
      'data': encrypted.base64,
    };
    // Save to file
    final directory = await getApplicationDocumentsDirectory();
    final file = File('${directory.path}/$filename.enc');
    await file.writeAsString(jsonEncode(combined));
  }

  /// Read and decrypt data from a file
  Future<String?> readEncryptedFile(String filename) async {
    try {
      final directory = await getApplicationDocumentsDirectory();
      final file = File('${directory.path}/$filename.enc');
      if (!await file.exists()) return null;
      final content = await file.readAsString();
      final combined = jsonDecode(content) as Map<String, dynamic>;
      final key = await _getEncryptionKey();
      final iv = encrypt.IV.fromBase64(combined['iv'] as String);
      final encrypter = encrypt.Encrypter(
        encrypt.AES(key, mode: encrypt.AESMode.gcm),
      );
      return encrypter.decrypt64(combined['data'] as String, iv: iv);
    } catch (e) {
      // Decryption failed: tampered data, wrong key, or corrupted file.
      // Avoid logging the error — it may contain ciphertext fragments.
      return null;
    }
  }

  /// Securely delete a file (overwrite before deletion)
  Future<void> secureDelete(String filename) async {
    final directory = await getApplicationDocumentsDirectory();
    final file = File('${directory.path}/$filename.enc');
    if (await file.exists()) {
      // Overwrite with random data before deletion
      final length = await file.length();
      final random = Random.secure();
      final randomData = List.generate(length, (_) => random.nextInt(256));
      await file.writeAsBytes(Uint8List.fromList(randomData));
      // Now delete
      await file.delete();
    }
  }
}
[SecureFile] ✅ AES-256-GCM encryption:
[SecureFile] Plaintext length : 48 bytes
[SecureFile] IV (base64) : <random-base64>
[SecureFile] Ciphertext (b64) : <random-base64>…
[SecureFile] Key stored in : flutter_secure_storage
[SecureFile]
[SecureFile] File written: <documents>/medical.enc
[SecureFile] Format: { "iv": "<base64>", "data": "<base64>" }
[SecureFile] 🗑️ Secure delete:
[SecureFile] Step 1: Overwrite 2048 bytes with random data
[SecureFile] Step 2: Delete file from filesystem
[SecureFile] ⚠️ Note: flash wear-leveling means original blocks
[SecureFile] may persist — rely on full-disk encryption too.
// ✅ SECURE: Comprehensive secure storage service
import 'dart:convert';
import 'package:flutter/foundation.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'package:local_auth/local_auth.dart';

enum StorageSensitivity {
  /// Standard encryption, no biometrics
  standard,

  /// Requires biometric authentication
  biometric,

  /// Highest security: biometrics + device-only storage
  critical,
}

class SecureStorageService {
  final FlutterSecureStorage _standardStorage;
  final FlutterSecureStorage _biometricStorage;
  final FlutterSecureStorage _criticalStorage;
  final LocalAuthentication _localAuth;

  // Note: the biometric-backed AndroidOptions constructor used below is
  // version-dependent; check your flutter_secure_storage release for the
  // exact API before copying this verbatim.
  SecureStorageService()
      : _standardStorage = const FlutterSecureStorage(
          aOptions: AndroidOptions(
            encryptedSharedPreferences: true,
          ),
          iOptions: IOSOptions(
            accessibility: KeychainAccessibility.first_unlock,
          ),
        ),
        _biometricStorage = FlutterSecureStorage(
          aOptions: AndroidOptions.biometric(
            enforceBiometrics: true,
            biometricPromptTitle: 'Verify your identity',
          ),
          iOptions: const IOSOptions(
            accessibility: KeychainAccessibility.unlocked,
          ),
        ),
        _criticalStorage = FlutterSecureStorage(
          aOptions: AndroidOptions.biometric(
            enforceBiometrics: true,
            biometricPromptTitle: 'High security verification required',
          ),
          iOptions: const IOSOptions(
            accessibility: KeychainAccessibility.passcode,
            synchronizable: false, // Don't sync to iCloud
          ),
        ),
        _localAuth = LocalAuthentication();

  FlutterSecureStorage _getStorage(StorageSensitivity sensitivity) {
    switch (sensitivity) {
      case StorageSensitivity.standard:
        return _standardStorage;
      case StorageSensitivity.biometric:
        return _biometricStorage;
      case StorageSensitivity.critical:
        return _criticalStorage;
    }
  }

  /// Store a string value securely
  Future<void> write({
    required String key,
    required String value,
    StorageSensitivity sensitivity = StorageSensitivity.standard,
  }) async {
    final storage = _getStorage(sensitivity);
    await storage.write(key: key, value: value);
  }

  /// Read a string value
  Future<String?> read({
    required String key,
    StorageSensitivity sensitivity = StorageSensitivity.standard,
  }) async {
    final storage = _getStorage(sensitivity);
    return await storage.read(key: key);
  }

  /// Store a complex object as JSON
  Future<void> writeObject<T>({
    required String key,
    required T value,
    required Map<String, dynamic> Function(T) toJson,
    StorageSensitivity sensitivity = StorageSensitivity.standard,
  }) async {
    final jsonString = jsonEncode(toJson(value));
    await write(key: key, value: jsonString, sensitivity: sensitivity);
  }

  /// Read a complex object from JSON
  Future<T?> readObject<T>({
    required String key,
    required T Function(Map<String, dynamic>) fromJson,
    StorageSensitivity sensitivity = StorageSensitivity.standard,
  }) async {
    final jsonString = await read(key: key, sensitivity: sensitivity);
    if (jsonString == null) return null;
    try {
      final json = jsonDecode(jsonString) as Map<String, dynamic>;
      return fromJson(json);
    } catch (e) {
      debugPrint('Failed to parse stored object: $e');
      return null;
    }
  }

  /// Delete a specific key
  Future<void> delete({
    required String key,
    StorageSensitivity sensitivity = StorageSensitivity.standard,
  }) async {
    final storage = _getStorage(sensitivity);
    await storage.delete(key: key);
  }

  /// Check if biometrics are available
  Future<bool> canUseBiometrics() async {
    try {
      final isAvailable = await _localAuth.canCheckBiometrics;
      final isDeviceSupported = await _localAuth.isDeviceSupported();
      return isAvailable && isDeviceSupported;
    } catch (e) {
      return false;
    }
  }

  /// Clear all secure storage
  Future<void> clearAll() async {
    await Future.wait([
      _standardStorage.deleteAll(),
      _biometricStorage.deleteAll(),
      _criticalStorage.deleteAll(),
    ]);
  }
}
// ✅ SECURE: Clear sensitive data from memory
import 'dart:typed_data';

class SecureMemory {
  /// Securely clear a byte array by overwriting with zeros
  static Uint8List clearBytes(Uint8List data) {
    for (var i = 0; i < data.length; i++) {
      data[i] = 0;
    }
    return data;
  }

  /// Create a secure string wrapper that clears on dispose
  static SecureString createSecureString(String value) {
    return SecureString(value);
  }
}

// Note: Dart strings are immutable and garbage-collected, so the original
// String (and the copy returned by `value`) may linger on the heap until
// collected. This wrapper is best-effort hygiene, not a hard guarantee.
class SecureString {
  late Uint8List _data;
  bool _isCleared = false;

  SecureString(String value) {
    _data = Uint8List.fromList(value.codeUnits);
  }

  String get value {
    if (_isCleared) {
      throw StateError('SecureString has been cleared');
    }
    return String.fromCharCodes(_data);
  }

  void clear() {
    if (!_isCleared) {
      SecureMemory.clearBytes(_data);
      _isCleared = true;
    }
  }
}

// Usage in authentication (_api is an illustrative API client)
class LoginService {
  Future<void> login(String email, SecureString password) async {
    try {
      await _api.login(email, password.value);
    } finally {
      // Always clear password from memory
      password.clear();
    }
  }
}
[SecureMemory] Created SecureString: [SecureString(13 bytes)]
[SecureMemory] Value accessible: 13 chars
[SecureMemory] (simulating API login call…)
[SecureMemory] After clear(): [CLEARED]
[SecureMemory] ✅ Access denied: Bad state: SecureString has been cleared
// ❌ VULNERABLE: Data included in backups
import 'package:shared_preferences/shared_preferences.dart';

class VulnerableStorage {
  Future<void> saveTokens(String accessToken, String refreshToken) async {
    final prefs = await SharedPreferences.getInstance();
    await prefs.setString('access_token', accessToken);
    await prefs.setString('refresh_token', refreshToken);
    // These will be included in device backups!
  }
}

// ✅ SECURE: Exclude from backups
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

class SecureStorage {
  final _storage = const FlutterSecureStorage(
    aOptions: AndroidOptions(
      encryptedSharedPreferences: true,
    ),
    iOptions: IOSOptions(
      accessibility: KeychainAccessibility.passcode,
      synchronizable: false, // Exclude from iCloud sync
    ),
  );

  Future<void> saveTokens(String accessToken, String refreshToken) async {
    await _storage.write(key: 'access_token', value: accessToken);
    await _storage.write(key: 'refresh_token', value: refreshToken);
  }
}
// ✅ SECURE: Detect compromised device and respond appropriately
// Uses freeRASP — callback names may differ across versions.
// See https://pub.dev/packages/freerasp for the latest API.
import 'package:freerasp/freerasp.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

class SecurityService {
  bool _isDeviceSecure = true;

  Future<void> initialize() async {
    final config = TalsecConfig(
      androidConfig: AndroidConfig(
        packageName: 'com.example.app',
        signingCertHashes: ['your_cert_hash'],
        supportedStores: ['com.sec.android.app.samsungapps'],
      ),
      iosConfig: IOSConfig(
        bundleIds: ['com.example.app'],
        teamId: 'YOUR_TEAM_ID',
      ),
      watcherMail: '[email protected]',
    );

    // Register callbacks before calling start()
    final callback = ThreatCallback(
      onPrivilegedAccess: _onRootOrJailbreakDetected,
      onAppIntegrity: _onTamperDetected,
      // Provide no-op handlers for threats you only want to monitor
      onDebug: () {},
      onSimulator: () {},
      onUnofficialStore: () {},
      onHooks: () {},
      onDeviceBinding: () {},
      onObfuscationIssues: () {},
    );
    Talsec.instance.attachListener(callback);
    await Talsec.instance.start(config);
  }

  void _onRootOrJailbreakDetected() {
    _isDeviceSecure = false;
    _clearSensitiveData();
    _showSecurityWarning();
  }

  void _onTamperDetected() {
    _isDeviceSecure = false;
    _clearSensitiveData();
    _blockAccess();
  }

  Future<void> _clearSensitiveData() async {
    const storage = FlutterSecureStorage();
    await storage.deleteAll();
  }

  void _showSecurityWarning() {
    // Alert user about compromised device
  }

  void _blockAccess() {
    // Prevent access to sensitive features
  }

  bool get canAccessSensitiveData => _isDeviceSecure;
}
[Security] Running device security checks…
[Security] ✅ Root/jailbreak: not detected
[Security] ✅ App integrity: valid
[Security] ✅ Debug mode: not attached
[Security] ✅ Emulator: not detected
[Security] Device is secure — sensitive features enabled.
[Security] Simulating root/jailbreak detection…
[Security] 🚨 Root/jailbreak DETECTED!
[Security] 🗑️ Sensitive data cleared from secure storage.
[Security] 🔒 Sensitive features BLOCKED.
[Security] canAccessSensitiveData = false
// ❌ VULNERABLE: Logging sensitive data (_api is an illustrative client)
class VulnerableAuthService {
  Future<void> login(String email, String password) async {
    print('Attempting login for $email with password $password');
    final response = await _api.login(email, password);
    print('Got token: ${response.token}');
  }
}

// ✅ SECURE: No sensitive data in logs
import 'dart:developer' as developer;
import 'package:flutter/foundation.dart'; // provides kReleaseMode

class SecureAuthService {
  final _logger = SecureLogger();

  Future<void> login(String email, String password) async {
    _logger.info('Attempting login for user');
    final response = await _api.login(email, password);
    _logger.info('Login successful');
    // Never log tokens or credentials
  }
}

class SecureLogger {
  void info(String message) {
    if (!kReleaseMode) {
      developer.log('[INFO] $message');
    }
  }

  void error(String message, [Object? error]) {
    if (!kReleaseMode) {
      developer.log('[ERROR] $message', error: error);
    }
    // In production, send to secure error tracking (without PII)
  }
}
OWASP Top 10 For Flutter – M3: Insecure Authentication and Authorization in Flutter
Welcome back to our series on the OWASP Mobile Top 10 for Flutter developers. We’ve already explored M1: Mastering Credential Security in Flutter and M2: Inadequate Supply Chain Security. Now, we dive into M3: Insecure Authentication and Authorization, a classic yet devastating threat that can quietly unravel even the most polished Flutter apps. In this post, we’ll explain the difference between these two core security pillars and explore how they are implemented (or misimplemented) in Flutter apps, weaving in guidance from OWASP’s Mobile Application Security Verification Standard (MASVS) and real-world attack models.
What Is M3, and Why Should Flutter Developers Care?
When building a Flutter app, it's easy to get excited about crafting beautiful interfaces and smooth animations. But beneath the surface of those seamless user experiences lies something far more critical: authentication and authorization. These two processes aren't just technical terminology; they're the protectors of your users' identities and the gatekeepers of your data. Authentication ensures users are genuinely who they claim to be, while authorization determines precisely what each authenticated user can and cannot do within your app. Think of authentication as checking IDs at the front door and authorization as ensuring guests don't wander into restricted rooms once inside.
But what happens when these essential controls are weak or poorly implemented? Unfortunately, the consequences are severe and all too common:
Malicious users can effortlessly impersonate others, assuming their identities to access sensitive information.
Attackers may elevate their privileges, gaining admin-like power to manipulate your app in ways never intended.
Confidential user data—from financial details to private messages—could become open to unauthorized eyes.
Flutter apps, despite their convenience and rapid development cycles, often fall victim to these vulnerabilities because developers may inadvertently make mistakes like:
Storing authentication tokens insecurely, making them easy prey on compromised devices.
Relying solely on local checks, believing users won’t reverse-engineer or manipulate local logic.
Neglecting strong, consistent server-side verification, thus leaving gaps that attackers can exploit.
Adding fuel to the fire, mobile apps frequently require offline functionality, leading developers to handle authentication and authorization locally. This is convenient but also risky. Attackers have near-total control over rooted or jailbroken devices, meaning client-side security alone isn't enough.
Let me show you a typical authorization flow in Flutter:
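A condensed sketch of that flow, using the http package (the endpoint paths and field names here are assumptions for illustration):

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// Hypothetical backend
const base = 'https://api.example.com';

// 1. Authenticate: exchange credentials for a short-lived access token.
Future<String> login(String email, String password) async {
  final res = await http.post(
    Uri.parse('$base/auth/login'),
    body: jsonEncode({'email': email, 'password': password}),
    headers: {'Content-Type': 'application/json'},
  );
  return (jsonDecode(res.body) as Map<String, dynamic>)['access_token'];
}

// 2. The client stores the token securely (e.g. flutter_secure_storage).
// 3. Every later call carries the token; the SERVER decides what it allows.
Future<http.Response> fetchProfile(String token) {
  return http.get(
    Uri.parse('$base/me'),
    headers: {'Authorization': 'Bearer $token'},
  );
}
```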
We will go through each part of this in the following sections. Let's explore.
OWASP’s Perspective to See the Bigger Picture
To put these threats into perspective, OWASP classifies insecure authentication and authorization (M3) as one of the most critical issues facing mobile apps. Attackers targeting M3 vulnerabilities often use automated tools, custom scripts, or malicious software on rooted devices. They bypass client-side protections by directly communicating with backend services, forging user roles, or exploiting hidden API endpoints. For Flutter developers, understanding this bigger picture means recognizing that threats aren't theoretical; they’re driven by real attackers who exploit predictable patterns like weak credential policies, insecure token storage, or insufficient server-side checks. Ignoring these security measures doesn't just risk your data; it can lead to severe legal, financial, and reputational damage. By fully grasping the OWASP threat model, you'll build stronger, more resilient authentication and authorization systems, protecting your Flutter applications from common yet devastating attacks.
The Importance of Server-Side Validation
While client-side validation is essential for enhancing user experience and catching errors early, it cannot be trusted as the sole line of defense. Client-side checks can be bypassed, manipulated, or entirely disabled by attackers using reverse engineering techniques or network interception. This is why server-side validation is non-negotiable when it comes to securing your Flutter app.
Key Reasons to Prioritize Server-Side Validation
Bypass Vulnerabilities: Attackers can modify client-side code or intercept requests, rendering any client-side validation ineffective. Server-side checks ensure that every request is verified on a trusted environment.
Consistent Enforcement: Server-side validation provides a centralized enforcement point for critical security rules, such as strong password policies, token verification, and role-based access controls. This reduces inconsistencies that might arise when multiple clients handle validation.
Mitigation of Automated Attacks: With server-side mechanisms like rate limiting, account lockouts, and detailed logging, you can better detect and mitigate brute-force or credential stuffing attacks that bypass client-side measures.
Best Practices for Implementing Server-Side Validation
Enforce Data Integrity: Always verify user input, tokens, and permissions on the server, regardless of the client-side checks performed.
Utilize Secure Tokens: Implement short-lived access tokens and robust refresh mechanisms. Validate these tokens on every request to ensure that the session remains secure.
Role and Permission Checks: Never trust client-supplied data for critical decisions like role assignments. Use server-side logic to confirm the user's privileges before executing any sensitive operation.
Remember: The principle of "defense in depth" means that every layer of your app, from the client to the server, must work together to ensure robust security.
Common Authentication Vulnerabilities in Flutter
Let’s walk through these common pitfalls together, exploring what typically goes wrong and how easily attackers can exploit these vulnerabilities. We’ll then examine robust solutions so you can confidently protect your apps from these risks.
Weak Password Policies
One of the simplest yet most overlooked vulnerabilities is weak password enforcement. At first glance, handling password inputs might seem straightforward. You create a simple registration form in Flutter, add a password field, and ensure users can't leave it blank. Your form might look something like this:
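A minimal sketch of such a form field (the labels and messages are illustrative):

```dart
// ❌ Only checks that the field is not empty — no strength rules at all.
TextFormField(
  obscureText: true,
  decoration: const InputDecoration(labelText: 'Password'),
  validator: (value) {
    if (value == null || value.isEmpty) {
      return 'Password is required';
    }
    return null; // "123456" sails through this check
  },
)
```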
This snippet feels harmless; after all, you're verifying that users at least provide something. But consider this: attackers armed with automated tools can attempt thousands of common passwords in just a few minutes. With no complexity checks, no minimum-length enforcement, and no server-side protection, you've unintentionally provided them with a golden opportunity. Attackers thrive in these environments, efficiently bypassing client-side validations or intercepting requests to test countless weak passwords, such as "123456," "password," or "qwerty."
Why Does This Happen So Often?
Flutter developers sometimes mistakenly assume client-side validation is sufficient. However, attackers can effortlessly bypass local validation by manipulating the app or intercepting network requests.
As detailed in our 'Server-Side Validation' section, client-side password checks are only the first line of defense.
Weak passwords or predictable patterns make credential stuffing and brute-force attacks effective and widespread.
How to Secure Password Handling
Implementing strong password security is straightforward, but it requires diligence and consistent enforcement:
Always validate passwords on the backend rather than relying on client-side checks alone.
Enforce complexity rules:
Require passwords to be at least 8–12 characters.
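The complexity rules above can be sketched as a plain Dart check; this is a client-side convenience only, and the server must enforce the same rules independently:

```dart
// Illustrative strength check: length plus character-class diversity.
bool isStrongPassword(String password) {
  if (password.length < 12) return false;
  final hasUpper = password.contains(RegExp(r'[A-Z]'));
  final hasLower = password.contains(RegExp(r'[a-z]'));
  final hasDigit = password.contains(RegExp(r'[0-9]'));
  final hasSymbol = password.contains(RegExp(r'[^A-Za-z0-9]'));
  return hasUpper && hasLower && hasDigit && hasSymbol;
}
```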
Insecure Token Storage
We have talked about this in previous articles, too. Think of authentication tokens as master keys that open doors to your application’s sensitive areas. When users log in, your Flutter app typically receives a token, often a JWT, that proves their identity for future interactions. But what happens if these critical tokens aren't stored securely? Imagine you decide to store tokens using Flutter’s convenient SharedPreferences:
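A sketch of that insecure approach (the key name is illustrative):

```dart
import 'package:shared_preferences/shared_preferences.dart';

// ❌ VULNERABLE: the token lands in a plain-text XML file on disk.
Future<void> saveToken(String token) async {
  final prefs = await SharedPreferences.getInstance();
  await prefs.setString('auth_token', token);
}
```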
Looks simple, right? Unfortunately, convenience here comes with significant risk. On a rooted Android device, an attacker can easily navigate to:
There, your neatly stored tokens sit completely unencrypted, like spare house keys under a welcome mat. Attackers won't even need specialized tools—these tokens are accessible and readable in plain text, ready for misuse.
Why Does This Happen?
Flutter’s SharedPreferences is great for quickly storing user preferences, but it's never meant to handle sensitive data.
Remember, as highlighted in our 'Server-Side Validation' section, local storage should never be the only safeguard.
Developers often favor convenience and session persistence without fully recognizing the security implications.
How to Securely Store Tokens
Fortunately, Flutter offers safer alternatives designed specifically for sensitive data. The best practice is to use flutter_secure_storage, which leverages Android's hardware-backed Keystore and iOS's secure Keychain. Here's how easily you can implement this:
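A minimal sketch of the secure approach (the key name is illustrative):

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

// Values are protected by the Android Keystore / iOS Keychain.
const storage = FlutterSecureStorage();

Future<void> saveToken(String token) async {
  await storage.write(key: 'auth_token', value: token);
}

Future<String?> readToken() {
  return storage.read(key: 'auth_token');
}
```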
Missing Multi-Factor Authentication (MFA)
Passwords alone are rarely strong enough, especially in mobile apps where convenience wins out over robust security. Users commonly select easy-to-remember passwords or reuse them across multiple platforms, dramatically increasing the risk of compromise. If your Flutter app relies solely on passwords, you leave a single weak point between your users and attackers. Imagine a banking app built in Flutter that requires only a username and password to log in: no OTP verification, no biometric checks, nothing additional. If those credentials get leaked (and they often do), an attacker can stroll right past your security.
Why Does This Happen?
Passwords are easily compromised: Users frequently reuse them across sites, increasing the likelihood of being exposed to a breach.
Phishing and social engineering: Attackers constantly attempt to trick users into giving away credentials.
SIM-swap attacks: Even if SMS-based MFA is used, attackers might intercept messages, making simple SMS-based verification inadequate.
Without MFA, every leaked or phished password represents an immediate risk of complete account takeover.
Best Practices for Implementing MFA
Implementing Multi-Factor Authentication is the most effective way to protect your users and your app. MFA provides additional security layers beyond the password, significantly limiting damage from leaked credentials. Here’s how you can integrate robust MFA into your Flutter apps:
TOTP-based authentication: Use authenticator apps (like Google Authenticator or Authy) to generate unique, time-limited codes for each login.
Push-based notifications: Prompt users to approve logins through notifications on their trusted devices.
Biometric authentication: Utilize fingerprints or facial recognition as a secure fallback, especially for sensitive actions.
Implementing MFA using trusted third-party providers or custom backend logic dramatically enhances your security posture. It ensures that even if passwords are compromised, attackers face substantial barriers preventing unauthorized access.
Biometric Authentication Issues
Biometric authentication, like fingerprints or facial recognition, offers impressive convenience and is often perceived as highly secure. However, when biometrics are misused or incorrectly implemented, they can create a dangerous illusion of security. Consider a Flutter-based note-taking app that secures sensitive notes with a fingerprint scan, utilizing Flutter’s local_auth package. The implementation might look something like this:
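A sketch of such a local_auth check (the reason string is illustrative):

```dart
import 'package:local_auth/local_auth.dart';

final auth = LocalAuthentication();

// ⚠️ Purely client-side: the result is just a boolean produced on-device.
Future<bool> unlockNotes() {
  return auth.authenticate(
    localizedReason: 'Unlock your notes',
    options: const AuthenticationOptions(biometricOnly: true),
  );
}
```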
This seems robust, right? Unfortunately, because this check occurs entirely on the client side, it's vulnerable to manipulation. An attacker with access to a rooted device can easily bypass or entirely fake the biometric verification by modifying the app, granting themselves unrestricted access to protected notes.
Why Does This Happen?
Purely local checks: Without verifying biometric success on the server-side, the local-only validation can easily be bypassed.
No secure fallback: The absence of an alternative verification (like a PIN or password) leaves your app vulnerable if biometrics are compromised or unsupported.
Missing session-based validation: If biometrics directly unlock sensitive content without validating sessions or tokens, the security of your data depends entirely on local security.
Best Practices for Secure Biometric Integration
To leverage biometrics safely in your Flutter applications:
Use biometrics to unlock securely stored tokens, not directly to grant immediate data access.
As outlined in our 'Server-Side Validation' section, biometrics should only serve as an initial step to unlock secure tokens. Always combine them with server-side validations to prevent bypass attempts.
Provide secure fallback methods (such as PIN or password) for devices without biometric support or in cases of biometric failure.
Following these guidelines will significantly enhance security, transforming biometrics from a misleading comfort to a genuinely robust protective measure.
Poor Session Management
Session management is the quiet guardian behind your app's security. Unfortunately, it's often overlooked—leading to tokens that never expire, missing logout functionality, and the absence of proper token-refresh logic. Such oversights can severely weaken your app’s security posture. Imagine a Flutter app designed to keep users conveniently logged in indefinitely. On the surface, users might appreciate the seamless experience. But what if their device gets lost, stolen, or compromised? Without a proper timeout or refresh strategy, the attacker instantly inherits an endless session, gaining continuous access to sensitive user data.
Why Does This Happen?
Long-lived tokens: Tokens with no expiration date or excessively long lifetimes significantly increase risk if they're ever compromised.
Lack of automatic mitigation: Without token expiration, there's no built-in mechanism to reduce damage or automatically revoke access.
Simplified session hijacking: Attackers find it easier to hijack sessions when tokens never expire or there is no effective logout procedure.
Best Practices for Secure Session Management
To protect your users and secure their sessions effectively:
Issue short-lived access tokens (typically around 15 minutes), significantly reducing the exposure window if compromised.
Utilize secure refresh tokens, stored safely using flutter_secure_storage, to renew access seamlessly yet securely.
Consistently implement a robust logout mechanism that clears tokens both locally and server-side, ensuring no lingering sessions remain active after logout.
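The refresh step above can be sketched as follows (the endpoint path and storage key names are assumptions):

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

const _storage = FlutterSecureStorage();

// Called when an API request returns 401: trade the refresh token
// for a fresh, short-lived (~15 min) access token.
Future<String?> refreshAccessToken() async {
  final refreshToken = await _storage.read(key: 'refresh_token');
  if (refreshToken == null) return null; // force re-login

  final res = await http.post(
    Uri.parse('https://api.example.com/auth/refresh'),
    body: jsonEncode({'refresh_token': refreshToken}),
    headers: {'Content-Type': 'application/json'},
  );
  if (res.statusCode != 200) return null; // refresh revoked or expired

  final token = (jsonDecode(res.body) as Map)['access_token'] as String;
  await _storage.write(key: 'access_token', value: token);
  return token;
}
```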
Bypassing Authentication Controls
Sometimes, the most dangerous vulnerabilities are the ones you didn't even realize existed. It's easy to assume that if a feature isn't visible or directly accessible from your app’s UI, users won't find or exploit it—but attackers frequently prove this assumption wrong.
Consider this real-world scenario: A Flutter-based health application had an internal testing endpoint at /test-patient-info. It was designed to simplify QA processes by quickly fetching sensitive patient data. Unfortunately, developers forgot to secure this endpoint properly before launching to production:
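The flaw can be demonstrated with nothing more than a plain HTTP call (the host is hypothetical):

```dart
import 'package:http/http.dart' as http;

Future<void> main() async {
  // ❌ No Authorization header, no session — and the server answers anyway.
  final res = await http.get(
    Uri.parse('https://api.example-health.com/test-patient-info'),
  );
  print(res.body);
}
```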
Without requiring an authentication token or performing authorization checks, this seemingly hidden endpoint quietly exposed sensitive patient information to anyone who knew where to look.
Why Does This Happen?
Developers mistakenly assume that users will never discover or exploit specific endpoints, particularly those meant for internal QA or debugging.
Forgotten test routes remain active, silently waiting to be exploited in production.
UI-level gating, such as hiding buttons or options from the user interface, is incorrectly treated as adequate security, even though attackers frequently bypass client-side controls.
Best Practices for Securing All Endpoints
To prevent attackers from exploiting hidden or forgotten endpoints:
As discussed in our 'Server-Side Validation' section, relying solely on client-side restrictions, like hiding endpoints, is risky.
Conduct a thorough audit of all available routes and endpoints before releasing your app to production.
Remove or fully disable test or debug endpoints in your release builds, minimizing unnecessary attack surfaces.
Common Authorization Vulnerabilities in Flutter
Unfortunately, many Flutter developers fall into common authorization pitfalls even when authentication is done right. A frequent misconception is that if an option or endpoint is hidden from the UI, users won't find or misuse it. This assumption fails to recognize how easily attackers can reverse-engineer apps or intercept network calls to uncover hidden endpoints or functionalities. Let’s explore some of the most prevalent authorization mistakes Flutter apps encounter, understanding what goes wrong, why these issues arise, and, most importantly, how to avoid them.
Broken Object Level Authorization (BOLA/IDOR)
Imagine you're building a Flutter-based social media app where each user creates and views their posts. To load a specific post, you might write a straightforward function like this:
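Such a function might look like this (the host and response shape are assumptions):

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<Map<String, dynamic>> fetchPost(int postId, String token) async {
  final res = await http.get(
    Uri.parse('https://api.example.com/posts/$postId'),
    headers: {'Authorization': 'Bearer $token'},
  );
  return jsonDecode(res.body) as Map<String, dynamic>;
}
```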
At first glance, everything seems fine. After all, the user is authenticated. However, consider what happens if your backend only verifies that the user is logged in but doesn't verify if the post belongs to that user. Attackers can exploit this weakness easily by guessing or incrementing the post IDs to access other users' posts:
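A sketch of that enumeration (the host is hypothetical):

```dart
import 'package:http/http.dart' as http;

Future<void> enumeratePosts(String attackerToken) async {
  // The token is valid — but it belongs to the ATTACKER, not the post owners.
  for (var id = 1024; id <= 1026; id++) {
    final res = await http.get(
      Uri.parse('https://api.example.com/posts/$id'),
      headers: {'Authorization': 'Bearer $attackerToken'},
    );
    print('Post $id: ${res.statusCode}'); // 200 means another user's data leaked
  }
}
```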
If these identifiers (1024, 1025, 1026) are sequential or easily predictable, you've unintentionally allowed attackers to access sensitive content belonging to other users. This is precisely what's known as Broken Object Level Authorization (BOLA), also called Insecure Direct Object Reference (IDOR), which is one of the most commonly exploited vulnerabilities in APIs today.
How to Secure Your Flutter App Against BOLA
To prevent BOLA vulnerabilities effectively, you should implement the following best practices clearly and consistently:
1. Always Verify Resource Ownership Server-Side
When handling requests for specific resources, your backend must ensure the user requesting the resource owns or has permission to access it.
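In pseudocode for a Dart backend handler (the Request/Response/db types are assumed for illustration), the ownership check is the authorization step that authentication alone does not provide:

```dart
// Pseudocode: authentication proves WHO the caller is;
// the ownership check decides WHAT they may read.
Future<Response> getPost(Request req, String postId) async {
  final userId = req.authenticatedUserId; // from a verified session/JWT
  final post = await db.findPost(postId);
  if (post == null) return Response.notFound();
  if (post.ownerId != userId) {
    return Response.forbidden(); // 403: not your resource
  }
  return Response.ok(post.toJson());
}
```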
2. Use UUIDs or Non-Predictable Identifiers
Instead of using sequential numbers (like 1024, 1025, etc.), use universally unique identifiers (UUIDs). UUIDs make it virtually impossible for attackers to guess or enumerate resource identifiers.For example, your API endpoint might look like this:
You can easily generate UUIDs in Dart with the uuid package:
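A sketch using the uuid package (the endpoint format is illustrative):

```dart
import 'package:uuid/uuid.dart';

void main() {
  final uuid = Uuid();
  final postId = uuid.v4(); // 122 bits of randomness — not enumerable
  // The API then serves e.g. GET /posts/<uuid> instead of /posts/1024
  print(postId);
}
```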
Your backend would store and reference these UUIDs, significantly reducing the risk of unauthorized access via ID enumeration.
3. Never Rely on Client-Side Authorization
It's tempting to rely on UI-level logic to hide options or functionalities a user shouldn’t access. However, attackers can bypass the client side entirely. Server-side checks must always be your ultimate line of defense.
Broken Function Level Authorization (BFLA)
Suppose you’re building an admin dashboard in your Flutter application. You carefully design the UI so that regular users don’t see sensitive actions like "Delete User," reserving this functionality exclusively for administrators.In your frontend, you have something like:
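A sketch of that UI-level gating (currentUser and deleteUser are hypothetical helpers):

```dart
// UI-level gating only: this hides the button, it does NOT protect
// the backend endpoint behind it.
if (currentUser.role == 'admin')
  ElevatedButton(
    onPressed: () => deleteUser(selectedUserId),
    child: const Text('Delete User'),
  ),
```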
You might feel confident: after all, regular users can't see or interact with this button, right? Unfortunately, attackers don't need a visible button to exploit your app. They can directly call your API endpoint with crafted requests, completely bypassing UI restrictions:
If your server-side logic lacks a robust check verifying user privileges, a non-admin user can effortlessly execute administrative actions like deleting users—this vulnerability is known as Broken Function Level Authorization (BFLA).
Why Does This Happen?
Developers mistakenly assume UI-level restrictions are sufficient protection.
Server-side checks for user roles or permissions are either weak or missing entirely.
Attackers can easily discover and craft API requests manually—even hidden endpoints are discoverable through reverse engineering or network analysis.
How to Secure Your Flutter App Against BFLA
Here are proven approaches to ensure your application properly validates user privileges and permissions:
1. Implement Strict Role-Based Checks on the Server
Always enforce access control logic on your backend, validating explicitly whether the user making the request has the correct privileges:
This example ensures that only an authenticated user with an admin role can perform the delete operation.
2. Validate Roles Using Secure Tokens (JWT Claims)
Use JSON Web Tokens (JWT) to encode roles and permissions securely, allowing the server to validate these details without relying on client-supplied data:
When processing requests, the server must decode and verify the JWT claims thoroughly before allowing privileged actions.
3. Log and Monitor Abnormal Access Attempts
Ensure your backend actively logs all attempts—especially unsuccessful ones—to perform sensitive actions. Implement monitoring and alerts for suspicious behavior indicating potential attempts at privilege escalation:
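One way to sketch this with shelf middleware (print stands in for a real logging pipeline; in production you would forward to your centralized logging or alerting system):

```dart
// Middleware that records every denied request for later analysis.
Middleware auditDenials() {
  return (Handler inner) {
    return (Request request) async {
      final response = await inner(request);
      if (response.statusCode == 401 || response.statusCode == 403) {
        // Repeated denials from one client are a strong signal of
        // attempted privilege escalation or endpoint enumeration.
        print('[AUDIT] ${DateTime.now().toIso8601String()} '
            '${request.method} ${request.requestedUri.path} '
            '-> ${response.statusCode}');
      }
      return response;
    };
  };
}
```

You'd add it to the pipeline alongside logRequests() so every route is covered automatically.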
Improper Handling of Roles & Permissions
A common pitfall among Flutter developers is mistakenly placing trust in data controlled by clients, particularly roles and permissions. Imagine your app stores a user's role locally and includes it in request headers. Your backend API might initially look like this:
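A vulnerable version could look roughly like this (an illustrative shelf handler, not code from a real project):

```dart
// ⚠️ Insecure: the backend trusts a role header supplied by the client.
Future<Response> adminDataHandler(Request request) async {
  final role = request.headers['role']; // attacker-controlled!
  if (role != 'admin') {
    return Response.forbidden('Forbidden');
  }
  return Response.ok('Sensitive admin data');
}
```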
An attacker quickly realizes that changing this header from role: "user" to role: "admin" grants unrestricted administrative access:
In line with our 'Server-Side Validation' best practices, never trust client-supplied role data. Always verify roles and permissions on the backend to ensure proper authorization.
Why Does This Happen?
Developers sometimes incorrectly assume clients will behave honestly, trusting user-controlled data such as request headers or local state.
The backend lacks robust verification of user roles or permissions.
Client-side roles, stored locally or sent in request bodies or headers, can easily be tampered with.
Best Practices to Securely Handle Roles and Permissions
To protect your Flutter app effectively from role manipulation:
1. Encode Roles in Securely Signed Tokens (JWT Claims)
Use JWT (JSON Web Token) claims to encode user roles securely, ensuring they cannot be modified without detection:
2. Never Trust Client-Supplied Data for Authorization
Always perform server-side validation using secure tokens. Verify the JWT claims carefully to ensure the user's role matches the privileges required to access the requested functionality:
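As a sketch using the dart_jsonwebtoken package (API names may differ slightly between package versions, so check the package docs):

```dart
import 'package:dart_jsonwebtoken/dart_jsonwebtoken.dart';

// The role is read from the verified token, never from client headers.
bool isAdmin(String token, String secret) {
  try {
    final jwt = JWT.verify(token, SecretKey(secret)); // throws if tampered
    return jwt.payload['role'] == 'admin';
  } on JWTException {
    return false; // invalid signature, expired, or malformed token
  }
}
```

Because the signature covers the payload, any client-side edit to the role claim invalidates the token.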
3. Maintain Roles in Secure Internal Databases
Always maintain roles and permissions internally on the server or through secure user databases, never trusting the client's submitted values. Use JWT claims merely to identify and cross-reference server-side data:
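Conceptually (db.getUserRole is a placeholder for your own data-access layer):

```dart
// The JWT only identifies the user; the authoritative role lives server-side.
Future<bool> canDeleteUsers(String userIdFromJwt) async {
  final role = await db.getUserRole(userIdFromJwt); // server-side lookup
  return role == 'admin';
}
```

This also means a role change (e.g., revoking admin rights) takes effect immediately, without waiting for the user's token to expire.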
Exposed or Hidden Admin Endpoints
It's tempting to believe that hiding an endpoint, such as one used during testing or development, is enough to keep it secure. Developers might think, "If users can't see it, they won't find it." But in security, hiding is never enough. Imagine you've developed a Flutter-based service with a backend API that includes an internal debugging route like /beta-endpoint. Perhaps it's intended to quickly fetch all user emails for internal testing purposes. In Dart, this endpoint might initially be written without proper protection, like this:
Developers might forget about it or consider it safe because it doesn't appear in the app's UI. However, attackers regularly use automated tools, network analyzers, or reverse engineering techniques to uncover hidden API endpoints. Once discovered, endpoints lacking proper authentication and authorization become glaring vulnerabilities, exposing sensitive user data to unauthorized actors. Here is a good example:
Why Does This Happen? (OWASP Insight)
Developers often mistakenly rely on "security through obscurity," assuming hidden or undocumented endpoints won't be discovered.
Endpoint enumeration via automated scanning, fuzzing, or app reverse-engineering is common among attackers, quickly revealing these "hidden" routes.
Leftover endpoints from testing phases frequently remain unsecured and active, silently waiting to be exploited.
Best Practices for Securing Your API Endpoints
To ensure hidden or administrative endpoints don't compromise your app's security:
As part of your deployment process, regularly audit all API routes, identifying endpoints that aren't meant for public use.
Separate development and production environments—remove or fully disable development-only endpoints before your app reaches production.
Apply strict, role-appropriate authentication and authorization to every endpoint, regardless of its intended use or visibility.
Additional Topics & Best Practices
To avoid repetition, here’s a concise recap of critical authentication and authorization practices covered earlier:
Strong Password Enforcement: Always enforce complexity and length rules server-side, with lockout mechanisms.
Secure Token Storage: Use flutter_secure_storage to store JWT tokens safely.
Apart from these, let me review a few more practical tips.
Role-Based Access Control (RBAC)
Role-Based Access Control (RBAC) maps users to predefined roles (e.g., Admin, Editor, Viewer) and limits each role to specific permissions. This prevents users from stepping outside their assigned boundaries. Instead of checking individual user permissions every time, the system checks the role, and that role has known capabilities.
Define Roles. Decide what roles your application needs. Keep them minimal and purposeful (e.g., an e-commerce platform might have Customer, Seller, Admin).
Assign Permissions. Each role has a set of allowed actions, like CreateOrder, ModifyProduct, or DeleteUser.
A Flutter UI can use roles to hide or show features, but the critical check remains on the server. Even if someone modifies the Flutter app to expose an admin feature, the server should reject the request if their role is not actually Admin.
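A minimal sketch of such a mapping on the server (role and action names follow the e-commerce example above):

```dart
// Map each role to its allowed actions and check on every request.
const rolePermissions = <String, Set<String>>{
  'customer': {'CreateOrder'},
  'seller': {'CreateOrder', 'ModifyProduct'},
  'admin': {'CreateOrder', 'ModifyProduct', 'DeleteUser'},
};

bool isAllowed(String role, String action) =>
    rolePermissions[role]?.contains(action) ?? false;
```

Keeping this table in one place makes it easy to audit exactly what each role can do.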
Third-party authentication services like Google or Apple Sign-In offer users a seamless login experience. But convenience doesn't automatically equal security. Your backend must independently verify tokens provided by third-party services to ensure authenticity. A typical Google Sign-In integration in Flutter might look like this:
Never assume the ID token is valid just because Google issued it. Always verify the token server-side before granting access. Here’s how you might implement a token validation service in Dart using HTTP:
Principle of Least Privilege
The "Principle of Least Privilege" is essential to secure authorization. Simply put, users should only have the minimum permissions necessary to perform their tasks—no more, no less. Consider a Flutter e-commerce app scenario where sellers manage product listings. Each seller should only manage their own items. Granting broader administrative privileges unnecessarily exposes sensitive data or functionality. Why this matters:
Excessive permissions amplify the impact if an account is compromised.
Attackers thrive in environments with broadly assigned roles.
Regularly audit permissions to remove unnecessary privileges.
Temporarily elevate permissions only when essential, reverting immediately afterward.
JWT role example:
Server-side enforcement
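Putting the two together, a sketch might look like this (the product model and fetchProduct helper are hypothetical):

```dart
// JWT payload for a seller: {'sub': 'seller-42', 'role': 'seller'}

Future<Response> updateProductHandler(Request request, String productId) async {
  final user = request.context['user'] as AuthenticatedUser?;
  if (user == null || user.role != 'seller') {
    return Response.forbidden('Forbidden');
  }
  final product = await fetchProduct(productId); // hypothetical DB call
  // Least privilege: a seller may only modify their own listings.
  if (product == null || product.sellerId != user.id) {
    return Response.forbidden('Not your product');
  }
  // ...apply the update...
  return Response.ok('Updated');
}
```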
Runtime Protection in Flutter (RASP)
You’ve implemented secure authentication, stored tokens safely, and enforced strict role-based access on your backend. You’ve checked all the boxes. But here’s the uncomfortable truth: even with all that in place, your app can still be tampered with—at runtime—especially on rooted or jailbroken devices. This is where Runtime Application Self-Protection (RASP) becomes critical. Unlike static protections, RASP monitors your app’s environment in real time, detecting and responding to suspicious behavior while it is running.
Real-World Attack Scenario
Consider this scenario: You've meticulously secured your Flutter app by enforcing strong password policies, securely storing tokens, and implementing rigorous role-based access control on your backend APIs. You feel confident your app is secure. However, an attacker installs your app on a rooted device using sophisticated tools like Frida or Xposed. They can bypass local biometric authentication checks, intercept and manipulate API requests, and disable crucial security logic. Without runtime protection measures, you'd likely never detect this active manipulation, exposing sensitive user data.
How to Defend Against Runtime Threats with freeRASP
To close this final security gap, consider integrating Runtime Application Self-Protection (RASP) into your Flutter app. RASP actively monitors and responds to runtime threats, significantly reducing the risk of real-time app tampering. One effective Flutter-compatible RASP solution is freeRASP, which offers easy-to-integrate runtime threat detection. Here's a practical example of how to set it up in your Flutter project:
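A setup sketch based on the freerasp package's documented pattern (a config object plus threat callbacks). Exact class and callback names vary by package version, so verify against the current freeRASP docs, and replace all placeholder values with your own:

```dart
import 'package:freerasp/freerasp.dart';

Future<void> initSecurity() async {
  final config = TalsecConfig(
    androidConfig: AndroidConfig(
      packageName: 'com.example.app',               // placeholder
      signingCertHashes: ['base64-sha256-of-cert'], // placeholder
    ),
    iosConfig: IOSConfig(
      bundleIds: ['com.example.app'], // placeholder
      teamId: 'YOURTEAMID',           // placeholder
    ),
    watcherMail: 'security@example.com',
  );

  // Decide how the app reacts to each detected threat.
  final callback = ThreatCallback(
    onPrivilegedAccess: () => handleThreat('root/jailbreak'),
    onDebug: () => handleThreat('debugger attached'),
    onAppIntegrity: () => handleThreat('tampering/re-signing'),
  );

  Talsec.instance.attachListener(callback);
  await Talsec.instance.start(config);
}

void handleThreat(String kind) {
  // e.g., log the user out, wipe cached tokens, or restrict functionality.
  print('Runtime threat detected: $kind');
}
```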
What Does freeRASP Detect?
freeRASP continuously scans and detects:
Rooted or jailbroken devices
Debugger attachment
Emulator usage
Why Runtime Protection is Essential
Even the most robust design-level security can fail when attackers actively tamper with your app during runtime. RASP provides a critical, proactive defense layer, monitoring your app's environment continuously. It detects threats as they occur and allows your app to react in real time, making it significantly harder for attackers to succeed. In essence, Runtime Application Self-Protection bridges the critical gap between security by design and security in practice, giving you peace of mind that your Flutter app remains protected, no matter how sophisticated the attack. Here is a layered security model showing the levels of checks for a simple input:
Now let me introduce a quick checklist that can help you stay on top of your app's security.
Checklist for Secure Flutter Authentication & Authorization:
[ ] Enforce strong password policies with both client- and server-side validation.
[ ] Store tokens using secure methods (e.g., flutter_secure_storage).
[ ] Implement multi-factor authentication (MFA) to add extra security layers.
[ ] Validate user roles and permissions exclusively on the server.
[ ] Use runtime protection (e.g., freeRASP) to detect live threats.
[ ] Regularly audit and remove unnecessary endpoints and debugging routes.
Conclusion
Never rely solely on your Flutter app’s UI for access control. Assume every device is potentially compromised, validating all actions server-side and layering defenses—secure token storage, multi-factor authentication, biometrics, and passwords. Runtime protection (RASP) detects and actively responds to live threats. Securing authentication and authorization in Flutter isn't a one-time fix—it's an ongoing process. By consistently applying these best practices, you'll build apps users can trust, enabling your team to scale securely and frustrating attackers at every step.
Unsecured transactions could lead to fraudulent actions without the user's knowledge or consent.
Worst of all, critical administrative operations could fall into malicious hands, allowing attackers to disrupt or damage your entire system.
Placing blind trust in user-supplied data for permissions and roles, making privilege escalation trivial.
Secure Sensitive Operations: Operations involving sensitive data or administrative functions must always be validated on the server to prevent unauthorized access, even if the user interface hides certain options.
Centralized Logging and Monitoring: Incorporate logging of all critical actions and validation failures. This centralized logging not only helps in early detection of suspicious activities but also aids in post-incident analysis.
Mandate a mix of uppercase letters, lowercase letters, numbers, and special characters.
Protect against automated attacks using rate limiting, account lockouts, or progressive delays after multiple failed login attempts.
Store all session tokens securely, leveraging encrypted local storage mechanisms like flutter_secure_storage.
Proper Biometric Use: Employ biometrics to unlock secure tokens; never rely solely on biometrics for sensitive data access.
Robust Session Management: Issue short-lived JWT access tokens with secure refresh tokens. Always validate JWT claims and include necessary metadata.
Enforce at the Server. The server verifies the user’s role, typically from a JWT claim or a session lookup, and checks whether the requested action is permitted.
Binary tampering or re-signing
SSL pinning bypass attempts
Majid Hajian - Azure & AI advocate, Dart & Flutter community leader, Organizer, author
TextFormField(
  obscureText: true,
  decoration: InputDecoration(labelText: 'Password'),
  validator: (value) {
    if (value == null || value.isEmpty) {
      return 'Please enter your password';
    }
    // Ideally add length and complexity checks here
    return null;
  },
);
// Insecure storage: tokens are stored in plain text, making them vulnerable on rooted devices.
final prefs = await SharedPreferences.getInstance();
await prefs.setString('authToken', token); // stored in plain text

// Secure storage: tokens are stored using hardware-backed mechanisms.
final secureStorage = FlutterSecureStorage();
await secureStorage.write(key: 'authToken', value: token);
import 'package:local_auth/local_auth.dart';

final localAuth = LocalAuthentication();

Future<bool> authenticateWithBiometrics() async {
  final isAvailable = await localAuth.canCheckBiometrics;
  if (!isAvailable) return false;
  return await localAuth.authenticate(
    localizedReason: 'Authenticate to access secure notes',
    options: const AuthenticationOptions(biometricOnly: true),
  );
}
// A request made without any authentication header
// If the backend fails to verify a token, it may allow data retrieval.
final response = await http.get(Uri.parse('https://api.example.com/test-patient-info'));
if (response.statusCode == 200) {
  print('Data: ${response.body}');
} else {
  print('Unauthorized or Not Found');
}
GET https://api.example.com/posts/1024
GET https://api.example.com/posts/1025
GET https://api.example.com/posts/1026
GET https://api.example.com/posts/4a1f23e2-74f8-4915-bb69-ec8f5b1c3d2a
import 'package:uuid/uuid.dart';
final uuid = Uuid();
// Creating a new post with a unique UUID
String newPostId = uuid.v4(); // Generates a random UUID
final response = await http.post(
  Uri.parse('https://api.example.com/admin/deleteUser'),
  body: {'userId': 'targetUserId'},
);
import 'dart:convert';
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_router/shelf_router.dart';

// Define an authenticated user model
class AuthenticatedUser {
  final String id;
  final String role;
  AuthenticatedUser({required this.id, required this.role});
}

// Handler function for deleting a user
Future<Response> deleteUserHandler(Request request) async {
  // Retrieve the authenticated user from the request context.
  final user = request.context['user'] as AuthenticatedUser?;
  if (user == null || user.role != 'admin') {
    // If the user is not authenticated or not an admin, return 403 Forbidden.
    return Response.forbidden(
      jsonEncode({'message': 'Unauthorized action'}),
      headers: {'Content-Type': 'application/json'},
    );
  }
  // Parse the request body to get the target user ID.
  final payload = jsonDecode(await request.readAsString());
  final targetUserId = payload['userId'];
  // Delete the user from the database.
  await deleteUserFromDatabase(targetUserId);
  // Return a successful response.
  return Response.ok(
    jsonEncode({'message': 'User deleted successfully'}),
    headers: {'Content-Type': 'application/json'},
  );
}

// Mock function to simulate deleting a user from a database.
Future<void> deleteUserFromDatabase(String userId) async {
  // Implement your deletion logic here.
  print('Deleting user with ID: $userId');
}

// Create and configure the router
Router getRouter() {
  final router = Router();
  router.post('/admin/deleteUser', deleteUserHandler);
  return router;
}

void main() async {
  final router = getRouter();
  // Create a pipeline with logging middleware.
  final handler = const Pipeline()
      .addMiddleware(logRequests())
      .addHandler(router);
  // Start the server on localhost at port 8080.
  final server = await io.serve(handler, 'localhost', 8080);
  print('Server listening on port ${server.port}');
}
import 'dart:convert';
import 'package:shelf/shelf.dart';

Future<Response> adminDashboardHandler(Request request) async {
  final user = request.context['user'] as AuthenticatedUser;
  if (user.role != 'admin') {
    return Response.forbidden(
      jsonEncode({'message': 'Forbidden'}),
      headers: {'Content-Type': 'application/json'},
    );
  }
  final dashboardData = await getAdminDashboardData();
  return Response.ok(
    jsonEncode(dashboardData),
    headers: {'Content-Type': 'application/json'},
  );
}

// Example supporting classes/functions:
class AuthenticatedUser {
  final String id;
  final String role;
  AuthenticatedUser({required this.id, required this.role});
}

Future<Map<String, dynamic>> getAdminDashboardData() async {
  // Fetch and return dashboard data securely
  return {
    'userCount': 2500,
    'activeSessions': 123,
    // Add additional admin-specific metrics here
  };
}
// Dart representation of a secure JWT payload
class JwtPayload {
  final String userId;
  final String role;
  JwtPayload({required this.userId, required this.role});
}
import 'dart:convert';
import 'package:shelf/shelf.dart';
import 'package:shelf_router/shelf_router.dart';

// ⚠️ Insecure debug endpoint left active in production!
Router insecureRouter() {
  final router = Router();
  router.get('/beta-endpoint', (Request request) async {
    final emails = await database.getAllUserEmails(); // No authentication!
    return Response.ok(
      jsonEncode({'emails': emails}),
      headers: {'Content-Type': 'application/json'},
    );
  });
  return router;
}
import 'dart:convert';
import 'package:shelf/shelf.dart';
import 'package:shelf_router/shelf_router.dart';

// Example user model
class AuthenticatedUser {
  final String id;
  final String role;
  AuthenticatedUser({required this.id, required this.role});
}

// Secure endpoint handler with authentication & authorization
Router secureRouter() {
  final router = Router();
  router.get('/beta-endpoint', (Request request) async {
    final user = request.context['user'] as AuthenticatedUser?;
    // Ensure endpoint is accessible only by authorized internal roles
    if (user == null || user.role != 'admin') {
      return Response.forbidden(
        jsonEncode({'message': 'Forbidden'}),
        headers: {'Content-Type': 'application/json'},
      );
    }
    final emails = await database.getAllUserEmails();
    return Response.ok(
      jsonEncode({'emails': emails}),
      headers: {'Content-Type': 'application/json'},
    );
  });
  return router;
}

// Mock database function (example)
class database {
  static Future<List<String>> getAllUserEmails() async {
    return ['user1@example.com', 'user2@example.com'];
  }
}
import 'package:google_sign_in/google_sign_in.dart';

final GoogleSignIn _googleSignIn = GoogleSignIn(scopes: ['email']);

final account = await _googleSignIn.signIn();
final auth = await account?.authentication;
final idToken = auth?.idToken;
final accessToken = auth?.accessToken;

// Send ID token securely to your backend for validation
await http.post(
  Uri.parse('https://your-api.com/auth/google'),
  headers: {'Content-Type': 'application/json'},
  body: jsonEncode({'idToken': idToken}),
);
import 'dart:convert';
import 'package:http/http.dart' as http;
import 'package:shelf/shelf.dart';

const googleClientId = 'your-google-client-id.apps.googleusercontent.com';

Future<Response> googleAuthHandler(Request request) async {
  final body = await request.readAsString();
  final data = jsonDecode(body);
  final idToken = data['idToken'];
  final googleVerificationUrl =
      'https://oauth2.googleapis.com/tokeninfo?id_token=$idToken';
  final response = await http.get(Uri.parse(googleVerificationUrl));
  if (response.statusCode != 200) {
    return Response.forbidden(
      jsonEncode({'error': 'Invalid Google token'}),
      headers: {'Content-Type': 'application/json'},
    );
  }
  final payload = jsonDecode(response.body);
  if (payload['aud'] != googleClientId) {
    return Response.forbidden(
      jsonEncode({'error': 'Invalid audience'}),
      headers: {'Content-Type': 'application/json'},
    );
  }
  final userId = payload['sub'];
  final email = payload['email'];
  // TODO: Create or fetch the user in your own system
  final sessionToken = await createSessionForUser(userId, email);
  return Response.ok(
    jsonEncode({'sessionToken': sessionToken}),
    headers: {'Content-Type': 'application/json'},
  );
}

Future<String> createSessionForUser(String userId, String email) async {
  // Your logic to create or resume a user session securely
  return 'secure-session-token-for-$userId';
}
Each is a critical piece of the mobile security puzzle.
In this seventh article, we focus on M7: Insufficient Binary Protection, a risk that doesn't hide in your code logic, network calls, or database queries. Instead, it targets something more fundamental: the compiled Flutter app itself. When you ship your app to users, you're essentially handing over a complete package of your business logic, algorithms, and sometimes your secrets too. Without proper protection, attackers can reverse engineer, tamper with, or redistribute your application as they see fit.
I've seen this vulnerability underestimated more times than I can count. Developers often assume that because Dart code gets compiled to native machine code, it's somehow "safe." The reality? It's just a different challenge for attackers, not an impossible one. Tools like Ghidra, Frida, and even Flutter-specific utilities like reFlutter make binary analysis more accessible than ever.
Let's get started.
Source code: All code examples from this article are available as a runnable Flutter project on GitHub:
Understanding Binary Protection in Mobile Apps
What is Binary Protection?
Binary protection is about safeguarding your compiled app from being analyzed, modified, or misused. To really get why it matters, just think about what actually happens when you build a Flutter app.
Your Dart code doesn't stay as readable source code — it gets compiled into machine code for iOS or a combination of native libraries and Dart AOT (Ahead-of-Time) snapshots for Android. This compiled binary is what users download from app stores, and here's the uncomfortable truth: it contains everything:
Your business logic and proprietary algorithms
Hardcoded secrets (API keys, encryption keys—yes, even if you think you've hidden them)
I know what you're thinking, "But it's compiled, surely that's secure enough?" I've heard that reasoning a lot. Without adequate protection, attackers can do more than you'd expect:
Why Flutter Apps Are Targets
I've been asked many times whether Flutter apps are inherently safer than React Native or web apps.
The honest answer, as I always say, is: it depends.
Flutter has real strengths, but also some quirks that make it a unique target.
Take Dart's Ahead-of-Time (AOT) compilation, for example. Your Dart code gets turned into native machine code before execution, unlike Just-in-Time (JIT) compilation, which happens at runtime. That's more secure than shipping plaintext JavaScript, no doubt. But it's not immune to reverse engineering. Skilled attackers with the right tools can still extract meaningful information from that compiled code.
What catches many developers off guard is where that compiled code actually lives—I'll walk you through the exact package structure in a moment, but once you see it, the need for protection becomes obvious.
The cross-platform nature of Flutter is a double-edged sword here too. Once an attacker cracks your Android app, the same techniques usually work directly on iOS—because it's the same Dart codebase underneath.
And as Flutter keeps growing in enterprise, fintech, and high-value consumer products, it's drawing more attention from attackers. That's just how it goes: the higher the stakes, the more motivated the attackers.
The Business Impact
Before we get into the technical details, I want to spend a moment on the business side of this. In my experience, binary protection is often deprioritized because it feels abstract—until something goes wrong.
According to the OWASP Mobile Top 10, insufficient binary protection can lead to significant damage across multiple dimensions:
Impact Type
Description
Example
I've personally seen startups lose months of competitive advantage because a competitor extracted their core algorithm from an unprotected APK. I've also witnessed companies face unexpected cloud bills when attackers extracted API keys and used them for their own purposes. These aren't hypothetical scenarios; they happen regularly, and they often happen to apps whose developers assumed "nobody would bother."
Flutter's Binary Architecture: Know Your Attack Surface
Before we talk about defense, we need to look at the app the same way an attacker does. I find this exercise genuinely useful: once you see what's exposed, the motivation to protect it becomes a lot more concrete.
Android APK Structure
When you run flutter build apk, Flutter produces an APK file that follows a specific structure. Understanding this structure helps you appreciate where your code ends up and why certain files are targeted:
See that libapp.so file? That's the crown jewel for attackers. It contains your entire Dart application compiled to native code. Every widget, every service class, every business logic function: it's all in there. While it's compiled to machine code (not human-readable Dart), skilled reverse engineers can still extract a surprising amount of information from it.
iOS IPA Structure
The iOS story is similar. When you archive your Flutter app for iOS distribution, the IPA contains:
Just like Android's libapp.so, the App.framework/App binary contains your compiled Dart code. iOS apps benefit from Apple's code signing requirements and stricter app review process, but once someone has your IPA file, they can analyze it using the same reverse engineering techniques.
What Attackers Can Extract
Let me show you something that might make you uncomfortable. Consider this seemingly innocent code that many developers write without thinking twice:
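Something along these lines (the values here are invented for illustration):

```dart
// ⚠️ Anti-pattern: secrets hardcoded in Dart source end up in libapp.so.
class ApiConfig {
  static const String apiKey = 'sk_live_51Hx_FAKE_EXAMPLE_KEY';
  static const String apiSecret = 'super_secret_value_123';
  static const String baseUrl = 'https://api.example.com';
}
```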
You might think that breaking up the string or using a method makes it harder to find. It doesn't. An attacker running the strings command on your binary, or using more advanced tools like Ghidra, can easily locate these:
The folder tree looks like this:
Then run (Linux/macOS):
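Assuming you've unzipped the APK (an APK is just a ZIP archive) and are pointing at the extracted arm64 library, the commands might be:

```shell
# Unpack the APK and scan the compiled Dart library for embedded strings
unzip -o app-release.apk -d extracted/
strings extracted/lib/arm64-v8a/libapp.so | grep -iE 'sk_live|secret|api'
```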
Output: you may see many matches. These are the ones to look for:
It really is that simple. If you've hardcoded secrets in your Dart code, consider them already compromised the moment you publish your app.
Attack Vectors: How Attackers Target Flutter Apps
Now that we understand what's inside a Flutter app, let's examine the specific techniques attackers use. Understanding these attack vectors isn't about learning to attack; it's about knowing what you're defending against.
1. Static Analysis (Reverse Engineering)
Static analysis involves examining your app without running it. Attackers use disassemblers and decompilers to dig into your binary and understand its structure. It's like reading a book—they don't need the app to be running to learn from it.
Common Tools Used Against Flutter Apps:
Here's a look at the tools attackers commonly use. Most of these are freely available and well-documented:
Tool
Platform
Purpose
That last one, reFlutter, deserves special attention. It's specifically designed to reverse engineer Flutter apps, and it's particularly dangerous because it understands Flutter's architecture:
Notice how it reveals function names like LicenseChecker.verifyPremium? An attacker now knows exactly where to look if they want to bypass your premium features.
2. Dynamic Analysis (Runtime Attacks)
While static analysis is like reading a book, dynamic analysis is like watching a movie—attackers observe (and manipulate) your app while it's running. Tools like Frida allow attackers to "hook" into your app at runtime and modify its behavior on the fly.
Here's a real example of a Frida script that could bypass a license check in a Flutter app:
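A simplified illustration of the idea, in Frida's JavaScript API (the module offset and return-value convention are hypothetical; a real attacker locates the target function via static analysis first):

```javascript
// Hypothetical Frida script: force a native premium check to return "true".
const libapp = Process.getModuleByName('libapp.so');
const verifyPremium = libapp.base.add(0x1a2b3c); // offset from static analysis

Interceptor.attach(verifyPremium, {
  onLeave(retval) {
    retval.replace(1); // pretend the license check succeeded
  }
});
```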
The attacker doesn't need to understand your entire codebase. They just need to find the function that checks for premium access and make it always return true. With the function names exposed by static analysis, this becomes a focused attack rather than a blind search.
3. Binary Patching (Code Tampering)
Sometimes attackers don't bother with real-time manipulation; they simply modify your app's binary directly and redistribute it. This is particularly common for apps with premium features, in-app purchases, or ad-supported business models.
The steps are surprisingly simple:
Once the modified APK is created, it can be uploaded to third-party app stores, shared on forums, or distributed through other channels. Users who install these cracked versions get your premium features for free, and you lose revenue. Worse, if the attacker injected malicious code alongside the cracks, those users' devices (and your app's reputation) are compromised.
4. Real-World Attack Scenario
Let me walk you through a realistic attack scenario to make this more concrete. Imagine you've built a Flutter fintech app with premium trading features:
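Say the app contains a gate like this (a made-up example for the scenario):

```dart
// Hypothetical premium gate inside the app.
class TradingFeatures {
  static bool isPremiumUser = false;

  static Future<void> executeAdvancedTrade(Map<String, dynamic> order) async {
    if (!isPremiumUser) {
      throw StateError('Premium subscription required');
    }
    // ...place the advanced order...
  }
}
```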
Here's how an attacker would compromise this:
Step 1: Use reFlutter to dump function names, discovering TradingFeatures.isPremiumUser and TradingFeatures.executeAdvancedTrade
Step 2: Use Ghidra or IDA Pro to locate the check for isPremiumUser in the compiled binary
The entire process might take an experienced attacker a few hours. Your premium features are now available to anyone who downloads the cracked APK.
Protecting Your Flutter Apps: Defense Strategies
Let's shift to defense. You can't make your app completely unbreakable (nothing is), but you can make attacking it significantly more difficult, time-consuming, and unreliable. The goal is to make the cost of attacking your app higher than the potential reward.
1. Code Obfuscation
Code obfuscation is your first line of defense. It doesn't prevent reverse engineering entirely, but it makes the attacker's job much harder by replacing meaningful names with gibberish and making the code structure more difficult to follow.
The good news? Flutter provides built-in obfuscation support that you should always enable for release builds. It's a simple flag, but it's surprising how many developers forget to use it.
Enabling Flutter Obfuscation
Here's how to build your app with obfuscation enabled:
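The flags below are Flutter's documented obfuscation options; the symbols directory path is your choice:

```shell
# Android
flutter build apk --release --obfuscate --split-debug-info=build/symbols

# iOS
flutter build ipa --release --obfuscate --split-debug-info=build/symbols
```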
What does --obfuscate actually do?
It renames your classes, methods, and fields to meaningless names. The result is that your stack traces become unreadable (without the symbol map), and reverse engineers have a much harder time understanding what each function does.
Before obfuscation:
After obfuscation:
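Illustrative stack-trace fragments for both cases (the obfuscated names are invented; yours will differ):

```
// Before obfuscation: class and method names are exposed
#0  LicenseChecker.verifyPremium (package:my_app/license_checker.dart:42)
#1  PaymentService.unlockFeatures (package:my_app/payment_service.dart:17)

// After obfuscation: names carry no meaning to an attacker
#0  qR.sT (package:my_app/license_checker.dart:42)
#1  aB.cD (package:my_app/payment_service.dart:17)
```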
Now, instead of immediately knowing that LicenseChecker.verifyPremium is the function to target, an attacker sees qR.sT() and has no idea what it does without significant additional analysis.
Preserving Debug Information
"But wait," you might be thinking, "if my stack traces are unreadable, how do I debug production crashes?"
Great question! The --split-debug-info flag is the answer. It saves the symbol mapping to a separate directory that you keep secure but don't ship with your app. When a crash occurs, you can use these symbols to translate the obfuscated stack trace back to meaningful code:
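For example (flutter symbolize ships with the Flutter SDK; the file names are placeholders):

```shell
# Keep the symbol files produced at build time out of the shipped app
flutter build apk --release --obfuscate --split-debug-info=./symbols

# Translate an obfuscated crash trace back into readable frames
flutter symbolize -i crash_stack.txt -d ./symbols/app.android-arm64.symbols
```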
The key is to upload the debug symbols to your crash reporting service. This way, you see readable crash reports in your dashboard, but attackers only see obfuscated gibberish:
Advanced Obfuscation with ProGuard (Android)
For Android, you can add an extra layer of protection using ProGuard (or R8, which is now the default). While Flutter's Dart code is obfuscated by the --obfuscate flag, any Kotlin/Java code (including plugin code) benefits from ProGuard.
Configure it in your android/app/build.gradle:
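A minimal release configuration might look like this (a sketch; merge it into your existing buildTypes and signing setup rather than copying it wholesale):

```groovy
android {
    buildTypes {
        release {
            // Enables R8 code shrinking, obfuscation, and resource stripping
            minifyEnabled true
            shrinkResources true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'),
                    'proguard-rules.pro'
        }
    }
}
```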
Then create android/app/proguard-rules.pro with rules to keep Flutter's necessary classes while obfuscating everything else:
Note: The directive -optimizationpasses is ignored by R8 (Android's default shrinker since AGP 3.4). R8 determines the optimal number of passes internally. If you're on a recent Android Gradle Plugin version, you're already using R8; the ProGuard format is accepted for compatibility, but R8-specific flags may differ.
2. Protecting Sensitive Strings and Keys
This is perhaps the most critical section of this entire article. I cannot stress this enough: never hardcode secrets in your Dart code. Not API keys, not encryption keys, not tokens, nothing. They will be extracted.
Let's look at some alternatives, from simple to more advanced:
Using Environment Variables (Build-time)
The simplest improvement is to use Dart's String.fromEnvironment to inject secrets at build time rather than embedding them in source code:
Important caveat: While this keeps secrets out of your source code repository (a good practice for version control), the strings still end up in the compiled binary. An attacker examining your libapp.so can still find them. This approach improves organization and CI/CD hygiene, but it's not a security silver bullet.
Using Native Code for Extra Protection
If you absolutely must include a secret in your app (though I'd encourage you to question whether that's really necessary), storing it in native code adds an extra layer of difficulty for attackers. Native code (C/C++) is generally harder to reverse engineer than Dart code.
Here's an example of how to set this up:
Android (Kotlin) - Create a method channel:
C++ Native Library (secrets.cpp) - Add simple obfuscation:
Dart side - Call the native method:
A word of caution: Native code makes extraction harder, not impossible. A determined attacker with Ghidra and enough time can still figure it out. This is a speed bump, not a wall.
Using Secure Remote Configuration
Here's the truth that many developers don't want to hear: the most secure approach is to never ship secrets with your app at all. Instead, fetch sensitive configuration from a secure server at runtime.
Firebase Remote Config is one popular option for this pattern:
With this approach, even if an attacker completely reverse engineers your app, they won't find the API key because it was never there to begin with. They'd have to intercept network traffic or compromise your Firebase project, both of which are significantly more difficult than extracting a string from a binary.
3. Integrity Verification and Anti-Tampering
Obfuscation and secret management are about making attacks harder. But what about detecting when an attack has already happened? Integrity verification allows your app to check whether it has been modified, and to respond appropriately.
The idea is simple: your app knows what it should look like. If something's different, sound the alarm.
Signature Verification (Android)
Android apps are digitally signed before distribution. When you upload your app to the Play Store, it's signed with your release key. If an attacker modifies your app and re-signs it (which they must do for the modified APK to install), the signature changes.
Here's how to implement signature verification:
The Android native implementation does the heavy lifting:
App Integrity Check with Hash Verification
Beyond signature checking, you can also verify that your binary files haven't been modified by computing and comparing hash values:
This approach requires you to generate the expected hash during your build process and embed it in the app. There's an inherent chicken-and-egg problem here: the hash lives inside the binary, but computing the hash changes the binary. The standard workaround is to hash a file other than the one containing the expected value (e.g., hash libflutter.so or specific asset files), or perform the verification entirely on the server by uploading the hash at startup. A purely client-side self-check will always have this limitation.
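The build-time step can be as simple as hashing the compiled library with standard tooling and emitting a constant for your build to embed or upload (a sketch; the file name stands in for your real compiled Dart library):

```shell
# Sketch: compute the SHA-256 of the compiled Dart AOT library at build time
# and emit it as a Dart constant. "libapp.so" here is a stand-in file.
BIN=libapp.so
printf 'example binary contents' > "$BIN"   # placeholder for the real build artifact
HASH=$(sha256sum "$BIN" | cut -d' ' -f1)
echo "const String expectedLibappHash = '$HASH';"
```

In a real pipeline you would point `BIN` at the library produced by `flutter build`, and either bake the emitted constant into a different file than the one being hashed or send the hash to your server.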
Building a Comprehensive Integrity Service
In practice, you'll want to combine multiple integrity checks into a single service that can assess your app's overall security posture:
Here is a screenshot of the decompiled app in JADX:
4. Root/Jailbreak Detection
A rooted Android device or jailbroken iOS device gives users (and attackers) elevated privileges that bypass the normal security sandbox. On such devices, other apps can read your app's private storage, attach debuggers, and intercept communications.
Detecting these compromised environments is an important defense layer. Here's a comprehensive approach:
The Android implementation needs to check multiple indicators because advanced rooting tools like Magisk actively try to hide themselves:
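A sketch of such a multi-signal check on Android (illustrative and deliberately not exhaustive; tools like Magisk can evade all of these, which is why each signal is only one layer):

```kotlin
// Illustrative multi-signal root detection. Each check is weak alone;
// together they raise the bar. Function name is hypothetical.
private fun isDeviceRooted(): Boolean {
    // 1. Well-known su binary locations
    val suPaths = listOf(
        "/system/bin/su", "/system/xbin/su", "/sbin/su",
        "/system/app/Superuser.apk", "/data/local/xbin/su"
    )
    if (suPaths.any { java.io.File(it).exists() }) return true

    // 2. su resolvable on PATH
    try {
        Runtime.getRuntime().exec(arrayOf("which", "su"))
            .inputStream.bufferedReader().use {
                if (it.readLine() != null) return true
            }
    } catch (_: Exception) {
        // "which" unavailable; ignore
    }

    // 3. Build signed with test-keys (common on custom ROMs)
    if (android.os.Build.TAGS?.contains("test-keys") == true) return true

    return false
}
```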
5. Anti-Debugging Protection
Debuggers are powerful tools, for developers and attackers alike. In a production app, there's no legitimate reason for a debugger to be attached. Detecting debugger attachment can help identify when your app is being analyzed at runtime.
On Android, you can also detect Frida specifically, which is the tool of choice for many attackers:
Android implementation with Frida detection:
A word of caution: determined attackers can hook your anti-debugging code and make it always return "no debugger found." This is why defense in depth matters: no single check is sufficient, but multiple layers make attacks significantly harder.
6. Using freeRASP for Comprehensive Protection
Implementing all these protections from scratch is a significant undertaking. Fortunately, there are libraries that package these protections together. One excellent option for production apps is freeRASP by Talsec, which provides comprehensive Runtime Application Self-Protection.
What I appreciate about freeRASP is that it handles the cross-platform complexity for you—writing native security code for both Android and iOS is time-consuming, and getting it wrong can create false security. Here's how to integrate it:
7. Server-Side Validation: The Most Important Defense
I've saved this for last, but it's probably the most important concept in this entire article: never trust the client. No matter how much protection you add to your app, a sufficiently determined attacker can eventually bypass it. The only truly secure approach is to validate critical operations server-side. This is sometimes called the "zero trust client" principle: client-side checks can always be bypassed, so they are never sufficient on their own.
Think about it this way: every line of code that runs on a user's device is potentially compromised. The server, on the other hand, is under your control. Here's how this plays out in practice:
Server-side validation:
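On the server, the decision lives in code the attacker cannot patch. A sketch of the server counterpart in TypeScript (all names here are hypothetical; in production the Map would be a database lookup and the device ID would come from your attestation flow):

```typescript
// Hypothetical server-side license check: the server, not the client,
// decides whether a premium operation may proceed.
interface LicenseRecord {
  key: string;
  deviceId: string;   // license bound to one device
  expiresAt: number;  // epoch milliseconds
}

// Stand-in for a database table.
const licenses = new Map<string, LicenseRecord>();

function verifyLicense(
  key: string,
  deviceId: string,
  now: number = Date.now()
): { isValid: boolean; reason?: string } {
  const record = licenses.get(key);
  if (!record) return { isValid: false, reason: "unknown license" };
  if (record.deviceId !== deviceId) {
    // Device binding: a copied license key fails on other devices
    return { isValid: false, reason: "device mismatch" };
  }
  if (record.expiresAt < now) return { isValid: false, reason: "expired" };
  return { isValid: true };
}
```

Even a fully cracked client gains nothing here: the premium operation itself should only execute after the server returns `isValid: true`.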
Protecting Specific Assets
Before we wrap up the defensive strategies, let's talk about protecting specific high-value assets in your Flutter app. Different types of assets require different protection approaches.
Protecting AI/ML Models
If your Flutter app includes embedded AI or machine learning models (increasingly common now that on-device inference is practical), these represent significant intellectual property investments. An attacker who extracts your model can use it without paying licensing fees, or worse, sell it to competitors.
Here's an approach to encrypt your models:
Remember: the decryption key is still a secret that needs protection, so this loops back to our earlier discussion about not hardcoding secrets.
Protecting Business Logic
This is a philosophical point as much as a technical one. Ask yourself: does this logic need to be on the client? For critical business rules such as pricing algorithms, fraud detection, and premium feature gates, the answer is often "no."
Compare these two approaches:
Binary Protection Checklist
Before deploying your Flutter app, run through this checklist. I've organized it by when in your development process you should address each item:
Build-Time Protection
Enable --obfuscate flag for all release builds
Use --split-debug-info and securely store symbols
Configure ProGuard/R8 for Android
Secret Management
No hardcoded API keys or secrets in Dart code
Use --dart-define for build-time configuration
Store sensitive keys in native code (if must be in app)
Runtime Protection
Implement signature verification
Add root/jailbreak detection
Include debugger detection
Use integrity checking
Server-Side Validation
Never trust client-side license checks alone
Validate app signature on server
Implement device binding for licenses
Distribution Security
Only distribute through official stores
Monitor for unauthorized app copies
Implement app attestation
Testing Your Protection
Here's something many developers overlook: you should test your own app using the same tools attackers use. This isn't about becoming a hacker; it's about validating that your protections actually work.
Tools to Test Your Own App
If you find sensitive strings or unobfuscated function names, you know you have work to do before shipping.
Automated Security Testing
You can (and should) automate these checks in your CI/CD pipeline. Here's a GitHub Actions workflow that catches common issues before they reach production:
Conclusion
If there's one thing I want you to take away from this article, it's that protecting your Flutter app's binary is an ongoing process, not a one-time checkbox. Attackers continuously evolve their techniques, and your defenses must evolve too.
Here are the key principles to remember:
Always obfuscate release builds with --obfuscate: it's free and significantly raises the bar for attackers
Never hardcode secrets: they will be extracted; it's only a matter of time
Implement defense in depth: no single protection is enough, but layers make attacks impractical
Remember: the goal isn't to make your app impossible to crack. That's not achievable—given unlimited time and resources, any client-side protection can be defeated. The real goal is twofold: make attacking your app expensive enough that attackers move on to easier targets, and detect attacks quickly so you can respond before significant damage occurs.
In the next article, we'll explore M8: Security Misconfiguration, where we'll examine how seemingly innocent configuration settings can create serious vulnerabilities in your Flutter apps.
Runner.app/
├── Info.plist
├── Runner # Main executable
├── Frameworks/
│ ├── Flutter.framework/ # Flutter engine
│ └── App.framework/ # YOUR DART CODE (AOT compiled)
│ └── App # The actual binary
├── flutter_assets/
└── _CodeSignature/
// Example: Hardcoded API key that attackers can find
class ApiConfig {
// BAD: This string is easily extractable from the binary
static const String apiKey = 'sk-prod-a1b2c3d4e5f6g7h8i9j0';
static const String secretToken = 'super_secret_token_123';
// BAD: Even "hidden" in variables, strings are visible
static String getApiKey() {
return 'sk-prod-' + 'a1b2c3d4' + 'e5f6g7h8i9j0';
}
}
# Extracting strings from a Flutter Android binary
unzip app-release.apk -d extracted/
strings extracted/lib/arm64-v8a/libapp.so | grep -i "key"
# reFlutter can dump your Dart code's structure
$ reflutter app-release.apk
# Output reveals function names, class structures, and more
[+] Dumping functions from libapp.so
[+] Found 2,847 Dart functions
[+] UserAuthService.login
[+] PaymentProcessor.processPayment
[+] LicenseChecker.verifyPremium
...
// Frida script to bypass a license check in a Flutter app
Java.perform(function () {
// Hook into the Flutter engine
var libapp = Module.findBaseAddress('libapp.so');
// Find and hook the license verification function
Interceptor.attach(libapp.add(0x1a2b3c), {
onEnter: function (args) {
console.log('License check called');
},
onLeave: function (retval) {
// Force return true (licensed)
retval.replace(1);
console.log('License check bypassed!');
},
});
});
// Original code in a premium trading app
class TradingFeatures {
bool isPremiumUser = false;
Future<void> checkLicense() async {
final response = await api.verifyLicense(userId);
isPremiumUser = response.isValid;
}
void executeAdvancedTrade(TradeOrder order) {
// BAD: Client-side only check
if (!isPremiumUser) {
showUpgradeDialog();
return;
}
// Execute premium trading algorithm
_runProprietaryAlgorithm(order);
}
}
# Build with obfuscation enabled
flutter build apk --release --obfuscate --split-debug-info=./debug-info
# For iOS
flutter build ios --release --obfuscate --split-debug-info=./debug-info
# For App Bundle (recommended for Play Store)
flutter build appbundle --release --obfuscate --split-debug-info=./debug-info
// In your app, you can still get readable crash reports
import 'package:firebase_crashlytics/firebase_crashlytics.dart';
void main() {
FlutterError.onError = (errorDetails) {
FirebaseCrashlytics.instance.recordFlutterFatalError(errorDetails);
};
runApp(MyApp());
}
# Upload symbols to Firebase Crashlytics
firebase crashlytics:symbols:upload --app=YOUR_APP_ID ./debug-info
# Flutter specific rules
-keep class io.flutter.app.** { *; }
-keep class io.flutter.plugin.** { *; }
-keep class io.flutter.util.** { *; }
-keep class io.flutter.view.** { *; }
-keep class io.flutter.** { *; }
-keep class io.flutter.plugins.** { *; }
# Keep your model classes if using reflection
-keep class com.yourapp.models.** { *; }
# Obfuscate everything else aggressively
-repackageclasses ''
-allowaccessmodification
// Define in your build command or CI/CD
// flutter build apk --dart-define=API_KEY=your_key_here
class SecureConfig {
// Loaded at compile time, not visible as plain string in source
static const String apiKey = String.fromEnvironment('API_KEY');
// Validate it exists
static void validateConfig() {
if (apiKey.isEmpty) {
throw StateError('API_KEY not configured. Build with --dart-define=API_KEY=xxx');
}
}
}
// android/app/src/main/kotlin/com/yourapp/SecretProvider.kt
package com.yourapp
import io.flutter.embedding.engine.plugins.FlutterPlugin
import io.flutter.plugin.common.MethodChannel
class SecretProvider : FlutterPlugin {
private lateinit var channel: MethodChannel
override fun onAttachedToEngine(binding: FlutterPlugin.FlutterPluginBinding) {
channel = MethodChannel(binding.binaryMessenger, "com.yourapp/secrets")
channel.setMethodCallHandler { call, result ->
when (call.method) {
"getApiKey" -> {
// Return from native code (still extractable but harder)
result.success(getSecretFromNative())
}
else -> result.notImplemented()
}
}
}
private external fun getSecretFromNative(): String
companion object {
init {
System.loadLibrary("secrets")
}
}
override fun onDetachedFromEngine(binding: FlutterPlugin.FlutterPluginBinding) {
channel.setMethodCallHandler(null)
}
}
#include <jni.h>
#include <string>
// XOR decryption for the API key (simple obfuscation)
extern "C" JNIEXPORT jstring JNICALL
Java_com_yourapp_SecretProvider_getSecretFromNative(JNIEnv *env, jobject /* this */) {
// Encoded key: each byte is the original char XOR'd with the mask 0x5A.
// Original plaintext "sk-prod" was encoded offline with the same mask.
unsigned char encoded[] = {0x29, 0x31, 0x77, 0x2A, 0x28, 0x35, 0x3E};
unsigned char mask = 0x5A;
std::string decoded;
for (unsigned char c : encoded) {
decoded += (char)(c ^ mask);
}
// decoded now equals "sk-prod"
return env->NewStringUTF(decoded.c_str());
}
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';
class NativeSecrets {
static const _channel = MethodChannel('com.yourapp/secrets');
static Future<String> getApiKey() async {
try {
final key = await _channel.invokeMethod<String>('getApiKey');
return key ?? '';
} catch (e) {
debugPrint('Failed to get API key: $e');
return '';
}
}
}
// Usage
void main() async {
final apiKey = await NativeSecrets.getApiKey();
// Use the key...
}
import 'package:firebase_remote_config/firebase_remote_config.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';
class SecureConfigService {
final _remoteConfig = FirebaseRemoteConfig.instance;
final _secureStorage = const FlutterSecureStorage();
Future<void> initialize() async {
await _remoteConfig.setConfigSettings(RemoteConfigSettings(
fetchTimeout: const Duration(minutes: 1),
minimumFetchInterval: const Duration(hours: 1),
));
await _remoteConfig.fetchAndActivate();
}
Future<String> getApiKey() async {
// First, try to get from secure storage (cached)
final cached = await _secureStorage.read(key: 'api_key');
if (cached != null) return cached;
// Fetch from remote config
final key = _remoteConfig.getString('api_key');
// Cache securely
await _secureStorage.write(key: 'api_key', value: key);
return key;
}
}
import 'dart:io';
import 'package:flutter/services.dart';
class IntegrityChecker {
static const _channel = MethodChannel('com.yourapp/integrity');
/// Verify the app's signing certificate
static Future<bool> verifySignature() async {
if (!Platform.isAndroid) return true;
try {
final isValid = await _channel.invokeMethod<bool>('verifySignature');
return isValid ?? false;
} catch (e) {
// If we can't verify, assume compromised
return false;
}
}
/// Get the current signature hash for comparison
static Future<String?> getSignatureHash() async {
if (!Platform.isAndroid) return null;
try {
return await _channel.invokeMethod<String>('getSignatureHash');
} catch (e) {
return null;
}
}
}
// android/app/src/main/kotlin/com/yourapp/IntegrityPlugin.kt
package com.yourapp
import android.content.pm.PackageManager
import android.os.Build
import io.flutter.embedding.engine.plugins.FlutterPlugin
import io.flutter.plugin.common.MethodChannel
import java.security.MessageDigest
class IntegrityPlugin : FlutterPlugin {
private lateinit var channel: MethodChannel
private lateinit var context: android.content.Context
// Your release signing certificate SHA-256 hash
private val VALID_SIGNATURES = listOf(
"A1:B2:C3:D4:E5:F6:G7:H8:I9:J0:K1:L2:M3:N4:O5:P6:Q7:R8:S9:T0:U1:V2:W3:X4"
)
override fun onAttachedToEngine(binding: FlutterPlugin.FlutterPluginBinding) {
context = binding.applicationContext
channel = MethodChannel(binding.binaryMessenger, "com.yourapp/integrity")
channel.setMethodCallHandler { call, result ->
when (call.method) {
"verifySignature" -> result.success(verifyAppSignature())
"getSignatureHash" -> result.success(getAppSignatureHash())
else -> result.notImplemented()
}
}
}
private fun verifyAppSignature(): Boolean {
val currentHash = getAppSignatureHash() ?: return false
return VALID_SIGNATURES.contains(currentHash)
}
private fun getAppSignatureHash(): String? {
return try {
val packageInfo = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
context.packageManager.getPackageInfo(
context.packageName,
PackageManager.GET_SIGNING_CERTIFICATES
)
} else {
@Suppress("DEPRECATION")
context.packageManager.getPackageInfo(
context.packageName,
PackageManager.GET_SIGNATURES
)
}
val signatures = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
packageInfo.signingInfo?.apkContentsSigners ?: return null
} else {
@Suppress("DEPRECATION")
packageInfo.signatures
}
val signature = signatures.firstOrNull() ?: return null
val md = MessageDigest.getInstance("SHA-256")
val digest = md.digest(signature.toByteArray())
digest.joinToString(":") { "%02X".format(it) }
} catch (e: Exception) {
null
}
}
override fun onDetachedFromEngine(binding: FlutterPlugin.FlutterPluginBinding) {
channel.setMethodCallHandler(null)
}
}
import 'dart:io';
import 'dart:convert';
import 'package:crypto/crypto.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';
class BinaryIntegrityChecker {
static const _channel = MethodChannel('com.yourapp/integrity');
// Expected hash of your release binary (generate during build)
static const String _expectedLibappHash = 'abc123...'; // SHA-256
/// Verifies that libapp.so has not been tampered with.
/// The native library path must be resolved from native code
/// because Dart's path_provider cannot locate it reliably.
static Future<bool> verifyBinaryIntegrity() async {
if (!Platform.isAndroid) return true;
try {
// Delegate to native code, which uses
// context.applicationInfo.nativeLibraryDir to locate libapp.so
final hash = await _channel.invokeMethod<String>('getLibappHash');
if (hash == null) return false;
return hash == _expectedLibappHash;
} catch (e) {
debugPrint('Integrity check failed: $e');
return false;
}
}
}
import 'package:flutter/foundation.dart';
enum IntegrityStatus {
valid,
signatureMismatch,
debuggerAttached,
emulatorDetected,
rootDetected,
tamperingDetected,
unknown,
}
class AppIntegrityService {
static final AppIntegrityService _instance = AppIntegrityService._internal();
factory AppIntegrityService() => _instance;
AppIntegrityService._internal();
final List<IntegrityStatus> _violations = [];
List<IntegrityStatus> get violations => List.unmodifiable(_violations);
bool get isCompromised => _violations.isNotEmpty;
Future<void> performFullCheck() async {
_violations.clear();
// Check signature
if (!await IntegrityChecker.verifySignature()) {
_violations.add(IntegrityStatus.signatureMismatch);
}
// Check for debugger (in release mode)
if (kReleaseMode && await _isDebuggerAttached()) {
_violations.add(IntegrityStatus.debuggerAttached);
}
// Check for emulator
if (await _isRunningOnEmulator()) {
_violations.add(IntegrityStatus.emulatorDetected);
}
// Check for root/jailbreak
if (await _isDeviceRooted()) {
_violations.add(IntegrityStatus.rootDetected);
}
}
Future<bool> _isDebuggerAttached() async {
// Platform-specific implementation
return false; // Placeholder
}
Future<bool> _isRunningOnEmulator() async {
// Check various emulator indicators
return false; // Placeholder
}
Future<bool> _isDeviceRooted() async {
// Check for root indicators
return false; // Placeholder
}
void handleViolations() {
if (!isCompromised) return;
for (final violation in _violations) {
switch (violation) {
case IntegrityStatus.signatureMismatch:
// Critical: App has been re-signed
_handleCriticalViolation('Application signature mismatch detected');
break;
case IntegrityStatus.debuggerAttached:
// High: Someone is debugging in production
_handleHighViolation('Debugger detected');
break;
case IntegrityStatus.rootDetected:
// Medium: Running on rooted device
_handleMediumViolation('Rooted device detected');
break;
default:
break;
}
}
}
void _handleCriticalViolation(String message) {
// Log to security backend
// Clear sensitive data
// Show warning or exit app
debugPrint('CRITICAL SECURITY VIOLATION: $message');
}
void _handleHighViolation(String message) {
debugPrint('HIGH SECURITY VIOLATION: $message');
}
void _handleMediumViolation(String message) {
debugPrint('MEDIUM SECURITY VIOLATION: $message');
}
}
private fun isDebuggerAttached(): Boolean {
return android.os.Debug.isDebuggerConnected() ||
android.os.Debug.waitingForDebugger()
}
// More advanced: Check for Frida
private fun isFridaDetected(): Boolean {
// Check for Frida server port
try {
val socket = java.net.Socket("127.0.0.1", 27042)
socket.close()
return true
} catch (e: Exception) {
// Port not open, good
}
// Check for Frida libraries in memory
try {
val maps = File("/proc/self/maps").readText()
if (maps.contains("frida") || maps.contains("gadget")) {
return true
}
} catch (e: Exception) {
// Couldn't read maps
}
return false
}
import 'package:flutter/foundation.dart';
import 'package:freerasp/freerasp.dart';
class SecurityService {
late Talsec _talsec;
Future<void> initialize() async {
// Configure Talsec
final config = TalsecConfig(
androidConfig: AndroidConfig(
packageName: 'com.yourapp.app',
signingCertHashes: [
// Your release signing certificate SHA-256
'AKoRuyLMM91E7lX/Zqp3u4jMmd0A7hH/Iqozu0TMVd0='
],
supportedStores: ['com.android.vending'], // Google Play only
malwareConfig: MalwareConfig(
blacklistedPackageNames: [
'com.known.malware.app',
],
suspiciousPermissions: [
['android.permission.CAMERA', 'android.permission.RECORD_AUDIO'],
],
),
),
iosConfig: IOSConfig(
bundleIds: ['com.yourapp.app'],
teamId: 'YOUR_TEAM_ID',
),
watcherMail: '[email protected]',
isProd: true,
);
// Set up threat callbacks
final callback = ThreatCallback(
onAppIntegrity: () => _handleThreat('App integrity violation'),
onDebug: () => _handleThreat('Debugger detected'),
onDeviceBinding: () => _handleThreat('Device binding violation'),
onDeviceID: () => _handleThreat('Device ID manipulation'),
onHooks: () => _handleThreat('Hooking framework detected'),
onPasscode: () => _handleThreat('No device passcode'),
onPrivilegedAccess: () => _handleThreat('Root/jailbreak detected'),
onSecureHardwareNotAvailable: () => _handleThreat('No secure hardware'),
onSimulator: () => _handleThreat('Running on simulator'),
onUnofficialStore: () => _handleThreat('Installed from unofficial store'),
);
// Start protection
_talsec = Talsec.instance;
await _talsec.start(config, callback: callback);
}
void _handleThreat(String threat) {
debugPrint('SECURITY THREAT: $threat');
// Options:
// 1. Log to analytics
// 2. Show warning to user
// 3. Disable sensitive features
// 4. Force logout
// 5. Exit app
// Example: Disable sensitive features
AppState.instance.setSensitiveFeaturesEnabled(false);
// Example: Report to backend
SecurityReporter.reportThreat(threat);
}
}
// Client-side code
class LicenseService {
Future<bool> checkPremiumAccess() async {
// Local check (can be bypassed)
final localLicense = await _secureStorage.read(key: 'license');
if (localLicense == null) return false;
// CRITICAL: Always verify with server for sensitive operations
try {
final response = await _api.verifyLicense(
licenseKey: localLicense,
deviceId: await _getDeviceId(),
appSignature: await IntegrityChecker.getSignatureHash(),
);
if (!response.isValid) {
// Server says license is invalid - trust the server
await _secureStorage.delete(key: 'license');
return false;
}
return true;
} catch (e) {
// Network error - fall back to cached state with caution
return false; // Or implement grace period
}
}
}
import 'dart:convert';
import 'dart:typed_data';
import 'package:flutter/services.dart';
import 'package:pointycastle/export.dart';
class ModelProtectionService {
/// Load encrypted model
Future<Uint8List> loadProtectedModel(String modelName) async {
// Load encrypted model from assets
final encryptedData = await rootBundle.load('assets/models/$modelName.enc');
// Get decryption key from secure source
final key = await NativeSecrets.getModelDecryptionKey();
// Decrypt in memory
final decrypted = _decryptModel(encryptedData.buffer.asUint8List(), key);
// Clear key from memory
// (In Dart, we can't guarantee this, but minimize exposure)
return decrypted;
}
Uint8List _decryptModel(Uint8List encrypted, String key) {
// Use AES-256-GCM for authenticated encryption.
// The first 12 bytes are the IV; the remainder is ciphertext + auth tag.
final keyBytes = base64Decode(key);
final iv = encrypted.sublist(0, 12);
final ciphertext = encrypted.sublist(12);
// Decrypt using the pointycastle package (add to pubspec.yaml):
// pointycastle: ^3.7.4
// See https://pub.dev/packages/pointycastle for full AES-GCM example.
final cipher = GCMBlockCipher(AESEngine())
..init(false, AEADParameters(KeyParameter(keyBytes), 128, iv, Uint8List(0)));
return cipher.process(ciphertext);
}
}
// Approach 1: All logic on client (vulnerable)
class PricingCalculator {
double calculatePrice(Product product, User user) {
double basePrice = product.price;
// Complex pricing logic that could be reverse engineered
if (user.isPremium) {
basePrice *= 0.85; // 15% discount
}
if (user.loyaltyYears > 5) {
basePrice *= 0.90; // Additional 10% discount
}
// ... more business rules
return basePrice;
}
}
// Approach 2: Logic on server (secure)
class PricingService {
final ApiClient _api;
Future<double> getPrice(String productId) async {
// Server calculates based on user context (from auth token)
final response = await _api.get('/pricing/$productId');
return response.data['price'];
}
}
# 1. Check for exposed strings
strings lib/arm64-v8a/libapp.so | grep -E "api|key|secret|token|password"
# 2. Use apktool to unpack and inspect
apktool d app-release.apk -o unpacked/
grep -r "apiKey\|secretKey" unpacked/
# 3. Use jadx to check for exposed Java/Kotlin code
jadx -d output/ app-release.apk
# 4. Use Ghidra for deep binary analysis
# (GUI tool - import libapp.so and analyze)
# 5. Test with Frida
frida -U -f com.yourapp.app -l test_hooks.js
# .github/workflows/security-check.yml
name: Security Scan
on: [push, pull_request]
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Flutter
        uses: subosito/flutter-action@v2
        with:
          channel: stable
      - name: Build Release APK
        run: |
          flutter build apk --release --obfuscate --split-debug-info=./debug-info
      - name: Run MobSF Scan
        run: |
          # Install and run MobSF CLI for static analysis
          # See https://github.com/MobSF/Mobile-Security-Framework-MobSF
          pip install mobsf
          mobsf --scan build/app/outputs/flutter-apk/app-release.apk --type apk
      - name: Check for hardcoded secrets
        run: |
          # Unpack APK
          unzip -q build/app/outputs/flutter-apk/app-release.apk -d ./unpacked
          # Search for potential secrets
          if strings ./unpacked/lib/arm64-v8a/libapp.so | grep -iE "api.?key|secret|password|token" | grep -v "keystore"; then
            echo "⚠️ Potential hardcoded secrets found!"
            exit 1
          fi
      - name: Verify obfuscation
        run: |
          # Check that class names are obfuscated
          if strings ./unpacked/lib/arm64-v8a/libapp.so | grep -E "UserService|PaymentProcessor|AuthManager"; then
            echo "⚠️ Obfuscation may not be effective!"
            exit 1
          fi