OWASP Top 10 For Flutter – M6: Inadequate Privacy Controls in Flutter & Dart
Welcome back to our deep dive into the OWASP Mobile Top 10 for Flutter. In earlier parts, we tackled M1 through M5, each a critical piece of the mobile security puzzle.
In this sixth article, we focus on M6: Inadequate Privacy Controls, a risk that lurks not in broken code or cracked crypto, but in how we collect, use, and protect user data.
This article isn’t just about avoiding legal trouble; it’s about writing Flutter apps that respect user privacy by design. We’ll start by defining what OWASP means by “Inadequate Privacy Controls,” then dive into practical Dart/Flutter scenarios where privacy can slip: from apps over-collecting personal info, to leaky logs, unchecked third-party SDKs, and more.
Let's get started.
Privacy in mobile apps revolves around protecting PII (Personally Identifiable Information): data that can identify an individual. This includes sensitive information like:
Names
Addresses
Credit card details
Email addresses
IP addresses
Health, religious, or political information
Inadequate privacy controls occur when apps fail to properly collect, store, transmit, or manage this data, making it vulnerable to unauthorized access, misuse, or disclosure. Attackers can exploit these weaknesses to steal identities, misuse payment data, or even blackmail users, leading to breaches of confidentiality, integrity, or availability.
The consequences of inadequate privacy controls generally fall into two areas:
Technical Impact: While direct system damage may be minimal unless PII includes authentication data, manipulated data can disrupt backend systems or render apps unusable.
Business Impact: Privacy breaches can lead to:
Financial Damage: Lawsuits and penalties can be costly.
Reputational Harm: Loss of user trust can drive users away.
Further Attacks: Stolen PII can fuel social engineering or other malicious activities.
For Flutter developers, prioritizing privacy is not just a technical necessity but a legal and ethical obligation.
Now that we understand the implications of inadequate privacy controls, let’s examine the common scenarios in which Flutter apps can encounter M6 issues.
One of the most pervasive privacy issues is simply collecting too much personal data or more details than necessary. Every extra field or sensor reading is a liability if you don’t need it. Common signs of over-collection in Flutter apps include asking for broad permissions or data your app’s core functionality doesn’t require, or sampling data more frequently than needed.
For example, suppose we have a fitness app that wants to track runs. A bad practice would be to request continuous fine-grained location updates even when the app is in the background, and to collect additional identifiers “just in case”:
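Here's a sketch of what that over-collection might look like, assuming the geolocator package (the upload helper and the extra identifiers are illustrative):

```dart
// BAD (over-collection): continuous, maximum-accuracy tracking plus extra identifiers.
import 'package:geolocator/geolocator.dart';

void startRunTracking(String userEmail, String deviceId) {
  // Highest possible accuracy and no distance filter: an update for every small movement.
  const settings = LocationSettings(
    accuracy: LocationAccuracy.bestForNavigation,
    distanceFilter: 0,
  );

  Geolocator.getPositionStream(locationSettings: settings).listen((position) {
    // Bundling static PII "just in case" with every single data point.
    uploadTrackPoint({
      'lat': position.latitude,
      'lng': position.longitude,
      'timestamp': DateTime.now().toIso8601String(),
      'email': userEmail,   // PII the backend does not need per point
      'deviceId': deviceId, // another identifier collected "just in case"
    });
  });
}

// Hypothetical upload helper; stands in for your API client.
void uploadTrackPoint(Map<String, dynamic> point) {
  // ... send to backend ...
}
```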
A better approach is to apply data minimization and purpose limitation: only collect what you need, when you need it, and at the lowest fidelity that still serves the purpose. For instance, if the app only requires the total distance of a run or an approximate route, it could request location less frequently or with reduced accuracy, and it certainly shouldn’t include static personal identifiers with every data point:
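And a sketch of the minimized version (again assuming geolocator; locationSharingConsented is a hypothetical consent flag your app manages):

```dart
// BETTER (data minimization): medium accuracy, 50 m distance filter, no PII attached.
import 'package:geolocator/geolocator.dart';

bool locationSharingConsented = false; // set to true only after explicit user consent

void startRunTracking() {
  if (!locationSharingConsented) return; // respect the user's choice

  const settings = LocationSettings(
    accuracy: LocationAccuracy.medium, // coarse enough for distance/route estimates
    distanceFilter: 50,                // only emit updates after ~50 m of movement
  );

  Geolocator.getPositionStream(locationSettings: settings).listen((position) {
    // No email, no device ID: the backend resolves the user from the session or token.
    uploadTrackPoint({
      'lat': position.latitude,
      'lng': position.longitude,
      'timestamp': DateTime.now().toIso8601String(),
    });
  });
}

// Hypothetical upload helper; stands in for your API client.
void uploadTrackPoint(Map<String, dynamic> point) {
  // ... send to backend over HTTPS ...
}
```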
Here we’ve reduced accuracy to “medium” and added a distanceFilter so we only get updates when the user moves 50+ meters. We also avoid attaching the user’s email or device ID to every location update – if the backend needs to tie data to a user, it can often use an opaque user ID or token server-side rather than the app bundling PII in each request. Finally, we check a consent flag (locationSharingConsented) to ensure the user allowed sharing this data (more on consent later).
Before collecting any piece of PII, it helps to ask yourself (or your team) a few data-minimization questions:
Is each data element truly necessary for the feature to work (e.g., do we really need the user’s birthdate for a fitness app login)?
Can we use a less sensitive or less precise form of the data (e.g., using coarse location or zip code instead of exact GPS coordinates, or anonymizing an ID by hashing it)?
Can we collect data less often or discard it sooner (e.g., update location every few minutes instead of every second, or delete old records after 30 days)?
Can we store or transmit an aggregate or pseudonymous value instead (e.g., store that a user is in age group “20-29” instead of storing their full DOB)?
Did the user consent to this data collection and are they aware of how it will be used?
Closely related to over-collection is the failure to enforce purpose limitation, using data in unexpected ways or without permission. In practice, this often means not honoring users' privacy choices. If your app has a toggle for “Send Anonymous Usage Data” or a user declines a permission, those preferences must be respected in code. Failing to do so isn’t just bad UX; it’s a privacy violation.
Consider an example of an e-commerce Flutter app that includes an analytics SDK. A user, during onboarding, unchecks a box that says “Share usage analytics.” However, the app’s code neglects to disable analytics collection:
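A sketch of the problem, assuming firebase_analytics (shareAnalytics represents the onboarding checkbox value):

```dart
// BAD: the user opted out, but analytics keeps collecting, and events carry PII.
import 'package:firebase_analytics/firebase_analytics.dart';

Future<void> completeOnboarding({
  required bool shareAnalytics,
  required String email,
}) async {
  // The shareAnalytics flag from the onboarding checkbox is simply ignored here.
  await FirebaseAnalytics.instance.logEvent(
    name: 'onboarding_complete',
    parameters: {
      'email': email, // PII should never be sent as an analytics parameter
    },
  );
}
```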
Now, let’s correct it. We’ll respect the user’s preference and scrub PII from analytics events. We can disable Firebase Analytics collection entirely until consent is given, and exclude personal details from events:
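A minimal sketch of the fix, using the same assumptions:

```dart
// BETTER: honor the opt-out and keep PII out of analytics events.
import 'package:firebase_analytics/firebase_analytics.dart';

Future<void> completeOnboarding({required bool shareAnalytics}) async {
  // Disable collection entirely unless the user consented.
  await FirebaseAnalytics.instance.setAnalyticsCollectionEnabled(shareAnalytics);

  if (shareAnalytics) {
    await FirebaseAnalytics.instance.logEvent(
      name: 'onboarding_complete',
      // Only non-identifying, aggregate-friendly parameters.
      parameters: {'plan': 'free'},
    );
  }
}
```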
Beyond analytics, purpose limitation means using data only for what you told the user. If they gave your app access to their contacts to find friends in the app, don’t use those contacts for marketing later. In Flutter, much of this comes down to developer discipline: keep track of why each permission or piece of data was requested, document it, and audit your code to ensure you’re not repurposing data elsewhere.
There are different ways that PII can be leaked in a Flutter app. Let's start with the most common one: Logging.
PII in Logging and Error Messages
Logging is a double-edged sword. We developers rely on logs, printouts, and crash reports to diagnose issues. But if we’re not careful, those same logs can become a privacy nightmare. Leaky logging is such a common pitfall that OWASP explicitly calls out scenarios where apps inadvertently include PII in logs or error messages. Those logs might end up on a logging server, in someone’s console output, or exposed on a rooted device – all places an attacker or even a curious user could snoop.
A classic example is printing out user data for debugging and forgetting to remove or guard it. Consider this Flutter code that logs a user’s sign-in information:
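A sketch of the kind of code in question (AuthApi is a hypothetical stand-in for your backend client):

```dart
// Hypothetical stand-in for your real authentication client.
class AuthApi {
  static Future<String> login(String email, String password) async => 'fake-token';
}

// BAD: PII and credentials end up in the device log.
Future<void> signIn(String email, String password) async {
  print('Attempting login with email: $email and password: $password'); // leaks credentials

  try {
    final token = await AuthApi.login(email, password);
    print('Login successful, token: $token'); // tokens are sensitive too
  } catch (e) {
    print('Login failed for $email: $e'); // leaks the email again, plus raw error details
  }
}
```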
Never log sensitive info in production, and sanitize error messages. In Flutter, you have a few strategies:
Prefer debugPrint over print for debug-only output. Even better, use the dart:developer log() function with appropriate log levels. Unlike print, log() can be configured to be a no-op in release mode; by default it won’t print to the console in release builds, preventing accidental info leakage in production.
Scrub exception messages if they might contain PII. For instance, if you catch an error from a failing API call that includes the request URL and query parameters, consider removing query parameters that might have PII (we’ll talk about avoiding PII in URLs next).
Let’s refactor the above login code with these practices:
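Here's one way the refactor might look, as a sketch (AuthApi and saveToken are hypothetical placeholders):

```dart
import 'dart:developer' as developer;
import 'package:flutter/foundation.dart';

// Hypothetical stand-in for your real authentication client.
class AuthApi {
  static Future<String> login(String email, String password) async => 'fake-token';
}

Future<void> signIn(String email, String password) async {
  // Debug-only, and no credentials: just enough context to trace the flow.
  if (kDebugMode) {
    developer.log('Attempting login', name: 'auth');
  }

  try {
    final token = await AuthApi.login(email, password);
    // Never log the token itself; an opaque marker is enough.
    developer.log('Login successful (token received)', name: 'auth');
    await saveToken(token);
  } catch (e) {
    // Generic message; no email, and no raw exception payload in release builds.
    developer.log('Login failed', name: 'auth', level: 1000, error: kDebugMode ? e : null);
  }
}

// Hypothetical secure persistence step (see the secure-storage section below).
Future<void> saveToken(String token) async {}
```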
In a real app, you might integrate with a logging backend or Crashlytics; if you do, ensure that what you send doesn’t contain secrets or personal data. Many crash-reporting SDKs let you set user identifiers or attach metadata; if you do, use an opaque ID or a hash instead of plain emails or names. Flutter also exposes a global debugPrint callback (a DebugPrintCallback in package:flutter/foundation.dart) that you can override to filter or redact everything your app logs. Here is an example:
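For example, you could install a redacting callback early in main() (the redaction pattern is only illustrative):

```dart
import 'package:flutter/foundation.dart';

void main() {
  // Replace the default debugPrint with a filtering/redacting version.
  final DebugPrintCallback originalDebugPrint = debugPrint;
  debugPrint = (String? message, {int? wrapWidth}) {
    if (kReleaseMode) return; // drop everything in release builds
    // Illustrative redaction: mask anything that looks like an email address.
    final redacted = message?.replaceAll(
      RegExp(r'[\w.+-]+@[\w-]+\.[\w.]+'),
      '<redacted-email>',
    );
    originalDebugPrint(redacted, wrapWidth: wrapWidth);
  };

  // runApp(const MyApp());
}
```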
Sanitizing On-Screen Error Messages
Displaying raw server error messages to users can inadvertently expose sensitive internal information such as database schema details, internal IP addresses, stack traces, API keys, or even portions of personally identifiable information (PII) if the error message includes user-specific data. This is a direct information-leakage vulnerability.
Always intercept, sanitize, and customize error messages before presenting them to the user. Generic messages are safer and, in most cases, provide a better user experience.
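A sketch of what this can look like with the http package (the endpoint and error handling are illustrative):

```dart
import 'package:flutter/foundation.dart';
import 'package:http/http.dart' as http;

Future<String> _fetchDataWithPotentialError() async {
  try {
    final response =
        await http.get(Uri.parse('https://api.example.com/user/profile')); // illustrative endpoint
    if (response.statusCode != 200) {
      // The raw body may contain internal details (queries, IDs, stack traces);
      // log only what developers need, never show the body to the user.
      debugPrint('Profile request failed: ${response.statusCode}');
      return 'Something went wrong. Please try again later.'; // user-facing, generic
    }
    return response.body;
  } catch (e) {
    debugPrint('Profile request threw: $e');
    return 'Something went wrong. Please try again later.';
  }
}
```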
PII in URL Parameters (OWASP Guidance)
Attaching sensitive data (like email addresses, session tokens, or user IDs) directly to URL query strings (e.g., GET /api/user?email=john.doe@example.com) is a significant privacy and security risk. Even if this sounds unrelated to Flutter development, it's still relevant, and if you cannot avoid it, you should inform your team about it. This data can end up in:
Server Access Logs: Web servers typically log the complete URI of every request, including query parameters.
Browser History: If accessed via a webview, the URL with sensitive data could be stored in the device's browser history.
Analytics Referrers: If a user navigates from your app to an external site, the full referrer URL (including query parameters) might be sent to the external site's analytics.
Shared Links: If a user copies and shares a link containing PII from a webview, the PII is leaked.
Proxies/Firewalls: Intermediary devices may log or inspect these parameters.
OWASP explicitly states: "Sensitive information should never be transmitted as query parameters." To fix this, transmit sensitive data consistently over HTTPS in the request body (for POST, PUT, and PATCH requests) or in request headers (for authentication tokens, API keys, etc.). Here is a bad example:
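A sketch of the anti-pattern, assuming the http package (the endpoint is illustrative):

```dart
import 'package:http/http.dart' as http;

// BAD: PII travels in the query string and will show up in server logs,
// proxies, referrers, and browser history.
Future<void> lookupUserBad(String email) async {
  final uri = Uri.parse('https://api.example.com/api/user?email=$email');
  await http.get(uri);
}
```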
But by changing that to the following code, we can ensure we follow best practices:
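And a corresponding sketch of the fix (again with an illustrative endpoint):

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// BETTER: HTTPS plus sensitive data in the request body and headers, not the URL.
Future<void> lookupUserGood(String email, String authToken) async {
  final uri = Uri.parse('https://api.example.com/api/user/lookup');
  await http.post(
    uri,
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer $authToken', // token in a header, never a query param
    },
    body: jsonEncode({'email': email}),
  );
}
```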
Clipboard Leaks
The device's clipboard is a shared resource. Any data your app copies to the clipboard can be read by any other app running on the device that has permission to access the clipboard. This is a significant privacy concern, especially for sensitive information like passwords, OTP codes, credit card numbers, or personal notes. Recent Android and iOS versions have introduced warnings to users when an app reads the clipboard, increasing user awareness and concern about this behavior.
The best practices in this regard are usually:
Avoid Automatic Clipboard Usage: If possible, avoid automatically copying sensitive data to the clipboard.
User Consent/Action: If clipboard copy is necessary (e.g., "Copy OTP"), make it an explicit user action (e.g., a button tap).
Clear Clipboard: For extremely sensitive, short-lived data like OTPs, consider clearing the clipboard programmatically after a short, reasonable interval (e.g., 60 seconds). This prevents the data from lingering indefinitely.
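Here's a sketch of these practices; _checkClipboardContent is included purely for demonstration, and the 60-second delay is an illustrative choice:

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

// Copy an OTP only on an explicit user action (e.g., a "Copy OTP" button tap).
Future<void> copyOtpToClipboard(String otp) async {
  await Clipboard.setData(ClipboardData(text: otp));

  // Clear the clipboard after a short interval so the OTP doesn't linger.
  Future.delayed(const Duration(seconds: 60), () async {
    final current = await Clipboard.getData(Clipboard.kTextPlain);
    if (current?.text == otp) {
      await Clipboard.setData(const ClipboardData(text: ''));
    }
  });
}

// Demonstration only: shows what any app could read from the shared clipboard.
Future<void> _checkClipboardContent() async {
  final data = await Clipboard.getData(Clipboard.kTextPlain);
  debugPrint('Clipboard currently contains: ${data?.text}');
}
```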
The _checkClipboardContent function in the example is for demonstration purposes only, to show what could be read. In a real app, you would not display the raw clipboard content to the user or log it unless it was part of a particular, secure feature.
Static Analysis and Mobile Security Scanners
Even with careful coding, potential data leaks or security misconfigurations can be easy to overlook, especially in larger projects or when multiple developers are involved.
Static Analysis (Linters)
Flutter projects often use lint rules. You can enable the avoid_print lint in your analysis_options.yaml file.
Beyond linters, mobile security scanners that analyze your compiled app binaries (APK for Android, IPA for iOS) can flag issues such as:
Hardcoded secrets: API keys, passwords, tokens.
Insecure data storage: Unencrypted sensitive data on the device.
Usage of risky APIs: APIs known for privacy concerns (like unencrypted network calls).
Sensitive information in logs: Although harder to detect dynamically, some scanners might flag excessive logging or specific patterns.
Enabled debug flags: Identifying if debug features were left enabled in production.
There are several mobile security scanners available, both open-source and commercial. While these tools are beyond the scope of a simple Flutter project setup, integrating them into your CI/CD pipeline can significantly enhance your app's security posture and help catch issues that static analysis alone might miss.
If your Flutter app stores personal data on the device, you must treat that data as potentially accessible to attackers. Mobile devices can fall into attackers’ hands physically, or the user might have a rooted/jailbroken phone, or malware might be on the device.
Inadequate privacy control in storage means storing PII in plaintext on disk, failing to encrypt sensitive info, or not using the platform’s secure storage facilities. It can also mean not controlling whether that data gets backed up to the cloud. Let’s illustrate a bad practice: storing a user’s info (say, their profile details or auth token) in plain SharedPreferences or a file:
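A sketch of this bad practice, assuming the shared_preferences package (field names are illustrative):

```dart
import 'package:shared_preferences/shared_preferences.dart';

// BAD: PII and an auth token stored in plaintext on disk.
Future<void> cacheUserProfile(String email, String fullName, String authToken) async {
  final prefs = await SharedPreferences.getInstance();
  await prefs.setString('email', email);
  await prefs.setString('full_name', fullName);
  await prefs.setString('auth_token', authToken); // ends up unencrypted in an XML/plist file
}
```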
By default, data stored via shared_preferences on Android ends up in an XML file in the app’s internal storage, which is sandboxed per app. iOS stores it in NSUserDefaults (also within the app sandbox). While the sandbox offers some isolation, it’s not foolproof; an attacker can read those files on a rooted Android device. Those files might also be uploaded to cloud storage if the device is backed up (Android’s auto-backup or iCloud backup on iOS).
The better practice is to use secure storage for sensitive data and explicitly limit what gets backed up. Flutter provides the flutter_secure_storage package, which under the hood uses the iOS Keychain and Android Keystore to store data encrypted at rest.
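For example, a minimal sketch of storing an auth token with flutter_secure_storage:

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

const _storage = FlutterSecureStorage();

// BETTER: the token lives in the Keychain/Keystore, encrypted at rest.
Future<void> saveAuthToken(String token) async {
  await _storage.write(key: 'auth_token', value: token);
}

Future<String?> readAuthToken() => _storage.read(key: 'auth_token');

Future<void> clearAuthToken() => _storage.delete(key: 'auth_token');
```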
Encrypting Larger Data
You cannot rely solely on secure enclaves for larger datasets containing PII (e.g., a cached user profile, a collection of sensitive notes, or medical records) due to their size limitations. Instead, you must implement your own encryption:
Encryption Algorithms: Use strong, industry-standard encryption algorithms like AES (Advanced Encryption Standard).
Key Management: The encryption key itself needs to be securely stored. This is where flutter_secure_storage comes back into play: you can generate a random AES key and store that key in flutter_secure_storage, then use it to encrypt/decrypt your larger data stored in regular files.
Packages: While you can use Dart's PointyCastle for fine-grained control, packages like encrypt (a more common and user-friendly wrapper) simplify the process.
Here is a conceptual example of encrypting and decrypting data:
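Here's a conceptual sketch, assuming the encrypt and flutter_secure_storage packages (key handling is simplified and not a hardened implementation):

```dart
import 'dart:convert';
import 'package:encrypt/encrypt.dart' as enc;
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

const _secureStorage = FlutterSecureStorage();

// Load the AES key from secure storage, generating one on first use.
Future<enc.Key> _getOrCreateKey() async {
  final stored = await _secureStorage.read(key: 'aes_key');
  if (stored != null) return enc.Key(base64Decode(stored));
  final key = enc.Key.fromSecureRandom(32); // 256-bit key
  await _secureStorage.write(key: 'aes_key', value: base64Encode(key.bytes));
  return key;
}

Future<String> encryptProfileJson(String plainJson) async {
  final key = await _getOrCreateKey();
  final iv = enc.IV.fromSecureRandom(16);
  final encrypter = enc.Encrypter(enc.AES(key));
  final encrypted = encrypter.encrypt(plainJson, iv: iv);
  // Store the IV alongside the ciphertext; it is not secret.
  return '${iv.base64}:${encrypted.base64}';
}

Future<String> decryptProfileJson(String payload) async {
  final key = await _getOrCreateKey();
  final parts = payload.split(':');
  final iv = enc.IV.fromBase64(parts[0]);
  final encrypter = enc.Encrypter(enc.AES(key));
  return encrypter.decrypt64(parts[1], iv: iv);
}
```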
The goal is never to leave human-readable personal information (PII) or other sensitive data lying around unencrypted on the device's file system or in SharedPreferences.
Backup Concerns (Android)
Android's default auto-backup feature can automatically back up application data to Google Drive for devices that use this service. This includes SharedPreferences and files stored in app-specific directories. While convenient for users, it poses a significant privacy risk if sensitive data is unintentionally backed up.
As OWASP M6 Guidance indicates clearly: Explicitly configure what data is included in backups to avoid surprises.
There are a few approaches you might follow:
a. Disabling Auto-Backup Entirely (android:allowBackup="false")
The simplest way to prevent sensitive data from being backed up is to disable auto-backup for your entire application. Edit android/app/src/main/AndroidManifest.xml: locate the <application> tag and add the android:allowBackup="false" attribute:
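A minimal sketch of the relevant part of the manifest (label and icon values are illustrative):

```xml
<application
    android:label="my_app"
    android:icon="@mipmap/ic_launcher"
    android:allowBackup="false">
    <!-- ... activities, meta-data, etc. ... -->
</application>
```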
This is the most straightforward approach for apps handling sensitive data where you don't want any data backed up by Google Drive's auto-backup.
b. Selective Backup (Opting out specific files/directories)
If you need some data to be backed up but want to exclude sensitive files, you can:
Android's NoBackup Directory: Android provides a special directory, accessible via Context.getNoBackupFilesDir(), whose contents are not backed up. Flutter does not directly expose this via path_provider; you would need to use platform channels to access this directory from Dart and then save your files there.
Custom Backup Rules: For more granular control, you can provide a custom android:fullBackupContent="@xml/backup_rules" attribute in your AndroidManifest.xml and define an XML file (res/xml/backup_rules.xml) that specifies which directories or files to include/exclude.
This is more complex and generally only needed if you have a mix of sensitive and non-sensitive data that should be backed up. For most security-conscious apps, android:allowBackup="false" is sufficient.
c. The android:hasFragileUserData Flag
This manifest attribute (if set to true) tells Android that your app contains sensitive user data. If the user uninstalls the app, the system will offer them the choice to retain the app's data, which can then be restored if the app is reinstalled.
Counterintuitive: You might think "fragile" data would be auto-deleted, but the opposite is true: it gives the user the choice to keep data.
Privacy Implications: For sensitive apps, you generally do not want data hanging around after uninstall. If hasFragileUserData is true and a malicious app with the same package name is later installed (e.g., after the user uninstalls your app), it could potentially claim that leftover data.
Recommendation: For privacy, explicitly set this flag based on your intent.
android:hasFragileUserData="false" (or omit it, as false is often the default): This tells Android that the app's data should be removed when it is uninstalled. This is generally the preferred setting for apps handling sensitive information.
android:hasFragileUserData="true": Only set this if you have a strong, user-centric reason to allow users to retain data on uninstall (e.g., large game data, extensive user-created content). Ensure users are informed.
Edit android/app/src/main/AndroidManifest.xml:
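For example (other attributes are illustrative):

```xml
<application
    android:label="my_app"
    android:icon="@mipmap/ic_launcher"
    android:allowBackup="false"
    android:hasFragileUserData="false">
    <!-- ... -->
</application>
```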
In general, for sensitive apps, opt to clean up data on uninstall by either setting android:hasFragileUserData="false" or by relying on the default false behavior if you are also disabling allowBackup.
Backup Concerns (iOS)
On iOS, files stored in your application's Documents directory are backed up to iCloud by default. This is similar to Android's auto-backup and poses a privacy risk for sensitive data. Essentially, the best practices are:
Exclude from Backup: For sensitive files, mark them with the NSURLIsExcludedFromBackupKey attribute. This requires platform-specific Objective-C or Swift code interacting with the iOS file system APIs.
Temporary Directory: Store truly temporary files that don't need to persist across launches or backups in NSTemporaryDirectory(). In Flutter, getTemporaryDirectory() from path_provider maps to this.
Here is a conceptual example (iOS platform-specific code via Platform Channels):
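Here's a sketch of the Dart side of such a channel (the channel and method names are hypothetical; the native Swift/Objective-C handler would set NSURLIsExcludedFromBackupKey on the file's URL):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

// Hypothetical channel name; the iOS-side handler implements 'excludeFromBackup'
// by setting NSURLIsExcludedFromBackupKey (isExcludedFromBackup in Swift) on the file URL.
const MethodChannel _backupChannel = MethodChannel('com.example.app/backup');

Future<void> excludeFileFromIcloudBackup(String filePath) async {
  try {
    await _backupChannel.invokeMethod<void>('excludeFromBackup', {'path': filePath});
  } on PlatformException catch (e) {
    // Log without leaking file contents or PII.
    debugPrint('Failed to exclude file from backup: ${e.code}');
  }
}
```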
This is a one-time configuration, typically done when setting up your project, but it's crucial for preventing unintentional data leaks through backups.
It's easy to leak user data through third-party SDKs and Flutter plugins unintentionally. These external libraries often collect data you might not be aware of, impacting user privacy and your app's compliance.
Here's a concise guide to managing data exposure from third-party components:
Many plugins wrap native SDKs for features like analytics, crash reporting, advertising, or social login. These SDKs might automatically collect device information, user identifiers, or even sensitive data without your explicit code telling them to.
The common data collectors used in Flutter development are:
Analytics/Crash SDKs (e.g., Firebase, Crashlytics) often collect device model, OS, and app version. Be careful if you set user IDs or if crash logs contain PII.
Advertising SDKs (e.g., AdMob, Facebook Audience Network): Collect device advertising IDs (GAID/IDFA) and potentially location for targeted ads. On iOS, IDFA requires a user prompt; on Android, respect the user's "Limit Ad Tracking" setting.
Social Login SDKs (e.g., Google Sign-in, Facebook Login): Retrieve profile info (name, email) for login. Ensure they don't track usage beyond that.
UX/Performance Tools (e.g., session replays): Can record user interactions, potentially including sensitive data entered into forms.
Treat third-party SDKs as extensions of your app’s privacy surface. Configure them just as carefully as you write your code. The user will hold your app responsible if their data is misused, regardless of whether it was your code or a library, so you must take responsibility for what plugins do. Keep SDKs up-to-date, too; they often release updates to improve privacy (or security). And consider dropping an SDK entirely if it proves too invasive and offers no way to mitigate its data collection.
The first is straightforward: always use HTTPS for API calls that include PII. Never send info like passwords, tokens, or PII over unsecured channels. If you use WebViews or platform channels, apply the same rule (e.g., if loading a URL with query params, ensure it’s https and free of PII as discussed). If your app transmits extremely sensitive personal data (health records, financial info), consider an extra layer of encryption on the payload in addition to TLS – this is defense-in-depth in case the TLS is terminated somewhere you don’t fully trust. For example, some apps encrypt specific fields with a public key so that only the server can decrypt, even if the data passes through intermediate systems.
The second – sending data to the wrong place – could be as simple as accidentally logging PII to an analytics server when it was meant to go to your secure server, or having a misconfigured endpoint. Always double-check that personal data is only sent to necessary endpoints. This is more of a quality control issue. Still, it has privacy implications if you accidentally send user info to a third party when you intended it for your server.
We won't repeat this because we covered network security in detail in M5. Remember that inadequate privacy controls can manifest as plaintext communication or unintended broadcasts of PII. If you use Bluetooth or other local radios to transmit data (e.g., sending health data to a wearable), ensure those channels are also encrypted and authenticated.
Privacy isn’t a one-time setup—it's an ongoing effort. Continuously verify that your app stays aligned with best practices:
Privacy-focused code reviews: For every new feature, ask: Are we collecting new data? Do we need it? How is it stored or logged? Use a checklist like the one in Section 1 and OWASP guidelines.
Automated checks: Enable lint rules (e.g., avoid_print) and write custom linters or unit tests to catch risky patterns. Add mobile security scanners to your CI pipeline to detect insecure storage or excessive permissions.
Dynamic testing: Run your app on rooted emulators to see if sensitive files (e.g., /shared_prefs) are protected. Use ADB backups or Auto Backup extractions to check for unintended data exposure.
Network inspection: Use a proxy in a test environment to verify that no PII is sent in plaintext. Test opt-out settings to ensure no data flows when disabled.
Privacy audits: Regularly review what data you collect, why, where it's stored, and who has access. This simplifies privacy policy updates and user data requests.
Dependency vigilance: Monitor package changelogs for changes in data handling. The Flutter ecosystem moves fast—stay informed.
User trust: Be transparent in your privacy policy and UI. Hidden data collection erodes trust.
Threat modeling: Think like an attacker—how could they access user data? Use that insight to fix weak spots in advance.
To make it simpler to use for your app and team, I have created this simple checklist for Privacy Controls:
[ ] Minimize PII: Collect only essential data; offer clear opt-in options.
[ ] Secure Storage: Use flutter_secure_storage for sensitive data (tokens, keys). Encrypt larger sensitive data.
[ ] Secure Transmission: Enforce HTTPS/TLS for all communications. Consider payload encryption for extremely sensitive data.
[ ] User Consent: Implement clear consent dialogs and transparent privacy policies.
[ ] Permission Management: Use permission_handler with clear explanations for permission requests.
[ ] Anonymization: Hash or tokenize sensitive data whenever possible.
[ ] Secure Logging: Exclude PII from all application logs (use debugPrint and sanitize error messages).
[ ] Compliance: Adhere to relevant data privacy regulations (GDPR, CCPA, etc.) and app store policies.
[ ] Testing & Audit: Conduct Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and regular privacy audits.
Protecting user privacy is a fundamental responsibility for Flutter and Dart developers. Inadequate privacy controls (M6) pose significant risks, from data breaches to legal penalties, but they can be mitigated through careful design and robust security practices. Developers can build Flutter apps prioritizing user trust and safety by minimizing data collection, securing storage and communication, obtaining user consent, and complying with regulations.
As you develop your next Flutter app, keep privacy at the forefront. Use the tools, practices, and checklist provided here to ensure your app meets functional requirements and upholds the highest standards of user privacy. The following article in this series will explore M7, continuing our journey through the OWASP Mobile Top 10 for Flutter.
Legal Violations: Non-compliance with regulations like GDPR, CCPA, PDPA, PIPEDA (Canada), or LGPD (Brazil) can result in hefty fines.
Data minimization questions: A good rule of thumb is to ask yourself (or your team) a series of questions about every piece of PII your app handles.
This code is problematic. It prints the user’s email and even their password (!) to the console. In a debug build, that’s already risky if you share logs; in a release build, print still outputs to device logs (on Android, via Logcat), which can be read by other apps on rooted devices or via adb in many cases. The token is also sensitive. And even in the catch block, we log the email again. If this app uses a crash-reporting service, those print statements might be collected and sent to a server or shown on a support technician’s dashboard. Database exceptions or other errors can accidentally reveal PII, too (for example, an SQL error showing part of a query with user data). So it’s not just our code: any exception message could leak info.
Use a built-in debug-mode guard: under the hood, it wraps prints in if (kDebugMode) { print(...); }, which ensures those logs don't execute in release builds.
Use a logging package that supports levels (like info, warning, error) and configure it to omit info/debug output in release. Popular logging packages can do this. At minimum, avoid printing PII at the info level; if something is truly sensitive (passwords, tokens), you should never log it, even in debug. If needed for debugging, log a placeholder like password: ****** or hash it.
In this example, the _fetchDataWithPotentialError function catches potential HTTP errors. Instead of directly displaying response.body, which might contain sensitive details like {"error": "Database query failed for user ID 12345", "details": "SELECT * FROM users WHERE id='12345'"}, it provides a generic, user-friendly message while logging the full error for developers to investigate. You can also check the related article on Talsec.
Always use http.post, http.put, or http.patch with a body for sending sensitive user data, and ensure your API endpoints enforce HTTPS for all communications.
Why it helps: In release builds, print() statements are not automatically stripped and can still write to the system console (e.g., logcat on Android, the system log on iOS/macOS), which anyone with debugging tools or physical access to the device can read. This means sensitive data logged in release mode is potentially leaked. debugPrint() is preferred, as it's throttled and primarily optimized away in release builds.
Mobile Security Scanners
For more in-depth analysis, consider using mobile application security testing (MAST) tools, also known as mobile security scanners. These tools can analyze your compiled app binaries (APK for Android, IPA for iOS) to identify potential vulnerabilities. A few examples:
MobSF (Mobile Security Framework): An open-source, automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis, and security assessment framework capable of performing static and dynamic analysis. It's highly recommended for its comprehensive feature set.
There are also commercial platforms offering automated mobile application security testing and analysis, often used by larger organizations or as part of continuous integration pipelines.
This overlaps with OWASP M5 (covered earlier in this series), but it's worth briefly mentioning in the privacy context. If you've followed the guidance from M5, your app should already be using HTTPS/TLS for all network calls and avoiding eavesdropping risks. From a privacy standpoint, two specific concerns are not encrypting sensitive data in transit and sending data to the wrong destination.
As Flutter developers, ensuring user data privacy isn't just about collecting the minimum necessary information and encrypting it; it's also about monitoring and responding to potential threats that could compromise privacy in real time. This is where tools like freeRASP come into play.
freeRASP is a powerful tool for detecting and mitigating various types of security threats, including tampering, reverse engineering, debugging, and data leaks, all of which can lead to privacy violations. By integrating freeRASP into your Flutter app, you can proactively detect suspicious activity that might put your users' data at risk, helping you ensure compliance with privacy regulations like GDPR and CCPA.
Key Privacy Risks Addressed by freeRASP
freeRASP is designed to monitor various threats that could directly impact user privacy. Below are a few common risks that it helps mitigate:
Rooted/Jailbroken Devices: When attackers gain control of the device, they can bypass security measures and access sensitive data. freeRASP can detect if a device is rooted (Android) or jailbroken (iOS), which is a significant privacy concern.
Debugging and Reverse Engineering: Debuggers and reverse engineering tools (e.g., Frida, Xposed) can manipulate the app's code and access personal data. freeRASP detects the presence of such tools in real time.
Tampered Apps: If an attacker modifies the app's code (repackaging), they can introduce vulnerabilities, such as sending user data to unauthorized third-party servers. freeRASP protects against this by detecting changes to the app's integrity.
Insecure Device Storage: Storing sensitive user data in an insecure manner (e.g., unencrypted) can lead to data leaks, especially if the device is compromised. freeRASP helps ensure that sensitive data is stored securely and inaccessible to unauthorized entities.
Simulators and Emulators: Testing apps on simulators and emulators can sometimes expose sensitive data, as these environments may not be as secure as physical devices. freeRASP detects when the app runs in an emulator, helping prevent exposure during testing.
[ ] Optional: Use freeRASP for runtime protection
Majid Hajian - Azure & AI advocate, Dart & Flutter community leader, Organizer, author