Philosophizing security in a mobile-first world
Cybersecurity has been a global problem for several decades. However, we are still in a phase where the exponential growth of security exploits leads to significant financial losses for businesses and individuals.
The size of the cybersecurity market was $3.5 billion in 2004. It approached $150 billion in 2021 and is expected to reach $352.25 billion by 2026. Despite such tremendous investments in this domain, the overall security situation seems to be getting worse. So why isn’t the vast number of available technical solutions solving the cybersecurity problem?
If we ask an average technologically-minded manager in the security sector, we will get a simple answer: it is due to the low adoption of the various brilliant tech solutions available on the market. Maybe… But isn’t this an over-simplified view?
Suppose we want to radically improve cybersecurity and create a solution to gain mass user adoption. How would we get there? We must elaborate on the subject from different perspectives and go beyond the pure engineering view of security. Why?
One of the obstacles is that engineering-minded people tend to project the confusion in the problem statement onto users and explain the low adoption of security solutions by users’ ‘immaturity.’ On top of that, engineers quite often fall into an observation bias called the streetlight effect.
It is common for engineers to narrow the problem domain to a smaller segment that is more “comfortable” for research. They usually frame a problem, having in mind an “elegant” solution based on a “known” or “proven” technical framework. Meanwhile, the “core issue” may remain in the dark.
Technically minded people hate ambiguity and vague problem statements. And it makes sense; we have to admit that otherwise it is much harder to estimate and commit to the time needed to implement a solution.
To tackle these problem-statement misconceptions, we need to understand the difference between objective security issues and how users experience or perceive those problems. Remember the famous quote attributed to Einstein: “If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” Generally speaking, we can’t leave the problem statement fully defined by engineering-minded people. We need to make sure that we address the root problems in the security domain and verify that the selected methods are relevant.
The best approach to combat biases and go to the core of the issue is philosophizing. For example, let’s take a closer look at security through the eyes of mobile app users.
Usually, the word “security” has a positive connotation for engineers. A good solution is a secure solution. Isn’t it so, dear CTOs?
Users, on the other hand, often have ambivalent feelings about security; most have simply run into too many unpleasant situations caused by security measures. Have you ever blocked your payment card by mistyping your PIN several times? Then you probably know how annoying security measures can be.
The core reason that security evokes negative feelings is that security, by definition, implies some limitations on freedom or privacy. This is especially true when an external party has imposed such limits outside our direct control.
NB: We may not realize it, but people are quite used to having their freedom limited for the sake of security. For example, most of us were limited in our activities by parents protecting us from the dangers of the external world. Another example is luggage scanning at airports, which somewhat intrudes on our sense of privacy.
There is often a “good reason” behind these limits, and it is communicated to users as a tradeoff between security needs and user convenience. But still, we often feel that the tradeoff is unbalanced and doesn’t work the way it should.
NB: Just think for a while how many potentially good online and offline services you have stopped using just because of annoying security measures.
The notion of safety is very close in meaning to security. But safety is subjective, i.e., a person can feel safe, in contrast to security, which is presumed to be a generic and objective concept. This difference explains why safety is perceived much more positively. The conditions leading to feeling safe are individual. Individual safety conditions may even contradict security conditions, and vice versa: security measures can conflict with and harm the feeling of safety. For some people, safety means accepting a certain comfortable level of insecurity, taking risks, and relying on self-protection skills. For others, it means delegating security to a trusted party and sacrificing some portion of personal freedom and privacy.
There is an essential implication in the subjective quality of safety. When we feel safe, it doesn’t necessarily mean we are objectively secure, and threats are absent.
Let’s sum up and list a few conclusions we have come to by now:
Security is not equal to safety; security measures are generic while the conditions of feeling safe are individual;
Security measures can harm the feeling of individual safety when an external entity manages them. In this case, security limits users’ freedom and privacy in ways that can go beyond their individual comfort zone;
Practically, users desire individual safety but may not have the full picture of security threats;
Engineers usually push generic security but have limited visibility into individual safety preferences and how security measures may impact them;
Both the users’ and the engineers’ views are influential and should be bridged and reconciled, though that is much easier said than done.
Many bright minds and philosophers have tackled the Freedom-versus-Security dilemma, from the ancient Greeks to modern times. The concept of the social contract was introduced and elaborated in the 17th–18th centuries (Thomas Hobbes, John Locke, Jean-Jacques Rousseau). The social contract is an unwritten rule or agreement within society that is supposed to balance and regulate the level of acceptable compromise between personal freedom and the limitations imposed for security by institutions, the state, or ruling classes.
In democratic states, citizens should influence this contract through the election process. Let’s assume for now that this mechanism works fine, and we have some control over the contract… I know some of you might say: it’s imperfect, it has a time lag, it is vulnerable to populism, but it still works, and we’ve got no better mechanism so far.
But what about social contracts in our digital life? We also delegate security to some external entities (consciously or not), don’t we?
Let’s use metaphoric language to highlight the parallels and differences between digital and real-life security in a simple thought experiment.
Let’s imagine people living in a digital world resembling a historical real-time strategy computer game, where every person is a “natural-intelligence-driven” game unit. People are surrounded by wild nature and the circumstances of the medieval period of human history. A very insecure place to live, isn’t it?
The people organized themselves and came up with a collective defense by creating fortresses in towns and dedicated security institutions. These institutions (like states) provided security services to citizens by building fortifications to protect them from enemies while limiting their freedom by introducing rules and taboos.
NB: Keep in mind that such security institutions have always had a tendency to misuse the power delegated to them (security geeks would call this a bug in the system that leads to “elevation of privilege” due to an issue in the “segregation of rights”).
Initially, citizens could go straight to the marketplace. Now they are forced to go through the main gates, pass through some identity verification process, pay taxes, and so on.
Now let’s upgrade this imaginary world, and let’s say these people live in a Mobile Apps World, where every fortress-town is an App.
NB: It is actually not an overstretched metaphor, since people spend about 80% of their online time in Apps.
In reality, Apps are designed to be executed under the supervision of an operating system in a sandbox environment and isolated from other Apps and processes. In our imaginary App world, the operating system is like a state that governs the fortress towns (Apps).
Just as in real life, the protection of a basic set of citizens’ rights is regulated by such states (operating systems). But the actual level of security inside the fortress (App) is still designed by the fortress owners (App publishers). Let’s call them governors.
NB: There would be two dominant states with quite different regimes in our App world: the iOS Kingdom and the Androidian Union.
Inhabitants of such a world can quickly jump over from one fortress to another within the state. They can sell products in the marketplace of eBay, keep their fortune in the Revolut fort, and fall asleep on the dirty streets of YouTube. Just like us.
In our imaginary App world, the fortress governors must take security very seriously, since it is a matter of life and death under the assumed medieval conditions. They are also forced to take their citizens’ feelings of safety even more seriously, because those feelings directly impact the growth of the population, which is crucial for the economic success of the fortress-town.
Thus, the more financial resources a town has, the better security measures it can afford. So there is a positive feedback loop from users’ feelings of safety to security. It doesn’t work the other way around: more severe security conditions harm population growth by limiting freedom or causing too much discomfort. This is why it is always “safety first” in our imaginary App world, in contrast to the “security first” that we are quite used to in our lives.
This also explains why governors can’t fully delegate the balance between security, freedom, and comfort to their subordinates (neither the army nor the merchants). It is an existential question for a given App that is simply too important to delegate. It is directly linked to the mission and economic model of the App fortress.
NB: Thus, in the real world, the safety strategy should strike the “right” balance between freedom, security, and comfort for a given App. Top management should define it, or at least arbitrate it; it can’t be fully delegated to marketing-minded or tech-minded subordinates.
These imaginary people of the App world have an advantage over us. They can visually observe the security system of fortresses that they consider entering. They can assess the security of the fortress app by a visual evaluation of the walls, gates, towers, or soldiers’ weapons and armor. These town citizens have the luxury of having an evident reason to trust the fortress app. And these citizens can easily guess how responsible and capable the owner of this town is in terms of security.
NB: In the real world, we usually lack clarity on how an app is secured. We only rely on the app review process of the marketplaces and trust in the given brand. This makes a strong link between trust in the brand and individual safety perception of the App.
Real-world App issuers rarely go the extra mile of guiding users through the security benefits of the App and user safety best practices. So users intuitively extrapolate the overall UX quality of the App to its security. As a result, any glitch in the App harms the feeling of safety, just because users assume that security is at the same or a worse level of quality than the UX. Visual communication of security features, tips, and warnings could detach the perception of security from the rest of the functionality, and this aspect could become an independent factor of comparison with competitors in the user’s eyes.
NB: The critical advantage of communicating and visualizing the state of App security is that it can help to bridge the gap between the personal feeling of safety and the actual state of user protection.
Security warnings and tips can be presented only to those users whose device protection has flaws, according to in-App protection controls.
NB: In-App protection is a mobile security technology that allows a mobile application to check the security state of the environment it runs in, actively counteract attack attempts, and control the integrity of the App. Such technology is also called RASP (Runtime App Self-Protection) or App Shielding.
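To make this less abstract, here is a minimal, hypothetical sketch of one of the simplest heuristics an in-App protection layer may use: root detection by scanning well-known filesystem locations for the `su` binary that rooting tools typically install. The function name and path list are illustrative assumptions, not any product’s actual implementation; real RASP solutions layer many more signals (debugger and emulator detection, integrity checks) and obfuscate the checks themselves.

```kotlin
import java.io.File

// Hypothetical root-detection heuristic: rooting tools typically
// install an `su` binary at one of a few well-known paths.
// A real in-App protection (RASP) product combines many such signals.
fun isLikelyRooted(): Boolean {
    val suPaths = listOf(
        "/system/bin/su",
        "/system/xbin/su",
        "/sbin/su",
        "/data/local/bin/su"
    )
    return suPaths.any { File(it).exists() }
}

fun main() {
    // The App could use this signal to warn the user or restrict features.
    println(if (isLikelyRooted()) "Device looks rooted" else "No root signs found")
}
```

A single check like this is trivial to bypass; the point of RASP is layering many such runtime controls and reacting to them, not any one heuristic in isolation.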
Let’s take a brief look at Android security statistics collected from 400K devices of mBanking users in EU countries, to get a general feeling of what percentage of users deserve some security-related guidance, i.e., have some fundamental security issues:
About 21% of users ignore the Screen Lock functionality. It exposes users to the risk of misuse of Apps and data breaches if the device is lost, stolen, or used by kids.
About 38% of users don’t use biometrics (like a fingerprint scan), while only 12% (48K out of 400K) don’t have this feature available on their device; biometrics are much safer than a password or PIN. Incidentally, using a biometric lock doesn’t mean that biometric data is shared with any App or backend.
1,112 users out of 400K (0.28%) have rooted devices. It means that the App’s integrity and its isolation sandbox can be compromised, either through malware or by the user himself.
381 devices ran the App in debugger mode, and 151 devices ran it in an emulator. These are all signals of a reverse-engineering attack if they are not in the hands of a legitimate development team.
226 App instances showed signs of tampering (which can indicate that the App was cloned, tampered with, and republished by an attacker, and that the clone was installed).
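For clarity, the per-device shares quoted above can be recomputed directly from the raw counts (the counts come from the statistics above; the tiny helper function is mine):

```kotlin
import java.util.Locale

// Recompute the percentage shares from the raw device counts quoted
// above, out of 400K monitored devices in total.
fun share(count: Int, total: Int = 400_000): String =
    "%.2f%%".format(Locale.US, count.toDouble() / total * 100)

fun main() {
    println("Rooted devices: ${share(1112)}")  // 0.28%
    println("Debugger mode:  ${share(381)}")   // 0.10%
    println("Emulator:       ${share(151)}")   // 0.04%
    println("Tampered Apps:  ${share(226)}")   // 0.06%
}
```

In other words, the hard compromise signals (root, debugger, emulator, tampering) each affect well under one percent of devices, while the hygiene issues (no screen lock, no biometrics) affect double-digit percentages, which is exactly where user guidance pays off.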
Our imaginary governors know very well that an efficient defense should include more elements than just the army and fortifications. An army and walls can probably protect from brute-force attacks by barbarians, but they are much less effective against traitors (aka fraudulent users in the real world), and can’t help against diseases caused by viruses (aka malware).
Fortress app owners could also realize that involving citizens in security affairs would increase the resilience of the town to large-scale problems while making the citizens more loyal and personally engaged.
That is why, in the imaginary App world, governors should be very creative with educational activities, training, or performances explaining critical safety practices: how to detect scams and fraudsters, how to recognize suspicious activities of strangers, and which hygiene rules prevent epidemics.
So what I am pointing out is that businesses will soon implement user cybersecurity education, and its visualization, as an integral part of both the Security Journey and user loyalty programs.
NB: Some might say, “I don’t feel like educating my users about cybersecurity.” Yes, it is quite a common view. I guess the attitude among airline management was similar before 1984. Since then, the pre-flight safety briefing has become mandatory, and we are all quite used to watching the cabin crew demo every time we are about to take off.
Every fortress owner in our imaginary world would need to get alarm messages in case an enemy army approaches his fortress. None of the governors would dare to underestimate this subject. It should be easy for citizens to send a signal, “We are under attack!”. That is why alarm bells are placed on every screen square of the app fortress 🕭.
In our real-life FinTech apps, bizarrely, the “Report abuse” feature is often well hidden. As if App managers prefer “it’s better not to know” that there is a leak in the hold. The most common approach is “in case of emergency, call the hotline and enjoy the IVR music” and let the call center sort out the issue.
Many FinTech mobile App issuers would say, “Well, we have a modern risk-scoring and monitoring system that collects many security signals from the App, like location, behavioral data of users, and many more, to estimate the risk of fraud.” Indeed, many Apps use risk-based security. But the main problem of risk-scoring systems is that they suffer from a lack of factual information about ongoing attacks. In other words, they miss the “source of truth” of what attack vectors actually look like. Risk-scoring logic is usually designed based on the best knowledge of the given security team about potential attack vectors. Meanwhile, cybercriminals are far more inventive, so attack methods change much faster than risk-scoring logic.
On top of that, a significant portion of attacks targets the weakest link in the security chain, humans, and is therefore hardly identifiable by automatic signals. So the best we can currently do is detect a scam campaign from user reports and quickly find an appropriate method to prevent it from becoming a large-scale attack.
It is known that there is nothing more efficient against a common enemy than an alliance. In our App world, the governors of fortress-towns have a much better chance of survival in the aggressive medieval world if they join their efforts, i.e., form an alliance that implements the principles of Collective Defense. The most crucial element of that collective defense is rapid information sharing about the enemy and how it attacks.
In our real digital world, the quick distribution of detailed information about exploits (like malware binaries, scams, and zero-day vulnerabilities) is crucial. In many cases, ML-driven mechanisms can automatically prevent many problems if they are trained on “source of truth” information about the attack. Thus, every reported attack can make the whole group more resilient.
This brings us to the question: to whom would users prefer to delegate the surveillance role, and with whom would they be happy to share information about attacks? Governments, operating system vendors, device vendors, corporations like banks, small App issuers, professional communities, dedicated NGOs?
Let’s wrap up the takeaways of App security we touched on during this philosophizing exercise.
Safety first, not security. Safety is about making the “right” balance between freedom, security, and comfort. This topic can’t be fully delegated to marketing-minded or tech-minded middle management since it is a strategic brand development question.
App Security measures need to be visualized and explained for end-users to be perceived as safety elements and not just a security nuisance.
Engaging users in the Security Journey through educational content, gamification, and feedback (report issue), is an efficient way to gain loyalty and prevent many security-related problems.
Attack reporting is a vital feature for building countermeasures; it should be simple and intuitive.
Consider joining the AppSec community to benefit and contribute.
Author: Sergiy Ykymchuk
Co-founder of Talsec (https://talsec.app)
Mobile Apps Security Company
P.S.
Startups generally aim to change the world with their Apps and “make an impact” and influence people. The depth of our responsibility determines our actual influence. We are designing and manifesting our future influence by setting the boundaries of responsibility that we take.