UNDERSTAND YOUR PERSONAL SECURITY NEEDS
“Caution tape” by Steve Johnson (modified) / CC BY 2.0
Instead of repeating the fundamentals of information security (infosec) ad infinitum, I thought it’d be prudent to write a standalone article that summarizes my take on the major concepts in infosec and how one can apply them to daily life.
Concepts are abstract, and gaining knowledge requires integrating multiple concepts together, so I’ll do my best to explain them as clearly as possible. I’ll be using a number of hypothetical (but realistic) examples to make them as concrete and easy to grasp as I can. With a primary focus on clarity, this post will cover the following six aspects of infosec:
- Confidentiality
- Integrity
- Availability
- Hazards
- Threats
- Risks
The path to understanding your own personal security needs starts with thinking about your information, personal property, and the other important facets of your life in terms of your need for confidentiality, integrity, and availability. You may have a need for all, some, or none of these depending on the specific item. It’s also likely you’ll have varying degrees of confidentiality, integrity, and availability interests in each item, too.
Furthermore, people have individual preferences, unique value judgements, and personal standards of security, so the needs of other people, even when they’re considering the exact same item you are, can be completely different. Therefore, it’s helpful to think about the upcoming ideas in this post from your own individual perspective and with your personal needs in mind.
If a given piece of information is said to be confidential this means the information is not permitted to be made available (disclosed, published, etc.) to unauthorized parties. You can also think of confidentiality as the secrecy component of privacy, but take care to avoid conflating the idea of something being confidential with it being private.
Private, as in ownership, and any resulting states of privacy (or exclusivity of access), is a different, though related, idea altogether. Confidentiality is a distinct concept involving disclosure authorization, and the distinction becomes important when considering informational items that are not property in a traditional sense.
Let’s use some examples. The simplest place to start is information that is property. For example, do you have a personal journal or diary that nobody else is authorized to read? What about your communications? Have you sent messages to a family member or friend that you don’t permit anyone other than that single person to see? Ownership of the messages (as property) is important, but here, confidentiality specifically conveys the disclosure authorization, or lack thereof, of the communicating parties.
“Enigma Machine” by University of Manchester School of Mathematics / CC BY 2.0
There are other kinds of information to consider, too. Do you only permit your spouse and your doctor to know how much you weigh or what medications you take? By itself, this information is not property per se, but clearly one can have a confidentiality interest here. Or what about when you tell someone your phone number or email address? Do you permit them to share it with whomever they wish? You might pay for services that give you exclusive use of these identifiers, but the information that they route to your phone and your inbox itself is not necessarily property you own. Yet you still likely have a need for confidentiality here. For example, you might expect your colleagues to exercise discretion in sharing these details with others in your absence. This is another instance of disclosure authorization.
Most people have a decent grasp of confidentiality, conflation with ‘private’ or ‘privacy’ aside, but other security concepts can take more effort to learn.
I like to think of integrity, in an infosec context, as the correctness of one’s data or information. More specifically, integrity means a given informational item is accurate, complete, and free from unauthorized modification. I also find it useful to think of integrity as the reliability that one’s information will be in its expected or intended state. Integrity is distinct from the concept of availability which will be described later.
Concrete examples of integrity can help. When you apply for a job by sending your resume to a potential employer, you likely have an interest in the content of the resume they receive being exactly what you sent and what you intended them to see. If the file you sent gets corrupted by a software glitch, and the words you wrote are not what gets delivered, then this is an integrity problem. Let’s say you run a business, and you keep a list of your customers’ names and mailing addresses. If you add new customer details to the list improperly, so that it’s unclear or inaccurate which address corresponds to which name, then this too is an integrity issue. Another example is if you have a website. If the contents of a given webpage are never supposed to be modified, then you have an integrity interest here too. These are examples of the accuracy, completeness, and correctness aspects of integrity.
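One common way to detect the kind of corruption described above is to compare cryptographic checksums: if the digest of the file the employer received differs from the digest of the file you sent, the content was altered somewhere along the way. Here’s a minimal sketch using Python’s standard `hashlib`; the resume contents are invented for illustration:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# The resume as you wrote it, and as the employer received it
# (hypothetically corrupted by a software glitch in transit).
original = b"Jane Doe - 10 years of experience"
received = b"Jane Doe - 1 years of experience"

# Matching digests mean the content is intact; a mismatch
# signals an integrity problem.
intact = sha256_digest(original) == sha256_digest(received)
print(intact)  # False: the received copy was altered
```

This is why file downloads are often published alongside their checksums: anyone can independently verify that what they received is exactly what was sent.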
Like confidentiality, integrity also involves the concept of authorization. Maybe your hypothetical business has employees, and you only want specific, authorized personnel to make changes to the customer list. If a person who is not an employee at your company can make unauthorized changes (additions, deletions, modifications), then this affects the integrity of the customer list too. The same is true in the website example. If your site is a personal blog and you’re the only person supposed to be making changes, but someone else modifies the contents as a form of vandalism, then this is a clear integrity violation. These examples highlight the authorization component of integrity.
Availability means a given item is accessible and is usable in its expected manner. For physical items this is typically uncomplicated. The availability of information (particularly digital info) can be more nuanced, though, since there can be many complex, interacting components (hardware, software, networks, security controls, etc.) handling the information, each of which must itself function properly to make the information available in the intended way.
Examples of availability are plentiful. If you own a car, and you rely on it to drive to work, buy groceries, and visit your family, then you have an availability interest in your car. If the car became completely unavailable (e.g. due to theft), or if you could otherwise no longer operate it as intended (malfunction, damage, etc.), then this is an availability problem.
Let’s say you store your personal address book only on your phone. In the event your phone is lost, stolen, or destroyed, you will permanently lose access to the information therein. Obviously this would be ruinous to the availability of the address book. However, if you had previously made a backup copy of the data, and this backup was not affected by the loss of your phone (i.e. it was stored elsewhere), then you can restore your access to it with a new, working device. It’s important to understand, though, that even with a backup, you would still have experienced a negative availability impact: you could not access or use your address book for a period of time, and you would regain access only after procuring a new device and successfully completing the data restoration process.
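The backup-and-restore reasoning above can be sketched in a few lines. This toy example (the file names, contacts, and the simulated "lost phone" are all hypothetical) keeps a second copy of an address book stored elsewhere and restores from it after the primary copy disappears:

```python
import shutil
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
phone_copy = workdir / "phone_address_book.txt"
backup_copy = workdir / "backup_address_book.txt"

# Write the address book, then make a backup stored "elsewhere".
phone_copy.write_text("Alice: 555-0100\nBob: 555-0199\n")
shutil.copy(phone_copy, backup_copy)

# The phone is lost: the primary copy is gone.
phone_copy.unlink()

# Availability is restored (after a delay) by copying the
# backup onto a replacement device.
new_phone_copy = workdir / "new_phone_address_book.txt"
shutil.copy(backup_copy, new_phone_copy)
print(new_phone_copy.read_text())
```

The gap between `unlink()` and the final `copy()` is exactly the temporary availability loss described above: the backup doesn’t prevent the outage, it only makes recovery possible.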
Security controls can themselves impact the availability of information, too. In a sense, reduction or loss of availability is actually the goal of certain security controls. For example, if you keep paper documents in a locking file cabinet, then the requirement that one must use the key to access them reduces their availability, because no access is possible without the key. If you store items (jewelry, cash, firearms, etc.) in a safe or lockbox that requires one to enter the correct combination for it to open, then this also negatively impacts the availability of those items. Furthermore, choosing a lockbox that features a rotary dial to enter the combination versus one with a numeric keypad can also affect item availability, because the rotary dial might cause the unlocking process to take relatively more time to complete.
There are non-physical security controls, too. In order to access your email account, you must know and enter your account password. Forgetting your password, or simply losing the capability to enter your password (e.g. broken computer keyboard) makes your account inaccessible thereby negatively affecting its availability to you.
It’s worth mentioning that you can also have availability interests in items that you might not regard as ‘usable’ or that don’t grant you any productivity-focused capabilities. For example, you might own sentimental items, like photos or messages from a departed loved one, that have emotional significance. Just because they don’t correspond to a ‘use case’ like the other examples does not mean you lack a very real availability interest in, and a need to preserve your access to, these items as well.
Take a moment to consider a few items in your life, physical items as well as informational ones, and evaluate your own confidentiality, integrity, and availability needs for each.
Hazards & Threats
It’s wildly common to conflate hazards, threats, and risks, but in order to fully understand your security needs, you need a clear grasp of each concept and an appreciation for how they differ from one another.
A hazard is a state or condition that could potentially cause harm to occur. The presence of a hazard does not imply that any harm has occurred or will occur nor does it convey how probable it is for the harm to occur in the future. A hazard is simply the source of a possible harm. In common language, we say something is hazardous when it is capable of causing harm (e.g. hazardous waste), and we sometimes also specify the nature of the harm a hazard could cause with phrases such as ‘tripping hazard’ or ‘fire hazard’.
In a cybersecurity context, a software vulnerability is a kind of hazard. The presence of a bug in a program’s code, a bug that can be successfully abused, is a possible cause of harm to the computer that runs the program, and more broadly, could cause harm with respect to the computer owner’s confidentiality, integrity, and availability interests.
A threat, on the other hand, is a state or condition wherein a harm, arising from a corresponding hazard, occurs, is realized, or is otherwise ‘activated’ in some respect. Unlike a hazard, a threat means the harm is not just possible but that it is actually being experienced or that the harm is underway.
Let’s use the example of living near an active volcano. Even if the volcano is currently dormant it is considered active if it has erupted at least once in the last ~10,000 years, and you should probably regard it as a hazard since it could cause you bodily harm or destroy your property if it erupts. While dormant, an active volcano is a hazard to you and your home, but when it is actually erupting it is instead a threat.
Now let’s revisit the car example. We said you had an availability need regarding your hypothetical car, because you rely on it for work, food, and social activities. If you parked this car under an old, dying tree with large, heavy branches, then the tree is probably a hazard, since a branch or the whole tree itself could fall and crush your car, rendering it unusable and therefore unavailable. What about threats to its availability? A street criminal could steal your car, especially if it’s parked outside, the doors are unlocked, and it has no alarm. Leaving your car outside in this fashion would be hazardous, but there would only be a threat at play if there were actual motor vehicle thieves at large (i.e. actual thefts happening) in your neighborhood or if you had some other specific reason to believe your car in particular will be stolen.
In information security, the phrase ‘threat actor’ is used to describe one or more individuals actively harming your security interests. In cybersecurity, hackers are the threat actors at play, and they have varying motives, objectives, capabilities, and tactics. A specific hacker or hacker group is a threat to an organization insofar as they harm the organization’s security interests, and they often do so methodically in small, less-noticeable ways over long periods of time to avoid detection. This complicates the process of uncovering if a cybersecurity threat (not just hazards like vulnerabilities) exists, because it’s often difficult to identify that any harm is occurring until it’s too late.
Knowing Your Risks
Risk conveys how likely it is that a hazard will become a threat (i.e. actual harm will occur) and, if it does, the severity of that harm. Assessing your own personal security risks requires that you know:
- Your security needs (confidentiality, integrity, and availability interests)
- The hazards that could affect your needs
- The likelihood of those hazards actually causing you harm
- The severity of that harm if incurred
You might be the only person capable of assessing your own individual security risks, because doing so requires estimating likelihood and severity in the full context of your unique life and goals. The best advice I can offer is to be objective and think rationally when estimating your risks by basing your conclusions on evidence and sound logic. Be honest and realistic with yourself about which threats you do and do not face.
From my own observations, most people are not singled out in a targeted fashion by hackers, and when they are, it’s often because hackers are really trying to harm an organization with which that person is associated. That’s not to say most people face no meaningful cybersecurity threats. They certainly do! But just as with street crime, you are much more likely to incur harm from an opportunistic threat actor (whose motives likely center on money), irrespective of who you are or the organization to which you belong.
“Stickman in pickpocket peril” by Ruth Hartnup (modified) / CC BY 2.0
Think about all the possible security hazards associated with all your various items (each item, physical or informational, can have a completely different set of hazards), and figure out which hazards are the most likely to result in harm and how bad that harm would be for you. Consider writing them down. Just focus on the two parts of risk: likelihood and severity.
If protecting a specific item seems especially important to you, or if a harm you’ve hypothesized would be particularly severe, then pay extra attention to these discoveries, and remember them when we figure out what to do about everything in the next section.
Risk mitigation means taking steps to reduce the impact of a risk by reducing the likelihood of a harm’s occurrence, by reducing the severity of the harm, or both. It’s true that you may be able to completely remove a given hazard from the equation (thereby reducing the likelihood of that harm to zero), but as long as hazards exist at all, there will be risks. This is why it’s said that risk is always present and can only be reduced.
Let’s revisit the car example one final time. Your hypothetical car can experience a negative availability impact if it were damaged by a falling tree branch or stolen by a criminal. Which one is more likely? That depends on the state of the old, dying tree under which the car is parked as much as it depends on how prevalent motor vehicle theft is in your neighborhood. Which harm would be more severe? In terms of availability, both could result in a total loss, but either case could also result in restored availability, too, through repair or asset recovery efforts respectively. Figuring out which poses the greater risk allows you to prioritize your mitigation efforts, and that’s very valuable since everyone’s time and resources are limited.
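One informal way to do this prioritization is to give each hazard a rough ordinal score for likelihood and severity (say, 1 to 5) and rank by their product. The hazards and scores below are invented purely for illustration; yours would come from your own circumstances:

```python
# Hypothetical hazards to the car's availability, each scored
# 1 (low) to 5 (high) for likelihood and severity.
hazards = [
    {"name": "falling tree branch", "likelihood": 3, "severity": 4},
    {"name": "theft",               "likelihood": 2, "severity": 5},
    {"name": "passing motorist",    "likelihood": 2, "severity": 2},
]

# Risk score = likelihood x severity; higher scores get
# mitigated first.
for h in hazards:
    h["risk"] = h["likelihood"] * h["severity"]

by_priority = sorted(hazards, key=lambda h: h["risk"], reverse=True)
for h in by_priority:
    print(f'{h["name"]}: {h["risk"]}')
```

This is only a sketch of the reasoning, not a formal methodology: the numbers are subjective estimates, and the point is simply to force an explicit comparison so your limited time goes to the biggest risk first.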
Regardless of the prioritization, since we know the nature of the possible harms and how they could manifest, we can identify steps that can be taken to reduce their likelihood. Let’s continue with our hypothetical. Keeping your car parked on the street, but no longer under the dangerous tree, will reduce the likelihood of tree-related damage. Should you make this change? Maybe. The brainstorming and reasoning processes may help you uncover other hazards that you hadn’t yet considered, so your first plan may need revising. For example, moving your car from under the tree does not reduce the likelihood of damage from other vehicles driving on your street.
If you decide to reposition your car to avoid the tree, but you still keep it parked on the street, then you haven’t yet mitigated the risk of theft. Maybe you decide to install an alarm or to lock your car doors when leaving it unattended to deter less-sophisticated car thieves. That might reduce the likelihood of theft, but what about the severity? With theft, a total loss is possible, and since availability is your need here, it makes sense to focus your mitigation efforts toward making recovery possible. Is installing a GPS-based vehicle tracker a viable solution that reduces the severity of a possible theft? Again, at this stage, the answer is still ‘maybe’, because we still need to weigh all the possible mitigations against one another.
Some mitigating actions can address more than one risk at the same time. For example, parking your hypothetical car inside an access-controlled garage would mitigate all three risks by reducing the likelihood of:
- tree-caused damage
- motorist-caused damage
- theft
Convenient, all-in-one solutions aren’t always available, but it can save you a lot of time and effort if you look for and discover these types of fixes.
Some mitigations can reduce the likelihood or severity of harm for one specific security need while actually increasing the risk for another. Let’s go back to the locking file cabinet example. Keeping the files locked this way can reduce the likelihood of unauthorized disclosure (improving confidentiality), but doing so also increases the likelihood that authorized parties can’t access them if the key is lost (worsening availability). When weighing all the risk mitigation steps one can take, it’s very important to keep in mind both the limits to your time and resources and all aspects of your security needs for an item. Don’t settle on a mitigation method until you’ve thoroughly analyzed all these details.
Let’s say you made a decision, and you implemented some or all of the risk mitigation steps you identified. What next? Is that enough? Typically, in a corporate environment, mitigation efforts are measured through auditing or other testing. These try to answer the question of whether or not the mitigation meets the organization’s security standards. Many of those standards come from the company’s internal policies, but regulatory or statutory requirements can also serve as standards, for better or worse.
For you, as an individual person, your security needs can be used as those standards. Really, it’s as simple as that. Remember three sections ago when you evaluated your confidentiality, integrity, and availability needs for your items? When asking yourself if you’re done mitigating a given risk just think back to your needs. You’re the judge. If a step you took to mitigate a risk seemingly meets all of your possible confidentiality, integrity, and availability needs for a given item, then you’re done. If you don’t think it completely meets those needs, then continue improving on the steps you took or brainstorm better alternatives.
What are the best steps possible for you to take to satisfy your own personal security needs? It’s always possible to make a fatal error when contemplating how to best secure something, but if you can piece together your own personal puzzle of needs, risks, and mitigations, and you take action on your decisions, then your security will undoubtedly improve.
Whether you have a job in infosec, you’re considering one, or you’re just interested in improving your personal security, I hope this post demonstrated that infosec, as a field, isn’t founded on some arcane, unknowable magic. Certainly, being an infosec professional requires far more skill than merely knowing the concepts in this post (skills like applying these concepts in complicated business environments with complex technologies), but practically anyone can learn how to better secure their personal information and their lives regardless of background, education level, or career path.
Everyone has different needs and standards for the security of their personal information. You may or may not need to work on your personal security in a rigorous or academic way. But, as long as risks exist, having the cognitive tools to reason through security concepts and apply the fundamentals to daily life will be beneficial. Thanks for reading!