The article for this week is called “Why You’re Doing Cybersecurity Risk Measurement Wrong.” As ITACS students we learn that assessing risk isn’t always straightforward; it is more complex than it appears. I chose this article because I thought it would be extremely helpful and beneficial, as it provides five productive approaches to measuring risk. The article applies modeling principles, estimating impact with cost estimation and the CIA (confidentiality, integrity, availability) triad, drawing on Hubbard and Seiersen’s book, How to Measure Anything in Cybersecurity Risk.
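Hubbard and Seiersen argue for replacing ordinal heat maps with probabilistic estimates of loss. As a rough illustration of that idea (not code from the article, and with all scenario probabilities and loss ranges invented for the example), a minimal Monte Carlo loss simulation might look like this:

```python
import random

# Hypothetical annual risk scenarios: (name, probability of occurring this
# year, 90% confidence interval for the loss in dollars). All numbers are
# made up for illustration, not taken from the article or the book.
scenarios = [
    ("Ransomware outbreak",  0.10, (50_000, 500_000)),
    ("Banking trojan fraud", 0.25, (10_000, 100_000)),
    ("Insider data leak",    0.05, (20_000, 250_000)),
]

def simulate_annual_loss(trials=10_000, seed=42):
    """Monte Carlo estimate of total annual loss across all scenarios."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        total = 0.0
        for _name, prob, (low, high) in scenarios:
            if rng.random() < prob:              # did the event occur this year?
                total += rng.uniform(low, high)  # draw a loss from the interval
        losses.append(total)
    losses.sort()
    return {
        "mean": sum(losses) / trials,
        "p95": losses[int(0.95 * trials)],       # 95th-percentile annual loss
    }

result = simulate_annual_loss()
print(f"Expected annual loss: ${result['mean']:,.0f}")
print(f"95th percentile loss: ${result['p95']:,.0f}")
```

Hubbard and Seiersen typically use a lognormal loss distribution; the uniform draw above is a simplification to keep the sketch short.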
The author states, “Risk mitigation should account for how attackers will evolve. If you’re facing a persistent threat with a lot of resources, and one attack is unsuccessful, you should anticipate that the persistent threat will evolve their TTP (Tactics, Techniques and Procedures). If you attempted to mitigate the risk of banking trojans served by botnets but failed to account for the evolution to ransomware, your risk model was probably faulty.”
I completely agree with that statement: hackers are constantly evolving, and if we want to combat those risks, we must evolve as well. The article breaks its advice down into five recommendations for getting risk management right.
- Gather threat intelligence and data about the behavior of your users: By doing so you are able to prepare for certain attacks and predict future ones. Examples of such data include behavioral analytics from logs and defining groups of users so you can observe how each group normally operates.
- Don’t reveal to miscreants how they were detected if you can help it: By keeping your detection methods secret, attackers are unable to adapt to evade them.
- Be deliberate in how you publicize risk mitigations in your organization: Hiding risk mitigations within your organization can prevent a change in behavior, may be unethical, or may keep you from getting credit for your work. A better solution is to emphasize to users and decision-makers what risks still exist, helping them make informed decisions that reduce risky behavior.
- Be deliberate in how you share information externally: “Risk mitigations implemented by other organizations may also change the behavior of miscreants. If you’re selective with whom you share data, or share along with guidance on how it should be handled, there’s less of a chance of others being careless, and causing unexpected miscreant behavior changes. If you share publicly, account for uncertainty created by a likely change in attacker TTP”.
- Don’t spread FUD: “FUD (Fear, Uncertainty and Doubt) is incorrect data that causes improper risk or uncertainty measurement. Some people spread FUD because of sloppy work, some do it unintentionally, and some do it deliberately for business reasons. It’s bad for the cybersecurity community/industry as a whole, it’s bad for decision makers, and it’s counterproductive in the long run.”
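The first recommendation above, gathering behavioral data per user group, can be sketched as a simple baseline-and-deviation check. This is a hypothetical illustration (the log format, group definitions, and threshold are all invented; the article does not prescribe an implementation):

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical login events: (username, user_group, login_hour).
# In practice these would come from authentication logs.
events = [
    ("alice",   "finance", 9),  ("alice", "finance", 10), ("alice", "finance", 9),
    ("bob",     "finance", 8),  ("bob",   "finance", 9),  ("bob",   "finance", 10),
    ("carol",   "eng",     11), ("carol", "eng",     13), ("carol", "eng",     12),
    ("mallory", "finance", 3),  # 3 a.m. login -- unusual for the finance group
]

def group_baselines(events):
    """Build a per-group baseline (mean, std dev) of typical login hours."""
    hours = defaultdict(list)
    for _user, group, hour in events:
        hours[group].append(hour)
    return {g: (mean(h), pstdev(h)) for g, h in hours.items()}

def flag_anomalies(events, baselines, z_threshold=2.0):
    """Flag logins more than z_threshold standard deviations from the group mean."""
    flagged = []
    for user, group, hour in events:
        avg, sd = baselines[group]
        if sd > 0 and abs(hour - avg) / sd > z_threshold:
            flagged.append((user, group, hour))
    return flagged

baselines = group_baselines(events)
print(flag_anomalies(events, baselines))  # the 3 a.m. finance login stands out
```

A real deployment would use far richer features than login hour, but the shape is the same: define the user groups, learn what normal looks like for each, and investigate the outliers.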
I hope this article allows some students to walk away with a different perspective on approaching risk assessments. I found the article very helpful and will most definitely be using these recommendations in the field.