There’s been a fair amount of buzz about Michal Zalewski’s article entitled Security engineering: broken promises. He does a very good job of summarizing some of the open issues in security engineering. I do think he’s a little pessimistic, missing some opportunities to give credit, and it’s unfair to claim security engineering has failed simply because it hasn’t produced a unified model that can ensure security. Still, he pulls together a lot of different facets of security engineering in a short article. The field of security engineering does need to keep working to eliminate the vulnerabilities that are being widely exploited, and to do so efficiently. Much of his discussion generalizes beyond software security to information security at large.
While Zalewski didn’t address or mention APT, I’ve heard similar (but usually less complete or well-worded) rants about the failings of security best practices with regard to APT. It really pains me to hear people trash security engineering, especially in the context of Aurora and similar attacks. I’ve also heard a fair amount of “the sky is falling” and “security best practices can’t keep you safe from APT.” Blaming security engineering for failing to stop targeted attacks doesn’t make sense when that was never a requirement of most systems. Furthermore, we don’t want security engineering alone to solve this problem anyway.
Engineering is about applying science to provide solutions that meet well defined parameters. These parameters involve all sorts of things like functionality, cost, reliability, etc. Many of these parameters are conflicting, at least apparently. Because we live in a world with scarce resources, engineering seeks to provide the optimum value for all the various parameters.
While security has some unique characteristics, it can be viewed as another parameter of a system. While I agree that if done right, security doesn’t have to be as painful as we often make it, security does often conflict with other parameters such as flexibility, cost, and functionality. As such, a wise engineer only invests as much effort in making a system secure as is required.
It amazes me that physics envy rages so strong in some people’s hearts and minds that they actually lose sight of the imperfections of both theoretical and applied physics. People who expect a comprehensive model to cover all aspects of security, much like the ever nebulous theory of everything, have a long time to wait, very possibly infinitely long, though even that is probably impossible to prove. Furthermore, many of the simple physics models are hard to apply in the real world due to the many different phenomena that must be modeled simultaneously. The massless, frictionless point objects that we hear so much of in physics exercises must only exist in a vacuum, because I’ve never seen them. Practical application of physics isn’t as easy as the models often make it appear. That’s alright, though. Classic Newtonian physics works in a great many situations and helps me understand the world around me. Scientific models always have limits to their applicability, but that doesn’t negate their value. While it quite often literally requires a group of rocket scientists, very often using a mix of multiple models and simulations, we’ve been able to do a great many things based on physics without having a theory of everything.
Success of Security Engineering
Formal methods are hard. Formally verifying a system is practical only for the simplest of systems. However, it has been done, for example in flight control systems and highly secure classified systems. We refrain from formally verifying every system we build not because we can’t or don’t know how, but because it’s just too hard: it requires too much effort and restricts the functionality and flexibility of the resulting system too much for most people’s tastes. Most people couldn’t, or at least wouldn’t want to, do their day-to-day work on one of these highly verified, and therefore highly restricted, systems.
While not my favorite, risk mitigation strategies are very effective in certain circumstances. They’re useful where the risk can be quantified and accurately predicted. A prime example is the risk associated with identity theft. Many financial institutions apply risk calculations to determine whether a given measure that reduces losses due to identity theft will cost more to implement than simply accepting the losses. As long as the losses can be accurately estimated a priori, this method works well.
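The calculation the financial institutions are making can be sketched as a simple expected-loss comparison. This is a minimal illustration, not any institution’s actual model, and the function name and numbers are hypothetical:

```python
def should_implement(measure_cost, annual_loss, reduction):
    """Classic annualized-loss check: adopt a control only if the
    expected loss it prevents exceeds what the control costs."""
    prevented_loss = annual_loss * reduction
    return prevented_loss > measure_cost

# Hypothetical numbers: $2M/year in identity-theft losses and a
# $500k control that cuts those losses by 40% ($800k prevented).
print(should_implement(500_000, 2_000_000, 0.40))  # True: $800k > $500k
```

The whole approach hinges on the inputs: if annual losses or the control’s effectiveness can’t be estimated accurately, the comparison is meaningless, which is exactly why the method breaks down for hard-to-quantify assets later in this post.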
Again, I realize there is plenty of room for improvement in the field of security engineering. Regardless, for most threats, security engineering is rather successful overall. We know how to make systems more secure than they are now, but we choose not to. So if systems aren’t as secure as they should be, it’s usually because we didn’t design them to be. Sure, part of engineering is finding solutions that satisfy multiple parameters at once, and advances will continue to make security more compatible with ease of use and flexibility. Improved standards will continue to raise the minimum bar of security while minimizing the additional cost of doing so. However, I believe most systems are secure enough, or at least as secure as we wanted them to be.
It should be noted that the adequacy of the security provided by most systems doesn’t come solely from the system itself; it is supported by external factors such as legal protections. For example, the physical security of most houses is only good enough to make burglary difficult, or perhaps merely inconvenient, for would-be burglars. The vast majority of the deterrence comes from the fear of getting caught. Furthermore, insurance provides a very cost-effective means of protecting your possessions, given the remote risk of burglary.
Security Engineering Can’t Solve Targeted Attacks
The biggest problem with targeted attacks isn’t that security engineering couldn’t provide effective solutions. Our current systems aren’t secure enough to protect us from targeted attacks because we haven’t asked them to be that secure. Furthermore, I don’t think we want them to be that secure. Even if it were possible to make a machine that was 100% secure, I doubt it could ever be used for much of consequence while maintaining that level of security, given the weaknesses in the environment, people, and processes around it.
Let’s return to the example of residential physical security. Imagine if you took away the deterrence offered by law enforcement. It’s hard to imagine, but let’s say would-be attackers had essentially no external deterrence and the only thing between them and your possessions was you and your house. You’d have to go to some very extreme measures to keep your house secure. Simple locks and even an alarm system wouldn’t cut it. Basically, in the absence of any other deterrent, defeating a rational burglar would require defenses that cost the attacker more to circumvent than the value he could gain from sacking the house. This is a tough asymmetric situation: your defenses have to be perfect, while the persistent burglar only has to find one weakness or one weak moment. He can try over and over again, as failed attempts cost him almost nothing. It doesn’t take much imagination to see that living in a house like this wouldn’t be much fun.
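The rational-burglar condition above can be written as a one-line expected-value check. This is just a sketch of the asymmetry, with a hypothetical function name and made-up numbers:

```python
def attack_is_rational(expected_gain, cost_per_attempt, success_prob):
    """A persistent attacker keeps trying as long as an attempt has
    positive expected value; cheap failed attempts tilt this his way."""
    return expected_gain * success_prob > cost_per_attempt

# When attempts are nearly free, even a tiny per-attempt success
# probability makes the attack worthwhile ($10k gain, $1 per try).
print(attack_is_rational(10_000, 1, 0.001))  # True: $10 expected > $1
```

The defender’s only lever in this model is driving `cost_per_attempt` up or `success_prob` down far enough that the inequality flips, which is precisely the “perfect defenses” burden described above.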
Ok, now pretend you have something valuable to a small set of burglars. Let’s say you have something like a highly coveted recipe for cinnamon rolls, and a small set of burglars really want to make their own sweet buns instead of buying yours, and possibly sell them to your customers. The problem is that you can’t easily take out an insurance policy on your roll recipe. How would you quantify the cost of exposure? How could you prove the secret was really lost if you only suspect it was? How many times would insurance compensate you: only on the first loss, or on every subsequent one? Insurance just doesn’t work in this case. Insurance policies work great for easily replaceable items like televisions and cars, but they don’t work well for things like trade secrets.
Targeted attacks are much like the scenario laid out above. Sadly, there is little to no deterrence. The information targeted is usually highly valuable but not easily quantifiable. Lastly, while it would technically be possible to engineer effective defenses, very few people really want to live in the resulting Fort Knox vault, let alone pay for its construction.
Alternatives to Security Engineering
So if it’s not feasible to pursue a purely engineering solution to defend against targeted attacks, what is to be done? First of all, a lot of non-technical solutions should be pursued. I’ll refrain from discussing legal, political, diplomatic, military, and similar solutions, because most of us have only minor influence in those domains and my experience there is pretty thin. However, I do think it’s clear that in many cases, non-technical solutions would be the most effective. It should also be clear, from the empty public statements made by many leaders and decision makers in this realm, that non-technical solutions on an international scale are going to take a while, if they come at all.
Security engineering is part of the solution. In many cases, we do need to engineer more secure solutions. We need to make security cheaper and easier. However, even with the best minds on the problem, this will only help so much. And while our users need to improve their resilience to social engineering, many targeted attacks are done so well that I couldn’t fault a user for being duped.
Previously I discussed how keeping targeted attacks secret kills R&D. There, I wasn’t speaking of security engineering as applied to all IT systems, but of the small subset of IT infrastructure dedicated primarily to security (e.g., IDS, SIMs). In that post I echoed the claim of others that threat-focused response, or security intelligence, is one of the most effective approaches to responding to targeted attacks. Certainly, this incident response approach will require some engineering of supporting tools, in addition to the general security engineering that comes out of proper incident response. Correctly prioritizing your engineering resources to deal with targeted attacks will often mean allocating them to tools that support an intelligence-driven response.
I often imagine that a well-functioning, threat-focused incident response team facing targeted attacks is much like the wolf and sheepdog cartoon. The sheep aren’t particularly well protected, and really can’t be if they are to graze successfully, so the sheepdog watches for the ever-present wolf, keeps track of him, and counters his efforts directly instead of trying to remedy every possible vulnerability. I recognize that since the sheepdog is invariably successful, this comparison is a little more ideal than reality will probably ever be. Even so, focusing a concentrated intelligence effort on a relatively small group of highly sophisticated attackers makes a lot of sense, as long as that group stays small and the effort required to defend against them is much higher than against vanilla threats.
I’ve done both security engineering and engineering for security intelligence. Both have their place. Both have their success stories, and both have numerous opportunities for improvement. However, blaming security engineering for the impact of targeted attacks is as red a herring as they come. A world where security engineering actually tried to solve for highly targeted, determined attackers would not be a fun place in which to live. In the absence of other solutions, an intelligence-driven incident response model is your best bet. If I haven’t been able to convince you of this, then all I have to say is that Chewbacca lives on Endor, and that just doesn’t make sense… Blaming security engineering for targeted attacks: that does not make sense.