When Technology Overtakes Security
Posted on 04/20/2013 5:07:56 PM PDT by zeugma
A core, not side, effect of technology is its ability to magnify power and multiply force -- for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new identity-theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.
The problem is that it's not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They're more nimble and adaptable than defensive institutions like police forces. They're not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side -- it's easier to destroy something than it is to prevent, defend against, or recover from that destruction.
For the most part, though, society still wins. The bad guys simply can't do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?
I don't think it can.
Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious...and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.
This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.
As the destructive power of individual actors and fringe groups increases, so do the calls for -- and society's acceptance of -- increased security.
Traditional security largely works "after the fact." We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they're exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).
When that isn't enough, we resort to "before-the-fact" security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.
But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.
And in the global interconnected world we live in, they're not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We're already almost entirely living in a surveillance state, though we don't realize it or won't admit it to ourselves. This will only get worse as technology advances; today's Ph.D. theses are tomorrow's high-school science-fair projects.
Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and "Minority Report"-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn't that these security measures won't work -- even as they shred our freedoms and liberties -- it's that no security is perfect.
Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We'll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.
As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of *someone* in the group doing it approaches certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn't kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
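The "approaches certainty" claim above is just the arithmetic of independent trials: if each of N people independently causes catastrophe with some tiny probability p, the chance that at least one does is 1 - (1 - p)^N, which climbs toward 1 as N grows. A minimal sketch (the per-person probability of one in a million is an illustrative assumption, not a figure from the essay):

```python
# Probability that at least one of n independent actors attacks,
# if each does so with small probability p.
def prob_at_least_one(p, n):
    return 1 - (1 - p) ** n

# Even a one-in-a-million per-person risk approaches certainty
# as the group grows toward planetary scale.
for n in (10**3, 10**6, 10**9):
    print(n, prob_at_least_one(1e-6, n))
```

With p = 10^-6, a thousand people give roughly a 0.1% chance, a million people about 63%, and a billion people a probability indistinguishable from 1.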
If security won't work in the end, what is the solution?
Resilience -- building systems able to survive unexpected and devastating attacks -- is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.
If the U.S. can survive the destruction of an entire city -- witness New Orleans after Hurricane Katrina or even New York after Sandy -- we need to start acting like it, and planning for it. Still, it's hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don't know how to adapt any defenses -- including resilience -- fast enough.
We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We're going to have to figure this out if we want to survive, and I'm not sure how many decades we have left.
This essay originally appeared on Wired.com.
Cory Doctorow on broad technology prohibitions:
Terrorism is not an existential threat:
New regimes of trust:
Security is defense. Where's Harry Truman?
Not even sure that I agree with the author's premise.
Ultimately, it is the internal controls inside each individual that provide the security. We have had “progressives” working hard to undermine this type of control for about a hundred years.
We may end up proving that church and family are necessary (but not sufficient) for civilization. We also need the long tradition of everything that makes up western civilization, including individual responsibility, limited government, and public service.
There are a lot of holes in this article; either the author does not understand how things really work, or it’s meant to scare and attract readers.
Life, liberty and the pursuit and destruction of totalitarians.
For instance? ...
For example, in talking about IT, it never mentions the real-world sysadmin job, done every day by many “good” sysadmins, of proactively making sure the network and machines the admin is responsible for are secure.
It is always a before-, during-, and after-the-fact job, but the most important part is the before. Servers must be configured securely and correctly when they are first installed, before they are put into use. Any doors left open on the theory that they will be closed later are invariably forgotten. Of course, the sysadmin is constantly learning and keeping up to date, and constantly planning and implementing efforts to keep things secure as well as performing reliably and well. Good sysadmins have always done that, though.
This is a timeless concept, low-tech or high-tech.
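The "close the doors before the machine goes live" point can be illustrated with a few standard OpenSSH server directives that careful sysadmins lock down at install time. A minimal sketch, not a complete hardening guide; the directives are standard `sshd_config` options, but which ones matter depends on the machine's role:

```
# /etc/ssh/sshd_config -- set before the server is exposed, not after
PermitRootLogin no          # no direct root logins over the network
PasswordAuthentication no   # key-based logins only; defeats password guessing
X11Forwarding no            # disable features that are not in use
MaxAuthTries 3              # throttle brute-force attempts per connection
```

The common theme: the secure setting is chosen once, up front, rather than patched in after an incident reveals the open door.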
Software is not really getting that much more “advanced” in the sense of inventing something novel; mostly, today, there are simply incremental improvements that are obvious next steps.
A hardware example is the cell phone. The hardware is just a tiny telephone device tied into a tiny radio device, two things already invented over a century ago and just improved upon since then.
New software that comes out with security holes in it is good old-fashioned poor workmanship; the business goal is to make the thing complicated and require updates. Sloppiness in thinking through security and other “boring internals” is tolerated as long as some sexy new feature can be touted -- and that feature is usually just a buzzword with a new look on the screen for a standard paradigm that’s been around forever.
At the end of the day, computers still do what they have done for decades, once you drop the lingo and realize that new colors, images, and fonts are just presentation, not function. The biggest change in widely used programming languages has been object orientation, but that concept is over 20 years old.
Most of the security problems today come down to operating systems being bloated and poorly designed, requiring a lot of admin effort; people being lax or too busy to really do things the way they should be done; or misguided management that establishes the wrong priorities.
Well stated. In the end, the only secure computer is one that is stand-alone, but that won't work for obvious reasons. IMO, there is still too much effort spent on keeping "wide-open" connectivity instead of compartmentalizing smaller networks with bottleneck access to the "cloud". At least then we would be able to get things done more safely and in a more timely manner than under current conditions. Compartmentalizing the control/service centers has resulted in "packet jams" and poor service for all. I believe that after the boom of "I-know-something" administrators (back when someone who could write a simple batch file was revered as a geek) and the current philosophy of saving money and underpaying those who actually have the necessary skill sets, we have come full circle to where they will need to beef up the sysadmin crews and give them smaller chunks to manage.
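The compartmentalization idea above -- smaller networks with a single bottleneck out to the wider world -- can be sketched with ordinary Linux firewall rules. A hypothetical layout, assuming two internal subnets (10.0.1.0/24 and 10.0.2.0/24), an external interface `eth0`, and a proxy on port 3128; the addresses and interface names are illustrative, not from the comment:

```
# /etc/iptables/rules.v4 -- gateway for two compartmentalized segments
*filter
# default: nothing is forwarded between segments or out to the internet
:FORWARD DROP [0:0]
# each segment may reach the gateway's web proxy, and nothing else
-A FORWARD -s 10.0.1.0/24 -o eth0 -p tcp --dport 3128 -j ACCEPT
-A FORWARD -s 10.0.2.0/24 -o eth0 -p tcp --dport 3128 -j ACCEPT
# replies to established connections are allowed back in
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

A compromise in one segment then cannot spread laterally; everything has to squeeze through the one audited bottleneck.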