Friday, November 30, 2012

Forever Alone Supervillain?

I recently read an article by Kevin Kelly on the impossibility of a Hollywood-style lone supervillain killing large numbers of people on his own; he argues that the power of an individual to kill has not increased over time. Even large-scale acts of terrorism depend on teams, not to mention entire networks of support personnel.

Yet this, or any analysis that seeks to predict the future based on current knowledge, cannot help but overlook the possibility of Black Swans. The largest event to date is no guide to even larger events that could occur but have yet to. So is there a fundamental obstacle to mass killing by an individual, or are we less safe than we (or at least Kelly) think we are?

The article offers two main reasons why this should be so: killing large numbers of people is a complex task, and social resistance hinders the recruitment of resources. Which got my inner evil genius wondering whether there are ways to bypass these difficulties.

Perhaps these restrictions only apply to certain types of resources. The capacity to recruit and use physical resources scales linearly with manpower, whereas that for abstract resources does not. David Deutsch has argued that any physical states not forbidden by the laws of nature are necessarily achievable with the right knowledge, matter and energy. Insofar as any such villain seeks an achievable state of affairs, their limiting factor must be knowledge. If their plan has minimal physical dependency, relying on ideas over matériel, then they are no longer limited by their solitude.

Supervillains seem to run up against Ashby's Law of Requisite Variety, which says that a system must be able to take on at least as many states as the autoregulatory system it is trying to modify, i.e. you need to be more adaptable than the thing you are trying to change. But the mind (if used right, anyway) has potentially more Kolmogorov complexity than most of its targets, since those targets can be represented by simpler models within it. And even that level of complexity is not required if a momentary disturbance is all that is desired; the climate will outlive any butterfly, but the much-vaunted butterfly of chaotic alignment can still effect a storm by tipping a fragile system over the edge, without being anywhere near as large as Mothra. As in tai chi, it is wiser to defeat a system by turning its own momentum against it. And it is always easier to perturb something trying to stay still than to defend it against threats of unknown nature.
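To make that butterfly a little less metaphorical, here is a minimal sketch in Python, entirely my own toy and nothing from Kelly's article: a chaotic system (the logistic map at r = 4, an arbitrary choice) is run twice from starting points that differ by one part in ten billion, and the two trajectories soon bear no resemblance to each other. The perturbation adds essentially nothing to the system; it merely redirects momentum that was already there.

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map x -> r*x*(1-x) with r = 4 is fully chaotic, so a
# perturbation of 1e-10 soon comes to dominate the trajectory.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

unperturbed = logistic_trajectory(0.2)
perturbed = logistic_trajectory(0.2 + 1e-10)  # the wing-flap

for t in range(0, 61, 10):
    gap = abs(unperturbed[t] - perturbed[t])
    print(f"step {t:2d}: |difference| = {gap:.10f}")
# By roughly step 35-40 the difference is of order 1: the two futures
# have completely diverged despite near-identical starting points.
```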

What if we rethink our conception of aetiology? We live in a complex world where causal webs are densely interconnected, and Aristotle's four causes become myriad. The assassins of Archduke Franz Ferdinand and his wife triggered World War I and caused the deaths of far more than two people, even if they cannot be held responsible for all of them. Perhaps they weren't even a sine qua non, and war would have broken out eventually anyway, but in our history they became causes. From this perspective, can we really say that a warmonger requires the assistance of thousands of soldiers or an institute of nuclear scientists to kill millions? After all, those are already in position, waiting for just such a one to turn up. Of course, most political systems have checks and balances, but the possibility remains. And what of dangerous ideologies propounded by a single revolutionary? These move populaces, which are not obliged to listen to the other side of the argument. Much more covert triggers are also possible - consider how the professor who rejected Hitler's application to art school could have toppled a chain of dominoes leading to the deaths of millions. This also gives us, chillingly, the possibility of what Edogawa Rampo termed the perfect crime: one for which the criminal could never be discovered or judged guilty. Of course, as in Rampo's short stories, such criminals view these as works of pride and can never resist boasting of them. But by then it is already too late for us.
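To put rough numbers on the domino metaphor, here is another toy model of my own construction (the web size, fan-in and thresholds are all arbitrary): a random "causal web" in which each event fires once a certain fraction of its upstream inputs have fired. Whether a single seed event fizzles out or sweeps the whole web depends on how primed the web already is, which is the sense in which a lone trigger can be a cause without being the whole cause.

```python
import random

def build_web(n, k, rng):
    """Random causal web: each event listens to k random upstream events."""
    return {node: rng.sample([m for m in range(n) if m != node], k)
            for node in range(n)}

def cascade_size(web, seed, threshold):
    """Fire `seed`; an event fires once `threshold` of its inputs have fired."""
    fired = {seed}
    changed = True
    while changed:
        changed = False
        for node, inputs in web.items():
            if node in fired:
                continue
            if sum(i in fired for i in inputs) / len(inputs) >= threshold:
                fired.add(node)
                changed = True
    return len(fired)

rng = random.Random(1914)
web = build_web(n=500, k=4, rng=rng)
for threshold in (0.5, 0.25):  # robust web vs. primed (fragile) web
    sizes = [cascade_size(web, seed=s, threshold=threshold) for s in range(5)]
    print(f"threshold {threshold}: single triggers cascade to {sizes} of 500 events")
# In the robust web each trigger dies alone; in the primed web almost any
# single trigger eventually sets off nearly everything downstream.
```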

Conversely, some actors may sit upstream of many more agents and events that lead to mass murder. They are crucial nodes in the network, and stopping them could have a tremendous positive impact if only they could be identified beforehand. But perhaps it is safer to reduce our vulnerabilities instead. It is too slow to remain stationary and reactive; we must go on the offensive. Testers could seek out weaknesses in society and infrastructure. We could even hire potential supervillains to carry out the reviews, much as companies hire hackers to hacker-proof their networks. Because if vulnerabilities exist, they are going to be discovered eventually, whether intentionally or inadvertently; we might as well make sure they are discovered by someone on our side. Crowdsourcing to the evil-genius-wannabes lurking out there would work, as would designing a supervillain emulator. So perhaps a hostile post-Singularity superintelligence wouldn't be such a bad thing for humanity after all. It would make for a bloody good game of cat-and-mouse, at the very least. Or Pinky-and-the-Brain, as we plot our own demise together.
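As a rough sketch of what "identifying crucial nodes" could mean in practice (a toy network and a brute-force criticality measure of my own invention, not a method anyone has proposed here), one can rank each node by how badly its removal fragments the rest of the network. The same calculation doubles as a vulnerability review, since the nodes it flags are the single points of failure.

```python
from collections import deque

# Toy "crucial node" ranking: score each node by the size of the largest
# connected cluster that survives its removal. Bridge/hub nodes leave the
# smallest survivors, so they are the ones worth identifying in advance.

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"),
         ("e", "f"), ("f", "g"), ("d", "h"), ("h", "i")]  # "d" is a bridge

def adjacency(edges, removed=frozenset()):
    adj = {}
    for u, v in edges:
        if u in removed or v in removed:
            continue
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def largest_component(adj):
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            node = queue.popleft()
            size += 1
            for nxt in adj[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        best = max(best, size)
    return best

for node in sorted({u for edge in edges for u in edge}):
    survivors = largest_component(adjacency(edges, removed={node}))
    print(f"remove {node}: largest surviving cluster = {survivors} nodes")
# Removing the bridge node "d" shatters the network far more than removing
# any peripheral node does.
```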
