Published version: IEEE Computer, January 2023



Hal Berghel

ABSTRACT: The sovereignty of cyberspace is an illusion. So is the dictator's dilemma. But the authoritarian's dilemma is another matter altogether.

Cyber-optimists seem to be in continuous search of cyber-utopias. They are committed to the belief that technology will be able to fix problems that otherwise defy solution. Such beliefs seem to be as natural a part of behavioral modernity in humans as dance, ritual, and the use of tools. Gould and Lewontin refer to such quasi-intellectual evolutionary spinoffs as "biological spandrels." [GOULD] This account is quite creative in combining a useful history of such phenomena while avoiding excessive epistemological baggage. For whatever reason, there seems to be a human propensity for such Panglossian optimism.


I highlighted one example in an article [BERG1] about the so-called Dictator's Dilemma popularized by former Secretary of State George Shultz in 1985:

Totalitarian societies face a dilemma: either they try to stifle these technologies and thereby fall further behind in the new industrial revolution, or else they permit these technologies and see their totalitarian control inevitably eroded. [SCHULTZ]

By technologies, Shultz is referring to those that frame the information age and that contribute to the "free flow of information." Shultz claims that "totalitarian states fear this information revolution perhaps even more than they fear Western military strength." The hyperbole camouflages a transparently false dilemma. Tyrants, dictators, and brutal autocrats have a wide variety of digital tools at their disposal with which to censor and subdue discussion: de-hosting websites, protocol blocking, geo-blocking, bandwidth shaping, throttling, and so on. Fear is usually sufficient to prevent significant blowback from the public. One only has to consult human rights advocacy groups like Human Rights Watch to develop an appreciation for the widespread global oppression in dictatorial regimes where modern computing, telecommunications, and networking are widely available.
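To see how little sophistication some of these controls require, consider a minimal sketch of IP-based geo-blocking of the sort a national gateway might apply. This is an illustrative toy, not any regime's actual tooling; the address ranges are reserved documentation blocks standing in for whatever a censor has chosen to block.

```python
import ipaddress

# Hypothetical blocklist: CIDR ranges a censor has decided to cut off.
# (These are IETF documentation ranges, used here only as stand-ins.)
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client's address falls within any blocked range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

# A gateway would simply drop, reset, or redirect any connection
# for which is_blocked() returns True.
```

A few dozen lines like these, deployed at a handful of national chokepoints, are all the "dilemma" a determined censor ever confronts; bandwidth shaping and throttling are scarcely more complicated in principle.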

And we must also include so-called "illiberal democracies" [ILLIB] in the mix, where some semblance of suffrage is challenged by significant censorship, digital and otherwise. Even two characteristically liberal democracies, the U.S. and the U.K., use censorship in matters claimed to be related to their national security, loosely defined. But it must be admitted that tyrants and dictators find censorship simpler and more direct than even illiberal democracies do: fear, intimidation, and possibly death are much more effective than sedition laws, official secrets acts, memory and gag laws, and the manipulation of mass media by special interests in both the public and private sectors. [MEM-GAG][HERMAN][BENNETT]

Shultz had it completely wrong. Dictators face no such dilemma! Recent political suppression in Iran makes clear [THORECKE][WERON][RAY] that the Dictator's Dilemma is not just a false dilemma, but a naïve political observation that has never been historically or technologically grounded.

A corollary to Kranzberg's First Law [KRANZBERG] is appropriate: from the point of view of geopolitics, technology is neither enabling nor obstructing; nor is it irrelevant. The relevance of technology to politics is a mixed bag: important, to be sure, but its consequences are not always predictable, obvious, or decisive. The reason for this is clear. Technologies do not usually arise from spontaneous, pure intentions. Rather, they are the products of complex motives, mixed intentions, and opaque focus. Once the extent of Zyklon B's lethality was understood, its use raced beyond delousing, fumigation, and pest control. Chemical and biological weapons, the atomic bomb, land mines, torture devices, and sundry anti-personnel weapons all had varied origins and responded to complex interests and intentions. Discussions about them require nuance, as their lineages are typically multi-threaded and variegated.

The dictator's dilemma is but one example of the inauspicious augury associated with misplaced confidence in the sanctity of technology, and especially the sovereignty of the Internet. At the risk of appearing presumptuous, we'll subsume the cluster of such beliefs under the label of the sortilege syndrome. On this account, digital technology is assumed to have powers to right wrongs, save economies, ensure enduring global peace, produce whiter whites without bleach, and cure the common cold. If you think the sortilege syndrome is just plain fantasy, read on. A lot of important, though misguided, people actually believe this stuff.


The idea of superheroes with superpowers is exceptionally appealing and recurs throughout recorded history. There is just something soothing about off-loading the most vexing problems of life to invisible, omniscient, omnipotent forces. While this phenomenon was a staple of the action comic books in American life for much of the twentieth century, I first took note of it in a cultural anthropology lecture in college. The instructor spoke of a Melanesian and Polynesian concept of mana, an invisible life force that could protect and heal, that was as old as their languages. What I found most interesting about the concept of mana was the strength of the popular belief in it despite its fundamentally undependable, unpredictable, and non-confirmable nature. According to the lecturer, mana was considered ubiquitous: it was everywhere at once. However, whether an individual had it at any given moment could only be determined after the fact. Thus, if a villager canoed up a river with all of its attendant dangers and returned with food, it was obvious that the villager possessed mana pro tem. But if he never returned, or was killed en route, the converse was the case. Of course this led cultures to develop rituals to encourage the spread of mana among them.

At this point a little enlightenment would have gone a long way but, then as now, enlightenment was a scarce commodity within the tribe. Were the villagers sufficiently enlightened, they could have observed that mana claims could never be falsified, only retroactively verified: in other words, they were epistemically vacuous. In terms of modern science, we would say that mana claims had zero predictive, explanatory, and descriptive value. This was my first exposure to what I've subsequently labelled the Elephant Bane Gambit. [ELEPHANT] Nobel Laureate Richard Feynman used the phrase Cargo Cult Science to describe similar phenomena. [FEYNMAN] But no matter the label, such limitations didn't disabuse the villagers of their beliefs in mana then, and don't disabuse modern delusionary tribalists of their unsubstantiated opinions and beliefs today.

Social scientists have been studying such phenomena for centuries. Of present concern is the way that this has inserted itself into our evaluation of computer and networking technology. I find this even more fascinating than the story of mana. Although the characters have changed, and the supporting rationales seem more erudite, the motives remain the same: a fundamental desire to understand and deal with uncertainty, to justify a willful optimism, and to fit within a satisfying world order, real or imagined. I would be remiss if I failed to point out early on that the prime support does not come from the technology sector, but rather from those who read partisan advantage into technology advances whether justified or not. This point will become clearer soon.


Most of us recognize the sortilege syndrome as applied to computer and networking systems from stories about the so-called "Twitter revolutions" of recent years, which purportedly demonstrated the relationship between the effectiveness of social activism and the availability of social media and other Internet resources. Many observers have been lulled into the belief that online and networked resources actually made recent social movements like the Twitter Revolution and the Arab Spring possible. Although these beliefs have been largely discredited, [MOROZOV1][MOROZOV2] they simply won't go away (hence the Iran case mentioned above). For that reason, they're a reasonable object of further study.

An early articulation of a blind faith in computing technology is to be found in John Perry Barlow's 1996 Declaration of the Independence of Cyberspace. [BARLOW1] [BARLOW2] Barlow's diatribe included his conviction that “We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” This conviction rings pretty hollow today to the billions of people who live under oppressive regimes. While to some extent social media can have measurable disruptive effects in more democratic societies (e.g., QAnon, 4chan), it has never proven consistently effective in regime change. While in western democracies it remains for the most part true that one may represent virtually any views online without fear of state reprisal, it is folly to think that even then such communication goes unmonitored or unpunished. Should these views become bothersome to the prevailing power elite, whether governmental, corporate, or tribal, consequences of some sort are likely to follow. Memory and gag laws are testimony to that fact.

The reason that Barlow's conviction had currency is that most people are caught up in a technological irrationalism that fails to understand technology within the context of existing cultural, political, and economic realities. This failure also ignores the bulk of the technology absurdism (development of technology that ignores, fails to appreciate, or underrepresents obvious negative externalities) that we face. [ABSURDISM] As a result, society tends to look at individual examples of absurdist technology as isolated cases, when in fact they are endemic byproducts of far greater social problems. Consider the following examples:

Let's be very clear about these negative externalities: they were baked into the product development by design, negligence, or incompetence. The law recognizes responsibilities in such cases under the rubric "knew or should have known." Being incompetent, not knowing what you're doing, or being ignorant of applicable laws are not considered adequate excuses for downstream liabilities. In fact, product liability law recognizes that the consumer is entitled to some degree of professionalism and care from the developers and manufacturers that sell to the public.

Barlow's arrogance betrays the naïve belief that Internet innovators, or innovators of any technology for that matter, could define their own political and economic reality. Too many technologists drank that Kool-Aid, to the peril of society. The idea that advanced technology could be immune to the forces that create leveraged buyouts, anti-competitive practices, monopolies, technology absurdism, environmental threats, and the like is absurd. We are, all of us, subject to the same social and political exigencies.


The false Dictator's Dilemma, the Twitter Revolution, the claimed effectiveness of social media in the Arab Spring, Barlow's naïve optimism, and the like have all taken on lives of their own in political mythology despite the absence of empirical validation. Their currency and staying power are products of the fact that they comport well with a partisan political narrative that buttresses a very narrow global world view, not of any evidence supporting these claims. These myths should be eagerly cast aside with immense scholarly satisfaction.