articles

The Immaterial Aristocracy of the Internet

By Harry Halpin, 5 May 2008
Image: Theo Michael

Taking issue with the argument that, after decentralisation, control is embodied within the protocols of networks, Harry Halpin gives a historical account of the all-too-human actors vying for power over the net. Not technical standards but immaterial aristocrats rule cyberspace and their seats of power are vulnerable to revolutionary attack

 

Is there anything redeeming in the net? It all seemed so revolutionary not so long ago, but today it appears this revolutionary potential is spent. Is this disillusionment symptomatic of the structure of the net itself? Such is the analysis presented in Alexander Galloway and Eugene Thacker's book, The Exploit. However, I think it is problematic at best to forsake the net’s revolutionary potential at this point. My general impression of Galloway’s previous work, Protocol: How Control Exists After Decentralization, is that while it is undoubtedly some of the best work in ‘new media’ studies to be produced in recent years, it leads ultimately not to action but to paranoia. While Galloway notes correctly that protocols ‘are a language that regulates flow, directs netspace, codes relationships, and connects life-forms’, he does not seem to understand that without protocols, communication would be impossible.[1] So while protocols embody and enact the way ‘control exists after decentralisation’, he goes further and concludes that the ‘ruling elite is tired of trees too’ and that due to protocols ‘the internet is the most highly controlled mass media hitherto known’.[2] Unlike his normally lucid (and even occasionally Marx-inspired) analysis, towards the end of Protocol Galloway sounds like a conspiracy theorist of the internet. Has he ever tried setting up a pirate radio station or a public television channel to share information, rather than a website? In the moralising manner characteristic of many American anarchists, all control is viewed as inherently antithetical to any revolutionary project.

Let’s think twice about protocol. Both control and communication are expressed through shared convention; when this entails a voluntarily shared convention, as with a technical communications system that can theoretically transmit any message regardless of its content, then is this really control? Indeed, some minimal organisation, the holding of conventions in common, is necessary for communication to be possible at all, as Davidson and Wittgenstein observed.[3] And if the ‘common’ in communication is necessary for any sort of commons then protocols are necessary and indeed foundational for the emergence of collectivity, including revolutionary kinds. Yet it is far safer to see control as counter-revolution, since this would seem to justify a retreat into critique rather than practice. To his credit, Galloway resists this alternative, and instead posits as revolutionary subject those who seek hypertrophy of the net such as hackers and net artists. But if it seems schizophrenic to think that protocols and networks can be used both within and against capital, then so be it. What is healthier, a schizophrenic out surfing on the net or a paranoiac on the couch?

Instead of the invaluable essay on how networks can be used against networks that I was expecting, a sort of Clausewitz for the modern age, The Exploit tries to push beyond networks to what its authors call the ‘anti-web’. After spending most of the first half of the book going through increasingly self-affirming reflections, characterising protocol as the source of individuation both in DNA and man-made networks, they come to a great conclusion: ‘to be effective, future political movements must discover a new exploit’. Borrowing the hacker term for a piece of software that takes advantage of a bug or glitch, they define the exploit as

a resonant flaw designed to resist, threaten, and ultimately desert the dominant political diagram.[4]

While we must agree that something is needed, the ‘counter-protocol’ proposed towards the end of the book comes down to a focus on the ‘quality of interactions’ and, with the figure of the ‘unhuman’, a rather predictable fetishisation of viruses and swarms – phenomena that are hardly incompatible with networks, incidentally. The ‘Note for a Liberated Computer Language’ with which they conclude provides a useless programming language involving constructs like ‘envision’ and ‘obfuscate’: a sort of retreat into neo-surrealism. If one follows this path, one may well end up concurring that ‘the future avant-garde practices will be those of non-existence’.[5] This would also lead to the non-existence of any movement beyond capitalism, since non-existence in communication networks brings depressing isolation rather than the creation of revolutionary collectivity.

I think the problem with The Exploit is encapsulated in its title, which valorises the system-breaking achievements of so-called ‘hackers’, more often than not script kiddies and scammers pursuing financial gain rather than the self-organisation of the net. Free software pioneer and ‘copyleft’ inventor Richard Stallman illuminatingly describes hacking not as a rejection of humanity, but as the creation of community and the practice of joy:

It was not at all uncommon to find people falling asleep at the lab, again because of their enthusiasm; you stay up as long as you possibly can hacking, because you just don’t want to stop.[6]

The joy of hacking comes from the creation of something new and clever – including protocols – rather than from simply ‘breaking’ into a system while leaving its previous paradigm intact. Breaking into a system to explore how it works would qualify as hacking, while breaking into a system for commercial gain would not. As Richard Stallman explains, ‘hacking means exploring the limits of what is possible in a spirit of playful cleverness’.[7] What better definition also of a revolutionary? Not surprisingly, hackers are the core of the community that creates the protocols of the net.

Galloway is correct to point out that there is control in the internet, but instead of reifying the protocol or even network form itself, an ontological mistake that would be like blaming capitalism on the factory, it would be more suitable to realise that protocols embody social relationships. Just as genuine humans control factories, genuine humans – with names and addresses – create protocols. These humans can and do embody social relations that in turn can be considered abstractions, including those determined by the abstraction that is capital. But studying protocol as if it were first and foremost an abstraction without studying the historic and dialectic movement of the social forms which give rise to the protocols neglects Marx’s insight that

[Technologies] are organs of the human brain, created by the human hand; the power of knowledge, objectified.[8]

Bearing protocols’ human origination in mind, there is no reason why they must be reified into a form of abstract control when they can also be considered the solution to a set of problems faced by individuals within particular historical circumstances. If they now operate as abstract forms of control, there is no reason why protocols could not also be abstract forms of collectivity.  Instead of hoping for an exodus from protocols by virtue of art, perhaps one could inspect the motivations, finances, and structure of the human agents that create them in order to gain a more strategic vantage point. Some of these are hackers, while others are government bureaucrats or representatives of corporations – although it would seem that hackers usually create the protocols that actually work and gain widespread success. To the extent that those protocols are accepted, this class that I dub the ‘immaterial aristocracy’ governs the net. It behoves us to inspect the concept of digital sovereignty in order to discover which precise body or bodies have control over it.

The Network of Networks

Although popular legend has it that the internet was created to survive a nuclear war, Charles Herzfeld (former director of DARPA, the Defense Advanced Research Projects Agency responsible for funding what became the internet) notes that this is a misconception.

In fact, the internet came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators who should have access to them were geographically separated from them.[9]

The internet was meant to unite diverse resources and increase communication among computing pioneers. In 1962, J.C.R. Licklider of MIT proposed the creation of a ‘Galactic Network’ of machines and, after taking charge of computer research at DARPA, he convinced his successors there of the concept’s importance. Under their stewardship the initial ARPANET came into being.

 

Before Licklider’s idea of the ‘Galactic Network’, networks were assumed to be static and closed systems. One either communicated with a network or one did not. However, early network researchers determined that there could be ‘open architecture networking’, where a meta-level ‘internetworking architecture’ would allow diverse networks to connect to each other. Other, more limited ways of interconnecting networks did exist at the time, but

they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.[10]

This concept became the ‘Network of Networks’ or the ‘internet’ – anticipating the structure of later social movements. While the internet architecture provided the motivating abstract concepts, it did not define at the outset a ‘scalable transfer protocol’ – a concrete mechanism that could actually move the bits from one network to another. Robert Kahn and Vint Cerf devised a protocol that took into account four key factors:

1. Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the internet.

2. Communications would be on a best effort basis. If a packet didn’t make it to the final destination, it would shortly be retransmitted from the source.

3. Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained bythe gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

4. There would be no global control at the operations level.

The solution to this problem was TCP/IP. Data sent over the internet is subdivided into ‘packets’ that are all treated independently by the network. TCP (Transmission Control Protocol) divides the data into packets of roughly equal size and hands them to IP (Internet Protocol) for delivery over the network. Each computer has an Internet Number, a four-byte destination address such as 152.2.210.122, and IP routes the packets through various black boxes, such as gateways and routers, which make no attempt to reconstruct the original data from the packets. At the recipient’s end, TCP collects the incoming packets and reconstructs the data. This protocol, which allows communication to continue even when large sections of the network are removed, is the most powerful technological ancestor of the network form of organisation.
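To make this division of labour concrete, here is a minimal sketch in Python – not anything drawn from the protocol documents themselves, merely an illustration using the standard library. The application writes to and reads from a TCP stream, while the packetisation, IP routing and retransmission described above happen invisibly beneath it. The loopback address and port are arbitrary choices for the example.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9090      # arbitrary loopback address and port for this sketch
ready = threading.Event()

def echo_server():
    # TCP listener: accepts one connection and echoes whatever bytes arrive.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                  # signal that the listener is up
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(4096))

threading.Thread(target=echo_server, daemon=True).start()
ready.wait()

# TCP client: the application sees a single reliable byte stream; the
# division into packets, the IP routing and any retransmission are invisible.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"any data, of any size, carried as packets by TCP/IP")
    print(cli.recv(4096).decode())
```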

While the system is decentralised in principle, in reality it is a hybrid with centralised elements. The key mapping of domain names to the IP addresses of individual machines (for example, of www.ibiblio.org to 152.46.7.122) is handled by the hierarchical Domain Name System, whose administration rests with a centralised body, namely ICANN. Furthermore, this entire process relies on a small number of root name servers.
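That centralised dependence is easy to see in practice. The following lookup, a sketch using Python’s standard library, asks the local resolver – and through it the DNS hierarchy whose root ICANN oversees – for the address behind a name; the hostname is simply the example used above.

```python
import socket

# Ask the local resolver - and through it the hierarchical DNS - which
# IP address currently stands behind a human-readable domain name.
hostname = "www.ibiblio.org"   # the example domain used in the text
address = socket.gethostbyname(hostname)
print(f"{hostname} -> {address}")
```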

This system is vulnerable to flaws in the protocols used to exchange routing and domain name information, as exemplified by the Pakistani government’s recent blocking of YouTube, which briefly knocked the site offline well beyond Pakistan’s borders. More radically democratic structures of digital sovereignty could probably prevent such blocking in the first place. Indeed, it is the historical origins and function of these bodies of digital sovereignty that need exploration.

The First Immaterial Aristocracy

Although the internet was started by DARPA as a military-funded research project, it soon spread beyond the rarefied confines of the university. Once news of this ‘Universal Network’ arrived, universities, corporations, and even foreign governments began to ‘plug in’ voluntarily. The internet became defined by voluntary adherence to its open protocols and procedures. The coordination of such world-spanning internet standards soon became a social task that DARPA itself was less and less capable of, and willing to, administer. As more and more nodes joined the internet, the military-industrial research complex seemed less willing to fund and research it, perhaps realising that it was slowly spinning out of its control. In 1984 the US military split its unclassified military network, MILNET, from the internet. No longer purely under the aegis of DARPA, the internet began a political process of self-organisation to establish a degree of autonomous digital sovereignty. Many academics and researchers then joined the Internet Research Steering Group (IRSG) to develop a long-term vision of the internet. With the academics and bureaucrats distracted, perhaps, the job of creating standards and maintaining the infrastructure fell into the hands of the hackers of the Internet Engineering Task Force (IETF). Unlike their predecessors, the hackers often did not possess postgraduate degrees in computer science, but they did have an intense commitment to the idea of a universal computer network.

The organisation of the IETF embodied the anarchic spirit of the hackers. It was an ad hoc and informal body with no board of directors, although it soon began electing the members of the Internet Architecture Board (IAB) – a committee of the non-profit Internet Society that oversees and ratifies the standards process of the net. However, the real actor in the creation of protocols was not the IAB or any other bureaucracy, but the IETF itself. The IETF credo, attributed to the first Chair of the IAB, David Clark, is: ‘We reject kings, presidents, and voting. We believe in rough consensus and running code.’ True to its credo, the IETF operates by a radically democratic process. There are no official or even unofficial membership lists, and individuals are not paid to participate. Even if they belong to an organisation, they must participate as individuals, and participation is entirely voluntary. Anyone may join, and ‘joining’ is defined only in terms of activity and contribution. Decisions do not have to be ratified by consensus or even majority voting, but require only a rough measure of agreement on an idea. IETF members prefer to judge an idea by actual implementation (running code), and arguments are decided by the effectiveness of practice. The structure of the IETF is defined by areas such as ‘Multimedia’ and ‘Security’, subdivided in turn into Working Groups on particular standards such as ‘atompub’, the widely used Atom standard for the syndication of web content. It is in these Working Groups that most of the work of hashing out protocols takes place.

Groups have elected Chairs whose task is to keep the group on topic. Even within the always technical yet sometimes partisan debates, there are no formalities, and everyone from professors to teenagers is addressed by their first name. This kind of informal organisation tends to develop informal hierarchies, and these are regarded as beneficial since they are usually composed of the most dedicated participants, those who volunteer the most of their time to the net: ‘A weekend is when you get up, put on comfortable clothes, and go into work to do your Steering Group work.’ If the majority of participants in the IETF feel that these informal hierarchies are getting in the way of practical work, then the chairs of Working Groups and other informal bureaucrats are removed by a voting process, which happened once to an entire clique of ‘informal leaders’ in 1992. The IETF is also mainly a virtual organisation, since almost all communication is handled by email, although it does hold week-long plenary sessions three times a year which attract over a thousand participants, with anyone welcome. Even at these face-to-face gatherings, most of the truly groundbreaking discussions seem to happen in the still more informal ‘Birds of a Feather’ sessions. The most important products of these list-serv discussions and meetings are the IETF RFCs (‘Requests for Comments’), whose very name demonstrates their democratic practice. These RFCs define internet standards such as URIs (RFC 3986) and HTTP (RFC 1945). The IETF still exists, and anyone can ‘join’ by simply participating in a list given on its homepage. The IETF operates with little explicit financing; many members are funded by their governments or corporate sponsors, but it remains open to those without financing.

The World Wide Web

One IETF participant, Tim Berners-Lee, had the vision of a ‘universal information space’ which he dubbed the ‘World Wide Web’.[11] His original proposal brings his belief in universality to the forefront:

We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities.[12]

The IETF, perhaps due to its own anarchic nature, had produced a multitude of incompatible protocols. While each protocol could enable computers to communicate over the internet, there was no universal format tying the various protocols together. Tim Berners-Lee had a number of key concepts:

1. Calling anything that someone might want to communicate with over the internet a ‘resource’.

2. Giving each resource a Universal Resource Identifier (URI) that allowed it to be identified and perhaps accessed. The word ‘universal’ was used to ‘emphasize the importance of universality, and of the persistence of information’.

3. Simplifying hypertext into a human-readable format for data on the web, so that any document could link to any other document.

These three principles formed the foundation of the World Wide Web. In the IETF, Berners-Lee, along with many compatriots such as Larry Masinter, Dan Connolly, and Roy Fielding, spearheaded development of URIs, HTML (HyperText Markup Language) and HTTP (HyperText Transfer Protocol). As Berners-Lee says, the creation of protocols was key to the web since, ‘by being able to reference anything with equal ease’ through URIs, ‘a web of information would form’ based on

the few basic, common rules of ‘protocol’ that would allow one computer to talk to another, in such a way that when all computers everywhere did it, the system would thrive, not break down.[13]

In fact, the design of the web on top of the physical infrastructure of the internet is nothing but protocol.[14]
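These ‘few basic, common rules’ can be spoken directly. The sketch below is a minimal illustration in Python, not a reconstruction of any historical software: it opens a TCP connection to a web server, sends a plain-text HTTP request for the resource named by a URL, and receives HTML in return. The host is simply the author’s site mentioned elsewhere in this piece; any public web server would do.

```python
import socket

# Fetch the front page of a site by speaking raw HTTP over TCP: the whole
# web rests on this small, text-based protocol layered over the internet.
host = "www.ibiblio.org"          # example host; any public web server works
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    f"Connection: close\r\n\r\n"
)

with socket.create_connection((host, 80)) as conn:
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

# The response begins with HTTP headers, followed by the HTML of the page.
print(response.decode("utf-8", errors="replace")[:500])
```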

However, Berners-Lee was frustrated by the IETF, who, in typically anarchic fashion, rejected his idea that any standard could be universal. At the time a more hierarchical file-retrieval system known as ‘Gopher’ was the dominant way of navigating the internet. In one of the first cases of digital enclosure on the internet, the University of Minnesota decided to charge corporate (but not academic and non-profit) users for the use of Gopher, and the system immediately became a digital pariah. Berners-Lee, seeing an opening for the World Wide Web, surrendered to the IETF and renamed URIs ‘Uniform Resource Locators’ (URLs). Crucially, he got CERN (the European Organisation for Nuclear Research) to release any intellectual property rights it had to the web, and he also managed to create running code for his new standard in the form of the first web browser. Berners-Lee and others served primarily as untiring activists, convincing talented hackers to spend their time creating web servers and web browsers, as well as navigating the political and social process of creating web standards. Within a year the web had spread across the world. In what might be seen as another historical irony, years before the idea of a universal political space was analysed by Hardt and Negri as ‘Empire’, hackers both articulated and created a universal technological space.

A Crisis in Digital Sovereignty

In the blink of an eye, adoption of the web skyrocketed and the immaterial aristocracy of the IETF lost control of it. Soon all the major corporations had a website. They sent their representatives to the IETF in an attempt to discover who the powerbrokers of the internet were, but instead found themselves immersed in obscure technical conversations and mystified by the lack of any formal body of which to seize control. Instead of taking over the IETF, corporations began ignoring it. They did this by violating standards in order to gain market adoption through ‘new’ features. The battle for market dominance between the two largest opponents, Microsoft and the upstart Netscape, was based on an arms race of features supposedly created for the benefit of web users. These ‘new features’ in reality soon led to a ‘lock-in’ of the web where certain sites could only be viewed by one particular commercial browser. This began to fracture the rapidly growing web into incompatible corporate fiefdoms, building upon the work but destroying the sovereignty of the IETF. Furthermore, the entire idea of the web as an open space of communication began to be challenged, albeit unsuccessfully, by Microsoft’s concept of ‘push content’ and channels, which in effect attempted to replicate television’s earlier hierarchical and one-way model on the internet.

Behind the scenes, the creators of the web were horrified by the fractures the corporate browser wars had caused in their universal information space. In particular, Tim Berners-Lee felt that his original dream had been betrayed by corporations trying to create their own mutually incompatible fiefdoms for profit. He correctly realised it was in the long-term interests of both corporations and web users to have a new form of digital sovereignty. With the unique but informal status Berners-Lee enjoyed as the ‘inventor of the Web’ (although he freely and humbly admits that this was a collective endeavour), he decided to reconstitute digital sovereignty in the form of the World Wide Web Consortium (W3C). This non-profit organisation was dedicated to

leading the Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web.[15]

Corporations had ignored the IETF’s slow and impenetrable processes, so Berners-Lee moved from a model of absolute to representative democracy. The major corporations would understand and embrace this, and the web would harness their power while preserving its universality. With the rapid growth of the web, Berners-Lee believed that an absolute democracy based on informal principles could not react quickly enough to the desires of users and prevent corporations from fracturing universality for short-term gain. Unlike the IETF, which only standardised protocols that were already widely used, the W3C would take a proactive stance, deploying standardised universal formats before various corporations or other forces could deploy their own. Berners-Lee was made director for life of the W3C, and ultimately his decision remains final, constituting a sort of strange immaterial monarchy. Since Berners-Lee has historically refrained from using his formal powers as director, approving what the membership decides, his personal power is minimised. The W3C marks the shift from the radical and open anarchy of the IETF to a more closed and representative system.


Digital Sovereignty Returns

W3C membership was open to any organisation, whether commercial, educational, governmental, for-profit or not-for-profit. Unlike the IETF, membership came at a price: $50,000 for corporations with revenues in excess of $50 million, and $5,000 for smaller corporations and non-profits. It was organised as a strict representative democracy, with each member organisation sending one representative to the Advisory Committee. However, in practice it allowed hacker participation by keeping its lists public and allowing non-affiliated hackers to join its Working Groups for free under the ‘Invited Expert’ policy. By opening up a ‘vendor neutral’ space, companies previously ‘interested primarily in advancing the technology for their own benefit’ could be brought to the table. This move away from the total fiscal freedom of the IETF reflected the increasing amount of money at stake in the creation of protocols, and the money needed to run standards bodies. Rather shockingly, when the formation of the W3C was announced, both Microsoft and Netscape agreed to join. As a point of pride, Netscape even paid the full $50,000 fee, though it wasn’t required to.

Having the two parties most responsible for fracturing the web at the table provided the crucial breakthrough for the W3C. It allowed the consortium to begin standardising HTML in a vendor-neutral form that would allow web pages to be viewed in any standards-compliant browser. Berners-Lee’s cunning strategy of enveloping the corporations within the digital sovereignty of the W3C worked:

The competitive nature of the group would drive the developments, and always bring everyone to the table for the next issue. Yet members also knew that collaboration was the most efficient way for everyone to grab a share of a rapidly growing pie.[16]

The original universal vision of the web was inscribed into the W3C’s mission statement: to expand the reach of the web to ‘everyone, everything, everywhere’. Other widely used standards, such as XML, have also come out of the W3C. However, with the web growing rapidly in the era of ‘web 2.0’, the W3C itself is now seen as slow and unwieldy, its political process overwhelmed by corporate representatives. With Google’s rise to a new hegemonic position as the premier search engine, the web is increasingly centred on this highly secretive organisation, in a manner reminiscent of Microsoft’s monopolisation of the personal computer. Key members of the IAB and other protocol bodies, such as Vint Cerf, are now also Google employees.

One example of this new political terrain is social networking. The primary way most new users now interact with the web is torn between Facebook and MySpace, heavily associated with Microsoft and Google respectively. Users of and developers for these services are increasingly tired of their data being hoarded by these companies in closed silos. DataPortability.org, a more anarchic body that may signal a return to the heavily decentralised governance typical of the IETF, represents an effort to open up this data. In its latest redesign of HTML, the W3C has tried to open itself to a more IETF-like, radically democratic process, allowing hundreds of unaffiliated hackers to join for free. The next few years will determine whether the web centralises under either Google or Microsoft, or whether the W3C can prevent the next digital civil war. The immaterial aristocracy is definitely changing, and its next form is still unclear. Perhaps, in step with the open and free software movements, as the level of self-organisation of web developers and even users grows and they become increasingly capable of creating and maintaining these standards themselves, the immaterial aristocracy will finally dissolve.

Beyond Digital Sovereignty

This inspection of the social forms, historical organisation, and finances of the protocol-building bodies of the net is not a mere historical excursion. It has consequences for the concrete creation of revolutionary collectivity in the here and now. Many would decry the very idea that such collectivity can be developed through the net as utopian. In the face of imperialist geopolitics masquerading behind the war on terror and the rampant paranoia accompanying it, such a utopian perspective is revolutionary. Clearly, a merely utopian perspective is not enough; it needs to be combined with concrete action to move humanity beyond capital. One critique of Michael Hardt and Antonio Negri’s concept of ‘the multitude’ as the new networked revolutionary agent is that its proponents have no concrete plan for bringing it from the virtual to the actual. Fashionable post-autonomism in general leaves us with little else but utopian demands for global citizenship and social democratic reforms such as a guaranteed basic income. An enquiry into the immaterial aristocracy can help us recognise the social relations that determine the technological infrastructure which enables the multitude’s social form, without disappearing into ahistoricism.

The technical infrastructure of the web itself is a model for the multitude:

The internet is the prime example of this democratic network structure. An indeterminate and potentially unlimited number of interconnected nodes communicate with no central point of control, all nodes regardless of territorial location connect to all others through a myriad of potential paths and relays.[17]

Our main thesis is that the creation of the protocols which comprise the internet was not the work of sinister forces of control, but the collective work of committed individuals, the immaterial aristocracy. What is surprising is how little empirical work has been done on this issue by political revolutionaries – with a few notable exceptions such as the anarchist Ian Heavens. Yet the whole development of the internet could easily have turned out otherwise. We could all be on the Microsoft Network, and we are dangerously close to having Google take over the web. One can hear the echo of Mario Tronti’s comments on the unsung struggles of the working class:

[…] perhaps we would discover that ‘organisational miracles’ are always happening, and have always been happening.[18]

The problem is not that ‘the hardest point is the transition to organisation’ for the multitude.[19] The problem of the hour is the struggle to keep the non-hierarchical and non-centred structure of the web open, universal, and free so as to further enable the spread of new revolutionary forms of life – although the cost is that the spread of capital follows not far behind. The dangers of a digital civil war are all too real, with signs ranging from the great firewall of China, to the US military plans revealed in its Information Operations Roadmap to ‘fight the net as it would a weapons system’, to the development of a multi-tier net that privileges the traffic of certain corporations willing to pay more, in effect crippling many independent websites and file-sharing programs. Having radicals participate in open bodies like the W3C and IETF may be necessary for the future survival of the web.

There is no Lenin in Silicon Valley plotting the political programme of the network revolution. The beauty of the distributed network is that it makes the very idea of Lenin obsolete. Instead of retreating into neo-surrealism as The Exploit does, revolutionaries should be situationists, creating situations in which people realise their own strength through self-organisation. These situations are created not just by street protests and struggles over precarious labour, but through technical infrastructure. One example par excellence would be how the internet enabled the communication networks that created the ‘anti-globalisation’ movement. Of course, nets are not synonymous with revolution or even anti-capitalism; the use of the net by corporations and governmental bodies to coordinate globalisation far outweighs its use by the ‘anti-globalisation’ movement. Still, given the paucity of any alternative put forward by Galloway and Thacker, the thesis that the very nature of protocol is inherently counter-revolutionary seems a theoretical dead end. It would be more productive to acknowledge that political battles around net protocols are increasingly important avenues of struggle, and the best weapon in this battle is history. A historical understanding of the protocols of the net can indeed lead to better and more efficient strategic interventions.

‘Hackers’ and net artists’ struggles against protocol are not the only means of liberation; the vast majority of these interventions are unknown both to the immaterial aristocracy and to anyone outside the circles of ‘radical’ digerati. Instead, we should see the creation of new protocols as a terrain of struggle in itself. The best case in point might be the creation of the Extensible Messaging and Presence Protocol (XMPP), which took instant messaging out of the hands of private corporations like AOL and allowed it to be implemented in a decentralised and open manner. This in turn allowed secure technologies like ‘Off-the-Record’ instant messaging to be developed, a technology that can mean the difference between life and death for those fighting repressive regimes. This protocol may become increasingly important even in Britain, since it is now illegal to refuse to give police private keys for encrypted email. These trends are important for the future of any revolutionary project, and the concrete involvement of radicals in this particular terrain of struggle could be a determining factor in the future of the net. Protocol is not only how control exists after decentralisation. Protocol is also how the common is created in decentralisation, another expression of humanity’s common desire for collectivity.
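To give a sense of how open such a protocol is, the sketch below builds a bare XMPP message stanza using Python’s standard library – the same small piece of XML that any compliant client or server, commercial or home-grown, can read. The addresses are placeholders rather than real accounts, and a real client would of course send this over an authenticated, encrypted stream.

```python
import xml.etree.ElementTree as ET

# A minimal XMPP message stanza: a <message> element routed between two
# users on different, independently run servers - no central silo required.
message = ET.Element("message", {
    "from": "alice@example.org",   # placeholder sender
    "to": "bob@example.net",       # placeholder recipient on another server
    "type": "chat",
})
ET.SubElement(message, "body").text = "Meeting moved to 7pm."

# Prints the stanza as a single line of XML, ready to be written to the stream.
print(ET.tostring(message, encoding="unicode"))
```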

Harry Halpin <hhalpinATibiblio.org> is a researcher at the School of Informatics at the University of Edinburgh specialising in web technologies, and is a Chair of the W3C and a participant in the IETF. He enjoys reading critical theory and new media studies before collapsing to sleep.  And he used to live in a tree. http://www.ibiblio.org/hhalpin

Footnotes

[1] Alexander Galloway, Protocol: How Control Exists After Decentralization, Cambridge, MA: MIT Press, 2004, p. 74.

[2] Ibid, pp. 242-243.

[3] In particular, see Ludwig Wittgenstein, Philosophical Investigations, trans. G.E.M. Anscombe, Oxford: Blackwell, 1963, and Donald Davidson, ‘On the Very Idea of a Conceptual Scheme’, Proceedings and Addresses of the American Philosophical Association, Vol. 47, 1973, pp. 5-20.

[4] Alexander Galloway and Eugene Thacker, The Exploit: A Theory of Networks, Minneapolis, MN: University of Minnesota Press, 2007, pp. 20-21.

[5] Ibid, 2007, p. 136.

[6] Sam Williams, Free as in Freedom: Richard Stallman’s Crusade for Free Software, O’Reilly Media, 2002, http://www.oreilly.com/openbook/freedom/

[7] Ibid.

[8] Karl Marx, Grundrisse, http://www.marxists.org/archive/marx/works/1857/grundrisse/

[9] B. Leiner, V. Cerf, et al, A Brief History of the Internet, 2003, http://www.isoc.org/internet/history/brief.shtml

[10] Ibid.

[11] Tim Berners-Lee, Information Management: A Proposal, CERN, 1989, http://www.nic.funet.fi/index/FUNET/history/internet/w3c/proposal.html

[12] Ibid.

[13] Tim Berners-Lee, Weaving the Web, Harper Press, 1999, p.4.

[14] Ibid, p.36.

[15] Berners-Lee, http://www.w3.org/Consortium/

[16] Op. cit., 1999, p. 138.

[17] Michael Hardt & Antonio Negri, Empire. Cambridge, MA: Harvard University Press, 2000, p. 299.

[18] Mario Tronti, Lenin in England, Classe Operaia (Working Class) No. 1, January 1964, http://www.geocities.com/immateriallabour/trontilenin-in-england.html

[19] Ibid.