Connecting People Apart

ISBN 978-1-906496-78-4 (published February 2012)

An eBook compiled from the Mute magazine article archive for the Post-Media Lab.

This eBook is free to use but if you would like to support Mute’s publishing work please send us a ten euro/pound/dollar donation to our PayPal account pay@metamute.org.

eBook made with Mute’s new online eBook conversion engine, ‘Progressive Publishing System’, at https://mute-www.lshift.net. Email us if you would like to make your own eBooks with a private beta account: simon@metamute.org. The system is in beta, so please bear with us while we fix the bugs.

Cover artwork: Theo Michael

Copyright Mute Publishing, 1994 - 2012

Legend

Except for that which originally appeared elsewhere and is republished here, all content is copyright Mute and the authors. However, Mute encourages use of its content for purposes that are non-commercial, critical, or disruptive of capitalist property relations. If you are in any doubt as to whether an intended use meets these criteria, please contact us <mute@metamute.org> before publishing. In any case, make sure you credit the author and Mute as the original publisher.

This legend is devised in the absence of a licence which adequately represents Mute's contributors' respective positions on copyright, and to acknowledge but deny the copyrighting performed by default where copyright is waived.

Mute content and commissions are considered copyleft and are freely available for personal use or if you are a non-commercial organisation.

http://metamute.org/

http://postmedialab.org/

Credits: compilation Josephine Berry Slater, production Simon Worthington, cover layout Jerlyn Jareunpoon.

Table of contents

Articles

Media Lab Culture in the UK By Charlotte Frost, 2011

The Immaterial Aristocracy of the Internet By Harry Halpin, 5 May 2008

InfoEnclosure 2.0 By Dmytri Kleiner & Brian Wyrick, 29 January 2007

Special Insert: Net.Politics (The revolution shall not be criticised?) By Mute Editor, 1998

Open Source Development By Gilberto Câmara, 12 January 2004

Harvest Time on the Server Farm (Reaping the Net's Body Politic) By Roy Ascott, Sara Diamond, Geert Lovink and Pauline van Mourik Broekman, 10 September 2000

Mute in Conversation with Nettime (Pit Schultz) (Digital Publishing Feature) By Pauline van Mourik Broekman, 10 January 1997

Media Lab Culture in the UK

by Charlotte Frost (2011)

...the machine is always social before it is technical.

(Gilles Deleuze).


Source URL: http://www.artscouncil.org.uk/what-we-do/our-priorities-2011-15/digital-innovation/digital-resources/collaboration-and-freedom/essays-and-interviews/

Though the term ‘lab’ conjures the image of a fairly sanitised environment optimised for scientific experiments and populated by people in white coats, media labs – centres for creative experimentation – are quite different. At their most basic, they are spaces – mostly physical but sometimes also virtual – for sharing technological resources like computers, software and even perhaps highly expensive 3D printers; offering training; and supporting the types of collaborative research that do not easily reside elsewhere. In the early-to-mid-1990s, partly propelled by the exciting possibilities of the internet and associated web browser technologies, groups began to coalesce, bent on developing access to the inherent potential of collective creativity. With the exuberant new dot.com businesses fuelling a ‘creative economy’, the Californian ‘cybercafé’ (surf the internet and slurp the coffee) was emulated in urban centres around the UK and in some cases artists were heavily involved. They saw the internet’s myriad ways of changing the way we make, think about and share art – not to mention its capacity for social empowerment – and wanted to harness these qualities quickly and effectively. With many practitioners coming from the spaces, practices and communities forged by the independent film and video movement, the phenomenon of the UK media lab was born. However, despite the importance of these spaces as the hybrid homes of the then emergent and now embedded creative activities that characterise today’s rich field of digital and media practices, their history and contribution to current lab environments has been little discussed outside a niche arena.

Early Media Labs

Two of the earliest UK media labs were Artec and Backspace (aka Bakspc), both based in London. Artec, which was established in 1990, was initially funded by Islington Council and ESF (the European Social Fund), but soon won additional support from Arts Council England. Conceived by Frank Boyd and Derek Richards, its focus from the outset was to deploy technology for social empowerment and, early on, it provided valuable professional training to the long-term unemployed. In this sense, it did not operate from within an arts context proper, but combined art and technology in the name of social integration. Creative projects were led by Graham Harwood, whose own artistic practice and his collective Mongrel were formed through associations at Artec.

Harwood and Mongrel’s practice is known widely for scrutinising social, political and cultural divisions through a framework of technology. A notable piece from this period was Rehearsal of Memory (1995), which took the collective experiences of staff and patients at Ashworth high security mental hospital, near Liverpool, and presented them as a unified and anonymous computer-based group portrait. Now available as a CD-ROM, the work strongly undermines the assumptions we make about mental health, blurring the line between those branded ‘normal’ or not. It is an excellent example of the way artists and media labs habitually combine creative activities with technology to give people a renewed agency. Around 1995, Peter Ride was brought on board to curate a stream of activity called Channel, which led to further powerful artworks including Ubiquity (1997) by David Bickerstaff and Susan Collins’ In Conversation (1997).

Without regular public funding, Backspace started out as an independent self-organised cybercafé. Initiated by James Stevens as a ‘soft space’ adjunct to his commercial web design business, Obsolete, it had a physical studio and lounge on Clink Street. People could drop in and use the web access and computer terminals in exchange for a nominal membership fee and a commitment to maintain the space. What is notable about the Backspace model is how it attempted to foster a co-operatively managed resource. It exemplified a preoccupation amongst internet culture devotees with autonomy and new forms of governance, and struggled with all the contradictions of such ideals alongside the fact of its commercial parent entity. Obsolete shared its (at that time) capacious bandwidth. This gave people web hosting and streaming capabilities that would otherwise have been prohibitively expensive; allowed for the hosting of many artistic projects produced within the space itself; and facilitated many early streaming experiments with link-ups to other European media labs, including E-lab in Riga, Latvia and Ljudmila in Ljubljana, Slovenia. The list of early attendees and co-facilitators of Backspace includes some central figures of the Digital and New Media art fields: Matt Fuller, Simon Pope, Armin Medosch, Heath Bunting, Ruth Catlow, Pete Gomes, Manu Luksch and Thomson and Craighead – even Turner Prize winner Mark Leckey was a regular for a while.

Globally distributed discussion networks provided a discursive layer for these media labs, with early mailing lists such as Nettime, Rhizome and Syndicate forging international connections around technology, art and politics. Likewise, Mute (at first a newspaper, then a glossy magazine, now a web journal) provided regular critical commentary on burgeoning digital culture.

Foundationally different, Artec and Backspace were united by a belief in the importance of access to tools and training within a social context. In slightly differing ways, they put creative experimentation and social concerns at the centre of the agenda via technology. This was to become an important organisational strategy for this sector. Though both spaces have since closed, Stevens continues to build social and technological infrastructure as Deckspace, at Borough Hall, Greenwich. Without a physical space, Frank Boyd has evolved his media lab system into an industry-orientated programme called Crossover, which assembles creative professionals to workshop cross-platform ‘experiences’ from a variety of creative arenas including film, TV and the computer games industry. Crossover is one of many peripatetic media lab models that privilege collaborative creative processes, although it is more goal-orientated than most, as participants often pitch to a panel of industry commissioners.

Process over Product

With less of an eye on industry and an abiding interest in the creative process itself, PVA MediaLab was formed in 1997 by artists Simon Poulter and Julie Penfold. In its first incarnation, it took up residence at Dartington College of the Arts, with funding from South West Arts. While there, artists were offered a well-equipped space in which to experiment with technology and develop ideas. In fact it is this developmental freedom that forms another core operational component of the media lab. Rather than asking artists to arrive with pre-formulated projects, or expecting them to see a piece through from start to finish, media labs have consistently placed value on self-determined exploration. PVA helps artists to manufacture methodologies rather than final artworks, fully designed products or content packages. They have also led the way in assisting other media labs to produce a similar system, through their Labculture programme. Highly itinerant, the Labculture model adjusts itself to host organisations, like Vivid, in Birmingham, so they can learn how to set and achieve goals while building the sorts of lasting partnerships that will sustain future activity.

This shared or Open Source way of working integral to media lab culture is also exemplified by GYOML (Grow Your Own Media Lab). A collaborative project between media labs Folly, Access Space and the Polytechnic, GYOML was designed to help generate more media lab initiatives. It has included: ‘GYOML in a Kitchen’, a sound recording and editing workshop by Steve Symons (Lancaster); ‘GYOML in a Van’, which staged an introductory workshop in media-lab culture for community group leaders (Lancaster); a game-centred ‘GYOML for teenagers’ (Rochdale); and ‘GYOML at the Canteen’, catering to film-makers and professional artists with an interest in open source (Barrow-in-Furness). Legacies of this project include the Digital Artists Handbook, an impressive guide to Open Source tools and techniques and ‘Grow Your Own Media Lab (the graphic novel)’, a set of inspiring case studies. Folly continue to work very much in this manner, forming essential infrastructural relationships as and where needed and guiding others through the adoption of free software.

Another example of this attention to operation and openness comes from GIST Lab, in Sheffield, which energises community-based projects through a space that hosts meetings and workshops. Even without a dedicated tech suite, their knowledge-exchange is a short-cut to all manner of original cross-over work, and they have supported yet another project that literally and metaphorically recreates aspects of the media lab model. 3D printing (or rapid prototyping) is increasingly popular for producing anything from car parts to jewellery by layering materials like plastic into finished three-dimensional objects. RepRap, however, is able to print the spare parts needed to build another RepRap while it is still itself under construction. Just like media labs, this self-replicating 3D printer is all about sharing access to a successful system.

Ideas over Technology

If media labs are not driven by material production, neither are they all about technology. Arising from the work of the art group Redundant Technology Initiative, Access Space in Sheffield established its media lab around free and recycled technology and learning. Given our cultural predisposition for wanting the latest, fastest equipment, and our reprehensible dumping of perfectly serviceable technology, abundant hardware can be sourced from all manner of locations. The latest Free and Open Source software is installed on hardware where expensive proprietary software once lay, and the media lab space, complete with this equipment, is opened to the public five days a week. The one proviso placed on this access – continuing the recycling theme – is that once a media lab participant has learnt how to do something, they should pass this knowledge on. As evidence of the success of this system, Access Space boasts an impressive outreach record: more than a thousand regular visitors, of whom only about thirty-five percent are university educated and over half are unemployed; the lab also habitually works with people experiencing disabilities, learning disorders, poor health, homelessness or other forms of exclusion.

One of the projects that clearly shows what they do is Zero Dollar Laptop, a collaboration with the Furtherfield organisation and community. Through a series of workshops, homeless participants learn to use and maintain a free laptop, complete with free software, in self-led creative projects. It is this model of learning through self-directed creativity that arises again and again in media labs, because it provides demonstrable results in helping people acquire and retain the skills they need. Without ‘bells and whistles’ new technology, Access Space emphasise the importance of ideas over technology and demystify all manner of computer-based skills.

SPACE Studio’s MediaLab is also an excellent example of a lab working at a range of levels to offer beneficial specialised training. They teach software packages at a professional level to film makers, artists and a range of media industry workers, as well as offering film-making and media training for NEET (Not in Education, Employment or Training) teenagers in the local area. There are also a number of DIY Technology workshops, including those regularly hosted by MzTEK, who have expanded their operation as a result of their connections with SPACE. MzTEK are all about encouraging women to build technical skills and enter the new media sector; growing from a small group into a wide and supportive network, they address otherwise underdeveloped areas of knowledge. In addition to this, SPACE’s PERMACULTURES residency series has, to date, hosted eight residencies supporting over eleven artists, helping them explore technology and go on to show in a range of spaces.

Partnering Galleries

The media lab also plugs an important gap in the art gallery and museum network. Digital and New Media arts are distinctive for collapsing the boundary between the place of production and the place of exhibition. As a result, few existing art spaces have been in a position to fully represent them. Media labs, as well as community websites like Furtherfield and Rhizome, international festivals including ISEA and Transmediale, and curatorial resources like CRUMB (the Curatorial Resource for Upstart Media Bliss), have imaginatively responded to this situation. Media labs in particular have been very successful in fostering relationships between artists and galleries. They have helped to translate not only the ideas expressed by this type of art – which can require much additional contextualisation – but also its physical installation in spaces not designed for this new breed of work.

For example, Folly recently collaborated on an experiment in the exhibition and acquisition of New Media art with the Harris Museum and Art Gallery. Entitled Current, the project saw expert panels first select works to be exhibited at the gallery (in spring 2011) and then choose one to enter the permanent collection. Not only did this give the gallery the chance to add a timely contemporary work to its collection, but it also formed a useful public case study showing other institutions how they might engage with emergent art forms in various new media.

Collaboration, Interdisciplinarity and the University

Media labs greatly contribute to the collaborative working methods the creative sector now thrives upon. Cross- or interdisciplinary partnerships involve people from very different industries or working cultures combining and even reinventing the way they work in order to unearth all manner of new practices and products. Many universities, having borne witness to a boom in research which straddles different academic subjects and industry sectors (due in some part to government funding imperatives around ‘knowledge transfer’), have established their own media labs. A relatively early example was i-DAT (the Institute of Digital Art and Technology) at the School of Computing, Communication and Electronics at the University of Plymouth. A large project with many interrelated strands is its op-sys (operating systems) network of research into architectural, biological, social and economic data and how it can be made publicly available and useful.

The University of Nottingham has the Mixed Reality Lab, which was established in 1999 with £1.2 million in funding from the JREI (Joint Research and Equipment Initiative) programme as well as ongoing grants and investments. Run by Steve Benford, it hosts around eighteen PhD students, providing resources for researchers and post-graduates working in areas that intersect its host department, the School of Computer Science, and its associated training facility, the Horizon Doctoral Training Centre. It maintains a number of diverse projects, some of which have won prestigious awards and award nominations, including Can You See Me Now, a collaboration with Blast Theory. The CoDE (Cultures of the Digital Economy) Institute at Anglia Ruskin University in Cambridge has a digital performance laboratory that focuses on sound-based work. Culture Lab is Newcastle University’s bespoke unit of media-lab-style flexibility, where artists work experimentally and across disciplines, and Sandbox, a similar resource, is located at the University of Central Lancashire.

Another approach for universities is to partner with existing media labs. Pervasive Media Studio, a Bristol-based media lab, was set up by Watershed, a cross-artform production organisation, with HP Labs and the South West Regional Development Agency. The Studio has a three-year partnership with the University of the West of England’s Digital Cultures Research Centre and works in a number of different ways, including offering Graduate and New Talent residencies for those just starting out in their careers. The Pervasive Media Studio has helped to establish events like Igfest, the Interesting Games festival, held annually in Bristol, as well as development platforms such as Theatre Sandbox, which helps theatre makers introduce technology to their practice. They also support artists including AntiVJ, Duncan Speakman and Luke Jerram.

Current Media Labs and the rise of the ‘HackLab’

As we have seen, some labs have been nomadic or temporary while others have evolved into new incarnations. A media lab might sit within an array of institutional dependencies and responsibilities – e.g. Folly, Isis Arts, Lighthouse, Pavilion, Pervasive Media Studio, PVA, Vivid and more – all of which regularly produce an abundance of quality experimentation in Digital art and culture. Meanwhile, new incarnations of the media lab respond to three distinct but related phenomena: the rapidly evolving technology sector; the transient networks of geeks and digital experimenters; and the need for sustainable models for innovation in industry.

MadLab, in Manchester, provides space and facilitates meetings and workshops for ‘geeks, artists, designers, illustrators, hackers, tinkerers, innovators and idle dreamers’. Their ‘drop in’ events, commonly known as ‘Hacklabs’ (for example ‘Hack to the Future’ during the Edinburgh International Science Festival), give people instant hands-on experience with all sorts of code and kit. Although hacking is still seen as a specialist and somewhat murky activity, the term is increasingly being decoupled from its conventional criminal associations and brought into mainstream arts territory. In January 2011 the Royal Opera House facilitated a ‘Culture Hack Day’, bringing cultural organisations such as the Crafts Council and UK Film Council together with software developers and creative technologists to usefully open up and share data. Other HackLabs may have less of an arts focus, but do have impressive resources built using the open membership model (pioneered by the likes of Backspace). The London Hackspace boasts a laser cutter, digital oscilloscope and kiln, all donated or collectively purchased.

Scattered through many of our city centres are office/studio-based working spaces which cater to the creative industries by offering flexible working environments and abundant networking and training opportunities. The Hub, in London’s Islington and Kings Cross areas (with up to thirty further Hubs in cities across the globe), gives fee-paying members access to facilities and a way of working orientated towards connecting people from across the network in cost-effective innovation. These spaces are indicative of the emphasis placed on the creative economy as the big hope for economic renewal driven by small entrepreneurs grabbing and shaping the opportunities in technology, entertainment and design.

Inspirational before Institutional

Looking briefly at some of the ways media labs have operated since the 1990s shows them as uniquely fertile spaces for all manner of shared expertise and creative innovation. They have made a fundamental contribution to Open Source culture. Working as openly and collaboratively as possible, participants have found ways of sharing process and product, while an interdisciplinary outlook has revealed a plethora of creative possibilities. Fulfilling a difficult remit by offering a home for many of the emergent artistic practices currently transforming artistic activity, they have led us away from ‘art for art’s sake’ and towards work which has demonstrable meaning and lasting social and economic benefit. Large institutions might be extremely well-versed in mounting financially advantageous blockbuster exhibitions, but the beauty of media labs derives from their ability to develop and disseminate the socially-transformative systems that have already shaped, and will continue to shape, the future of the arts.

(A big thank you to everyone who contributed to this research despite their incredibly busy schedules and a special shout to: Simon Poulter for pulling over his car, Clive Gillman for kindly kicking things off, Sarah Cook for an innovative approach to note sharing and Peter Ride for not taking a lunch break.)

The Immaterial Aristocracy of the Internet

By Harry Halpin, 5 May 2008

Source URL: http://www.metamute.org/editorial/articles/immaterial-aristocracy-internet

Featured in Mute Magazine (May 2008) Vol 2, No. 8 − Zero Critical Content/No Added Aesthetics - Buy online £5 http://www.metamute.org/shop/magazine/mute-vol-1-no.-18-%E2%80%93-i-am-network

Taking issue with the argument that, after decentralisation, control is embodied within the protocols of networks, Harry Halpin gives a historical account of the all-too-human actors vying for power over the net. Not technical standards but immaterial aristocrats rule cyberspace and their seats of power are vulnerable to revolutionary attack

Images: Theo Michael

Is there anything redeeming in the net? It all seemed so revolutionary not so long ago, but today it appears this revolutionary potential is spent. Is this disillusionment symptomatic of the structure of the net itself? Such is the analysis presented in Alexander Galloway and Eugene Thacker's book, The Exploit. However, I think it is problematic at best to forsake the net’s revolutionary potential at this point. My general impression of Galloway’s previous work, Protocol: How Control Exists After Decentralization, is that while it is undoubtedly some of the best work in ‘new media’ studies to be produced in recent years, it leads ultimately not to action but to paranoia. While Galloway notes correctly that protocols ‘are a language that regulates flow, directs netspace, codes relationships, and connects life-forms’, he does not seem to understand that without protocols, communication would be impossible.[1] So while protocols embody and enact the way ‘control exists after decentralisation’, he goes further and concludes that the ‘ruling elite is tired of trees too’ and that due to protocols ‘the internet is the most highly controlled mass media hitherto known’.[2] Unlike his normally lucid (and even occasionally Marx-inspired) analysis, towards the end of Protocol Galloway sounds like a conspiracy theorist of the internet. Has he ever tried setting up a pirate radio station or a public television channel to share information, rather than a website? In the moralising manner characteristic of many American anarchists, all control is viewed as inherently antithetical to any revolutionary project.

Let’s think twice about protocol. Both control and communication are expressed through shared convention; when this entails a voluntarily shared convention, as with a technical communications system that can theoretically transmit any message regardless of its content, then is this really control? Indeed, some minimal organisation, the holding of conventions in common, is necessary for communication to be possible at all, as Davidson and Wittgenstein observed.[3] And if the ‘common’ in communication is necessary for any sort of commons then protocols are necessary and indeed foundational for the emergence of collectivity, including revolutionary kinds. Yet it is far safer to see control as counter-revolution, since this would seem to justify a retreat into critique rather than practice. To his credit, Galloway resists this alternative, and instead posits as revolutionary subject those who seek hypertrophy of the net such as hackers and net artists. But if it seems schizophrenic to think that protocols and networks can be used both within and against capital, then so be it. What is healthier, a schizophrenic out surfing on the net or a paranoiac on the couch?

Instead of the invaluable essay on how networks can be used against networks that I was expecting – a sort of Clausewitz for the modern age – The Exploit tries to push beyond networks to what its authors call the ‘anti-web’. After spending most of the first half of the book going through increasingly self-affirming reflections, characterising protocol as the source of individuation both in DNA and in man-made networks, they come to a great conclusion: ‘to be effective, future political movements must discover a new exploit’. Borrowing the hacker term for a piece of software that takes advantage of a bug or glitch, they define the exploit as a resonant flaw designed to resist, threaten, and ultimately desert the dominant political diagram.[4]

While we must agree that something is needed, the ‘counter-protocol’ proposed towards the end of the book comes down to a focus on the ‘quality of interactions’ and, with the figure of the ‘unhuman’, a rather predictable fetishisation of viruses and swarms – phenomena that are hardly incompatible with networks, incidentally. The ‘Note for a Liberated Computer Language’ with which they conclude provides a useless programming language involving constructs like ‘envision’ and ‘obfuscate’; a sort of retreat into neo-surrealism. If one follows this path, one may well end up concurring that ‘the future avant-garde practices will be those of non-existence’.[5] This would lead also to the non-existence of any movement beyond capitalism, since non-existence in communication networks brings depressing isolation rather than the creation of revolutionary collectivity.

I think the problem with The Exploit is encapsulated in its title, which valorises the system-breaking achievements of so-called ‘hackers’, more often than not script kiddies and scammers pursuing financial gain rather than self-organisation of the net. Free software pioneer and ‘copyleft’ inventor Richard Stallman illuminatingly describes hacking not as a rejection of humanity, but as the creation of community and the practice of joy:

It was not at all uncommon to find people falling asleep at the lab, again because of their enthusiasm; you stay up as long as you possibly can hacking, because you just don’t want to stop.[6]

The joy of hacking comes more from the creation of something new and clever – including protocols – than from simply ‘breaking’ into a system while leaving its previous paradigm intact. Breaking into a system to explore how it works would qualify as hacking, while breaking into a system for commercial gain would not. As Stallman explains, ‘hacking means exploring the limits of what is possible in a spirit of playful cleverness’.[7] What better definition also of a revolutionary? Not surprisingly, hackers are the core of the community that creates the protocols of the net.

Galloway is correct to point out that there is control in the internet, but instead of reifying the protocol or even network form itself, an ontological mistake that would be like blaming capitalism on the factory, it would be more suitable to realise that protocols embody social relationships. Just as genuine humans control factories, genuine humans – with names and addresses – create protocols. These humans can and do embody social relations that in turn can be considered abstractions, including those determined by the abstraction that is capital. But studying protocol as if it were first and foremost an abstraction without studying the historic and dialectic movement of the social forms which give rise to the protocols neglects Marx’s insight that

[Technologies] are organs of the human brain, created by the human hand; the power of knowledge, objectified.[8]

Bearing protocols’ human origination in mind, there is no reason why they must be reified into a form of abstract control when they can also be considered the solution to a set of problems faced by individuals within particular historical circumstances. If they now operate as abstract forms of control, there is no reason why protocols could not also be abstract forms of collectivity.  Instead of hoping for an exodus from protocols by virtue of art, perhaps one could inspect the motivations, finances, and structure of the human agents that create them in order to gain a more strategic vantage point. Some of these are hackers, while others are government bureaucrats or representatives of corporations – although it would seem that hackers usually create the protocols that actually work and gain widespread success. To the extent that those protocols are accepted, this class that I dub the ‘immaterial aristocracy’ governs the net. It behoves us to inspect the concept of digital sovereignty in order to discover which precise body or bodies have control over it.

The Network of Networks

Although popular legend has it that the internet was created to survive a nuclear war, Charles Herzfeld (former director of DARPA, the Defense Advanced Research Projects Agency responsible for funding what became the internet) notes that this is a misconception.

In fact, the internet came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators who should have access to them were geographically separated from them.[9]

The internet was meant to unite diverse resources and increase communication among computing pioneers. In 1962, J.C.R. Licklider of MIT proposed the creation of a ‘Galactic Network’ of machines and, after obtaining leadership of DARPA, he proceeded to fund this project. Under his supervision the initial ARPANet came into being.

Before Licklider’s idea of the ‘Galactic Network’, networks were assumed to be static and closed systems. One either communicated with a network or one did not. However, early network researchers determined that there could be an ‘open architecture networking’ in which a meta-level ‘internetworking architecture’ would allow diverse networks to connect to each other as peers offering end-to-end service, rather than requiring that one be used as a component of the other.[10]

This concept became the ‘Network of Networks’ or the ‘internet’ – anticipating the structure of later social movements. While the internet architecture provided the motivating abstract concepts, it did not define at the outset a ‘scalable transfer protocol’ – a concrete mechanism that could actually move the bits from one network to another. Robert Kahn and Vint Cerf devised a protocol that took into account four key factors:

1. Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the internet.

2. Communications would be on a best effort basis. If a packet didn’t make it to the final destination, it would shortly be retransmitted from the source.

3. Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

4. There would be no global control at the operations level.

The solution to this problem was TCP/IP. Data is subdivided into ‘packets’ that are all treated independently by the network. Any data sent over the internet is divided into relatively equal-size packets by TCP (Transmission Control Protocol), which then sends the packets over the network using IP (Internet Protocol). Each computer has an Internet Number, a four-byte destination address such as 152.2.210.122, and IP routes the packets through various black boxes, like gateways and routers, that do not try to reconstruct the original data from the packets. At the recipient end, TCP collects the incoming packets and reconstructs the data. This protocol, which allows the network to keep functioning even when large sections of it are removed, is the most powerful technological ancestor of the network form of organisation.

While the system is decentralised in principle, in reality it is a hybrid with centralised elements. The key mapping of domain names to the IP addresses of individual machines (of a domain like www.ibiblio.org to an address like 152.46.7.122) comes from a hierarchical domain name system controlled by a centralised body, namely ICANN. Furthermore, this entire process relies on a small number of top-level name servers.

This is a system vulnerable to flaws in the protocols used to exchange domain name information, as exemplified by the Pakistani government’s recent blocking of YouTube. More radically democratic structures of digital sovereignty could probably prevent such blocking in the first place. Indeed, it is the historical origins and function of these bodies of digital sovereignty that need exploration.

The First Immaterial Aristocracy

Although the internet was started by DARPA as a military-funded research project, it soon spread beyond the rarefied confines of the university. Once news of this ‘Universal Network’ arrived, universities, corporations, and even foreign governments began to ‘plug in’ voluntarily. The internet became defined by voluntary adherence to its open protocols and procedures. The coordination of such world-spanning internet standards soon became a social task that DARPA itself was decreasingly able and willing to administer. As more and more nodes joined the internet, the military-industrial research complex seemed less willing to fund and research it, perhaps realising that it was slowly spinning out of its control. In 1984 the US military split its unclassified military network, MILNET, from the internet. No longer purely under the aegis of DARPA, the internet began a political process of self-organisation to establish a degree of autonomous digital sovereignty. Many academics and researchers then joined the Internet Research Steering Group (IRSG) to develop a long-term vision of the internet. With the academics and bureaucrats distracted, perhaps, the job of creating standards and maintaining the infrastructure fell into the hands of the hackers of the Internet Engineering Task Force (IETF). Unlike their predecessors, the hackers often did not possess postgraduate degrees in computer science, but they did have an intense commitment to the idea of a universal computer network.

The organisation of the IETF embodied the anarchic spirit of the hackers. It was an ad hoc and informal body with no board of directors, although it soon began electing the members of the Internet Architecture Board (IAB) – a committee of the non-profit Internet Society that oversees and ratifies the standards process of the net. However, the real actor in the creation of protocols was not the IAB or any other bureaucracy, but the IETF itself. The IETF credo, attributed to the first Chair of the IAB, David Clark, is: ‘We reject kings, presidents, and voting. We believe in rough consensus and running code.’ True to its credo, the IETF operates by a radical democratic process. There are no official or even unofficial membership lists, and individuals are not paid to participate. Even if they belong to an organisation, they participate as individuals, and only voluntarily. Anyone may join, and ‘joining’ is defined only in terms of activity and contribution. Decisions do not have to be ratified by consensus or even majority voting, but require only a rough measure of agreement on an idea. IETF members prefer to judge an idea by actual implementation (running code), and arguments are decided by the effectiveness of practice. The structure of the IETF is defined by areas such as ‘Multimedia’ and ‘Security’, subdivided into Working Groups on particular standards such as ‘atompub’, the widely used Atom standard for syndication of web content. It is in these Working Groups that most of the work of hashing out protocols takes place.

Groups have elected Chairs whose task is to keep the group on topic. Even within the always technical yet sometimes partisan debates, there are no formalities, and everyone from professors to teenagers is addressed by their first name. This kind of informal organisation tends to develop informal hierarchies, and these informal hierarchies are regarded as beneficial, since they are usually composed of the most dedicated, who volunteer the most of their time for the net: ‘A weekend is when you get up, put on comfortable clothes, and go into work to do your Steering Group work.’ If the majority of participants in the IETF feel that these informal hierarchies are getting in the way of practical work, then the chairs of Working Groups and other informal bureaucrats are removed by a voting process, which happened once to an entire clique of ‘informal leaders’ in 1992.

The IETF is also mainly a virtual organisation, since almost all communication is handled by email, although it does hold week-long plenary sessions three times a year which attract over a thousand participants, with anyone welcome. Even at these face-to-face gatherings, most of the truly ground-breaking discussions seem to happen in the still more informal ‘Birds of a Feather’ sessions. The most important products of these list-serv discussions and meetings are the IETF’s RFCs (‘Requests for Comments’), whose very name demonstrates their democratic practice. These RFCs define internet standards such as URIs (RFC 3986) and HTTP (RFC 1945). The IETF still exists, and anyone can ‘join’ by simply participating in a list given on its homepage. The IETF operates with little explicit financing; many members are funded by their governments or corporate sponsors, but it remains open to those without funding.

The World Wide Web

One IETF participant, Tim Berners-Lee, had the vision of a ‘universal information space’ which he dubbed the ‘World Wide Web’.[11] His original proposal brings his belief in universality to the forefront:

We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities.[12]

The IETF, perhaps due to its own anarchic nature, had produced a multitude of incompatible protocols. While each protocol could enable computers to communicate over the internet, there was no universal format tying them together. Tim Berners-Lee had a number of key concepts:

1. Calling anything that someone might want to communicate with over the internet a ‘resource’.

2. Each resource could be given a universal resource identifier (URI) that allowed it to be identified and perhaps accessed. The word ‘universal’ was used to ‘emphasize the importance of universality, and of the persistence of information.’

3. The idea of hypertext as a simple, human-readable format for data on the web, so that any document could link to any other document.

These three principles formed the foundation of the World Wide Web. In the IETF, Berners-Lee, along with many compatriots such as Larry Masinter, Dan Connolly, and Roy Fielding, spearheaded development of URIs, HTML (HyperText Markup Language) and HTTP (HyperText Transfer Protocol). As Berners-Lee says, the creation of protocols was key to the web: ‘Since by being able to reference anything with equal ease’, due to URIs, ‘a web of information would form’ based on

the few basic, common rules of ‘protocol’ that would allow one computer to talk to another, in such a way that when all computers everywhere did it, the system would thrive, not break down.[13]

In fact, the design of the web on top of the physical infrastructure of the internet is nothing but protocol.[14]

However, Berners-Lee was frustrated by the IETF, which in typically anarchic fashion rejected his idea that any standard could be universal. At the time a more hierarchical file-retrieval system known as ‘Gopher’ was the dominant way of navigating the internet. In one of the first cases of digital enclosure on the internet, the University of Minnesota decided to charge corporate (but not academic and non-profit) users for the use of Gopher, and immediately the system became a digital pariah. Berners-Lee, seeing an opening for the World Wide Web, surrendered to the IETF and renamed URIs ‘Uniform Resource Locators’ (URLs). Crucially, he got CERN (the European Organisation for Nuclear Research) to release any intellectual property rights it had to the web, and he also managed to create running code for his new standard in the form of the first web browser. Berners-Lee and others served primarily as untiring activists, convincing talented hackers to spend their time creating web servers and web browsers, as well as navigating the political and social process of creating web standards. Within a year the web had spread over the world. In what might be seen as another historical irony, years before the idea of a universal political space was analysed by Hardt and Negri as ‘Empire’, hackers both articulated and created a universal technological space.

A Crisis in Digital Sovereignty

In the blink of an eye, adoption of the web skyrocketed and the immaterial aristocracy of the IETF lost control of it. Soon all the major corporations had a website. They sent their representatives to the IETF in an attempt to discover who the power brokers of the internet were, but instead found themselves immersed in obscure technical conversations and mystified by the lack of any formal body of which to seize control. Instead of taking over the IETF, corporations began ignoring it. They did this by violating standards in order to gain market adoption through ‘new’ features. The battle for market dominance between the two largest opponents, Microsoft and the upstart Netscape, was based on an arms race of features supposedly created for the benefit of web users. These ‘new features’ in reality soon led to a ‘lock-in’ of the web where certain sites could only be viewed by one particular commercial browser. This began to fracture the rapidly growing web into incompatible corporate fiefdoms, building upon the work but destroying the sovereignty of the IETF. Furthermore, the entire idea of the web as an open space of communication began to be challenged, albeit unsuccessfully, by Microsoft’s concept of ‘push content’ and channels, which in effect attempted to replicate television’s earlier hierarchical and one-way model on the internet.

Behind the scenes, the creators of the web were horrified by the fractures the corporate browser wars had caused in their universal information space. In particular, Tim Berners-Lee felt that his original dream had been betrayed by corporations trying to create their own mutually incompatible fiefdoms for profit. He correctly realised it was in the long-term interests of both corporations and web users to have a new form of digital sovereignty. With the unique but informal status Berners-Lee enjoyed as the ‘inventor of the Web’ (although he freely and humbly admits that this was a collective endeavour), he decided to reconstitute digital sovereignty in the form of the World Wide Web Consortium (W3C). This non-profit organisation was dedicated to

leading the Web to its full potential by developing protocols and guidelines that ensure long-term growth for the Web.[15]

Corporations had ignored the IETF’s slow and impenetrable processes, so Berners-Lee moved from a model of absolute to representative democracy. The major corporations would understand and embrace this, and the web would harness their power while preserving its universality. With the rapid growth of the web, Berners-Lee believed that an absolute democracy based on informal principles could not react quickly enough to the desires of users and prevent corporations from fracturing universality for short-term gain. Unlike the IETF, which only standardised protocols that were already widely used, the W3C would take a proactive stance, deploying standardised universal formats before various corporations or other forces could deploy their own. Berners-Lee was made director for life of the W3C, and ultimately his decision remains final, constituting a sort of strange immaterial monarchy. Since Berners-Lee historically has not used his formal powers as director and approves what the membership says, his importance is minimised. The W3C marks the shift from the radical and open anarchy of the IETF to a more closed and representative system.

Digital Sovereignty Returns

W3C membership was open to any organisation: commercial, educational or governmental, for-profit or not-for-profit. Unlike the IETF, membership came at a price: $50,000 for corporations with revenues in excess of $50 million, and $5,000 for smaller corporations and non-profits. It was organised as a strict representative democracy, with each member organisation sending one member to the Advisory Committee. However, in practice it allowed hacker participation by keeping its lists public and allowing non-affiliated hackers to join its Working Groups for free under the ‘Invited Expert’ policy. By opening up a ‘vendor neutral’ space, companies previously ‘interested primarily in advancing the technology for their own benefit’ could be brought to the table. This move away from the total fiscal freedom of the IETF reflected the increasing amount of money at stake in the creation of protocols, and the money needed to run standards bodies. Rather shockingly, when the formation of the W3C was announced, both Microsoft and Netscape agreed to join. As a point of pride, Netscape even paid the full $50,000 fee, though they weren’t required to.

Having the two parties most responsible for fracturing the web at the table provided the crucial breakthrough for the W3C. It allowed them to begin standardisation of HTML in a vendor-neutral format that would allow web pages to be viewed in any standards-compliant browser. Berners-Lee’s cunning strategy to envelop the corporations within the digital sovereignty of the W3C worked:

The competitive nature of the group would drive the developments, and always bring everyone to the table for the next issue. Yet members also knew that collaboration was the most efficient way for everyone to grab a share of a rapidly growing pie.[16]

The original universal vision of the web was inscribed into the W3C’s mission statement: to expand the reach of the web to ‘everyone, everything, everywhere’. Other widely used standards, such as XML, have come out of the W3C. However, with the web growing rapidly in the era of ‘web 2.0’, the W3C itself is seen as slow and unwieldy, with a political process too overwhelmed by corporate representatives. With Google’s rise to its new hegemonic position as the premier search engine, the web is increasingly centred around this highly secretive organisation, reminiscent of Microsoft’s monopolisation of the personal computer. Key members of the IAB and other protocol boards, like Vint Cerf, are also Google employees.

One example of this new political terrain is social networking, now the primary way many new users interact with the web, and currently torn between Facebook and MySpace, heavily associated with Microsoft and Google respectively. Users and developers of these services are increasingly tired of their data being hoarded by these companies in closed data silos. DataPortability.org represents an effort to open up this data – a more anarchic body that may signal a return to the heavily decentralised governance typical of the IETF. In its latest redesign of HTML, the W3C has tried to open itself to a more IETF-like, radically democratic process, allowing hundreds of unaffiliated hackers to join for free. The next few years will determine whether the web centralises under either Google or Microsoft, or whether the W3C can prevent the next digital civil war. The immaterial aristocracy is definitely changing, and its next form is still unclear. Perhaps, in step with the open and free software movements, as the level of self-organisation of web developers and even users grows and they become increasingly capable of creating and maintaining these standards themselves, the immaterial aristocracy will finally dissolve.

Beyond Digital Sovereignty

This inspection of the social forms, historical organisation, and finances of the protocol-building bodies of the net is not a mere historical excursion. It has consequences for the concrete creation of revolutionary collectivity in the here and now. Many would decry the very idea that such collectivity can be developed through the net as utopian. In the face of imperialist geopolitics masquerading behind the war on terror and rampant accompanying paranoia, such a utopian perspective is revolutionary. Clearly, a merely utopian perspective is not enough, it needs to be combined with concrete action to move humanity beyond capital. One critique of Michael Hardt and Antonio Negri’s concept of ‘the multitude’ as the new networked revolutionary agent is that its proponents have no concrete plan for bringing it from the virtual to the actual. Fashionable post-autonomism in general leaves us with little else but utopian demands for global citizenship and social democratic reforms such as guaranteed basic income. An enquiry into the immaterial aristocracy can help us recognise the social relations that determine the technological infrastructure which enables the multitude’s social form, while not disappearing into ahistoricism.

The technical infrastructure of the web itself is a model for the multitude:

The internet is the prime example of this democratic network structure. An indeterminate and potentially unlimited number of interconnected nodes communicate with no central point of control, all nodes regardless of territorial location connect to all others through a myriad of potential paths and relays.[17]

Our main thesis is that the creation of these protocols which comprise the internet was not the work of sinister forces of control, but the collective work of committed individuals, the immaterial aristocracy. What is surprising is how little empirical work has been done on this issue by political revolutionaries – with a few notable exceptions such as the anarchist, Ian Heavens. Yet the whole development of the internet could easily have turned out otherwise. We could all be on Microsoft Network, and we are dangerously close to having Google take over the web. One can hear the echo of Mario Tronti’s comments on the unsung struggles of the working class:

[…] perhaps we would discover that ‘organisational miracles’ are always happening, and have always been happening.[18]

The problem is not that ‘the hardest point is the transition to organisation’ for the multitude.[19] The problem of the hour is the struggle to keep the non-hierarchical and non-centred structure of the web open, universal, and free so as to further enable the spread of new revolutionary forms of life – although the cost is the continual spread of capital not far behind. The dangers of a digital civil war are all too real, with signs ranging from the great firewall of China, through the US military plans revealed in their Information Operations Roadmap to ‘fight the net as it would a weapons system’, to the development of a multi-tier net that privileges the traffic of certain corporations willing to pay more, in effect crippling many independent websites and file-sharing programs. Having radicals participating in open bodies like the W3C and IETF may be necessary for the future survival of the web.

There is no Lenin in Silicon Valley, plotting the political programme of the network revolution. The beauty of the distributed network is that it makes the very idea of Lenin obsolete. Instead of retreating into neo-surrealism as The Exploit does, revolutionaries should be situationists, creating situations in which people realise their own strength through self-organisation. These situations are created not just by street protests and struggles over precarious labour, but through technical infrastructure. One example par excellence would be how the internet enabled the communication networks that created the ‘anti-globalisation’ movement. Of course, nets are not synonymous with revolution or even anti-capitalism, as the use of the net by corporations and governmental bodies to coordinate globalisation far outweighs its use by the ‘anti-globalisation’ movement. Still, given the paucity of any alternative put forward by Galloway and Thacker, the thesis that the very nature of protocol is inherently counter revolutionary seems to be a theoretical dead end. It would be more productive to acknowledge that political battles around net protocols are increasingly important avenues of struggle, and the best weapon in this battle is history. A historical understanding of the protocols of the net can indeed lead to better and more efficient strategic interventions.

‘Hackers’ and net artists’ struggles against protocol are not the only means of liberation. The vast majority of these interventions are unknown to the immaterial aristocracy and to those outside the circles of ‘radical’ digerati. Instead, we should see the creation of new protocols as a terrain of struggle in itself. The best case in point might be the creation of the Extensible Messaging and Presence Protocol (XMPP), which took instant messaging out of the hands of private corporations like AOL and allowed it to be implemented in a decentralised and open manner. This in turn allowed secure technologies like ‘Off-the-Record’ instant messaging to be developed, a technology that can mean the difference between life and death for those fighting repressive regimes. This protocol may become increasingly important even in Britain, since it is now illegal to refuse to give police the private keys to encrypted email. These trends are important for the future of any revolutionary project, and the concrete involvement of radicals in this particular terrain of struggle could be a determining factor in the future of the net. Protocol is not only how control exists after decentralisation. Protocol is also how the common is created in decentralisation, another expression of humanity’s common desire for collectivity.

Harry Halpin <hhalpinATibiblio.org> is a researcher at the School of Informatics at the University of Edinburgh specialising in web technologies, and is a Chair of the W3C and a participant in the IETF. He enjoys reading critical theory and new media studies before collapsing to sleep. And he used to live in a tree.

http://www.ibiblio.org/hhalpin/

Footnotes

[1] Alexander Galloway, Protocol: How Control Exists After Decentralization, Cambridge, MA: MIT Press, 2004, p. 74.

[2] Ibid., pp. 242-243.

[3] In particular, see Ludwig Wittgenstein, Philosophical Investigations, trans. G.E.M. Anscombe, Oxford: Blackwell, 1963, and Donald Davidson, ‘On the Very Idea of a Conceptual Scheme’, Proceedings and Addresses of the American Philosophical Association, Vol. 47, 1973, pp. 5-20.

[4] Alexander Galloway and Eugene Thacker, The Exploit: A Theory of Networks, Minneapolis, MN: University of Minnesota Press, 2007, pp. 20-21.

[5] Ibid., p. 136.

[6] Sam Williams, Free as in Freedom: Richard Stallman’s Crusade for Free Software, O’Reilly Media, 2002, http://www.oreilly.com/openbook/freedom/

[7] Ibid.

[8] Karl Marx, Grundrisse, http://www.marxists.org/archive/marx/works/1857/grundrisse/

[9] B. Leiner, V. Cerf, et al., A Brief History of the Internet, 2003, http://www.isoc.org/internet/history/brief.shtml

[10] Ibid.

[11] Tim Berners-Lee, Information Management: A Proposal, CERN, 1989, http://www.nic.funet.fi/index/FUNET/history/internet/w3c/proposal.html

[12] Ibid.

[13] Tim Berners-Lee, Weaving the Web, Harper Press, 1999, p. 4.

[14] Ibid., p. 36.

[15] Berners-Lee, http://www.w3.org/Consortium/

[16] Op. cit., 1999, p. 138.

[17] Michael Hardt and Antonio Negri, Empire, Cambridge, MA: Harvard University Press, 2000, p. 299.

[18] Mario Tronti, ‘Lenin in England’, Classe Operaia (Working Class) No. 1, January 1964, http://www.geocities.com/immateriallabour/trontilenin-in-england.html

[19] Ibid.

InfoEnclosure 2.0

By Dmytri Kleiner & Brian Wyrick, 29 January 2007

Source URL: http://www.metamute.org/editorial/articles/infoenclosure-2.0

Featured in Mute Vol 2, No. 4 − Web 2.0 Man's Best Friendster - Buy online £5

http://www.metamute.org/shop/magazine/mute-vol-2-no.-4-%E2%88%92-web-2.0-mans-best-friendster

The hype surrounding Web 2.0’s ability to democratise content production obscures its centralisation of ownership and the means of sharing. Dmytri Kleiner & Brian Wyrick expose Web 2.0 as a venture capitalist’s paradise where investors pocket the value produced by unpaid users, ride on the technical innovations of the free software movement and kill off the decentralising potential of peer-to-peer production

Wikipedia says that ‘Web 2.0, a phrase coined by O’Reilly Media in 2004, refers to a supposed second generation of internet-based services – such as social networking sites, wikis, communication tools, and folksonomies – that emphasise online collaboration and sharing among users.’

The use of the word ‘supposed’ is noteworthy. As probably the largest collaboratively authored work in history, and one of the current darlings of the internet community, Wikipedia ought to know. Unlike most of the members of the Web 2.0 generation, Wikipedia is controlled by a non-profit foundation, earns income only by donation and releases its content under the copyleft GNU Free Documentation License. It is telling that Wikipedia goes on to say ‘[Web 2.0] has become a popular (though ill-defined and often criticised) buzzword among certain technical and marketing communities.’

The free software community has tended to be suspicious, if not outright dismissive, of the Web 2.0 moniker. Tim Berners-Lee dismissed the term saying ‘Web 2.0 is of course a piece of jargon, nobody even knows what it means.’ He goes on to note that ‘it means using the standards which have been produced by all these people working on Web 1.0.’

In reality there is neither a Web 1.0 nor a Web 2.0, there is an ongoing development of online applications that cannot be cleanly divided.

In trying to define what Web 2.0 is, it is safe to say that most of the important developments have been aimed at enabling the community to create, modify, and share content in a way that was previously only available to centralised organisations which bought expensive software packages, paid staff to handle the technical aspects of the site, and paid staff to create content which generally was published only on that organisation’s site.

A Web 2.0 company fundamentally changes the mode of production of internet content. Web applications and services have become cheaper and easier to implement, and by allowing the end users access to these applications, a company can effectively outsource the creation and the organisation of their content to the end users themselves. Instead of the traditional model of a content provider publishing their own content and the end user consuming it, the new model allows the company’s site to act as the centralised portal between the users who are both creators and consumers.

For users, access to these applications empowers them to create and publish content that would previously have required purchasing desktop software and possessing a greater technological skill set. For example, two of the primary means of text-based content production in Web 2.0 are blogs and wikis, which allow users to create and publish content directly from the browser without any real knowledge of markup languages, file transfer or syndication protocols, and without the need to purchase any software.

The use of the web application to replace desktop software is even more significant for the user when it comes to content that is not merely textual. Not only can web pages be created and edited in the browser without purchasing HTML editing software, photographs can be uploaded and manipulated online through the browser without the need for expensive desktop image manipulation applications. A video shot on a consumer camcorder can be submitted to a video hosting site, uploaded, encoded, embedded into an HTML page, published, tagged, and syndicated across the web, all through the user’s browser.

In Paul Graham’s article on Web 2.0 he breaks down the different roles of the community/user into more specific roles, those being the Professional, the Amateur, and the User (more specifically, the end user). The roles of the Professional and the User were, according to Graham, well understood in Web 1.0, but the Amateur didn’t have a very well defined place. As Graham describes it in ‘What Business Can Learn From Open Source’, the Amateur just loves to work, with no concern for compensation or ownership of that work; in development, the Amateur contributes to open source software whereas the Professional gets paid for their proprietary work.

Graham’s characterisation of the ‘Amateur’ reminds one of If I Ran The Circus by Dr. Seuss, where young Morris McGurk says of the staff of his imaginary Circus McGurkus:

My workers love work. They say, ‘Work us! Please work us!
We’ll work and we’ll work up so many surprises
You’d never see half if you had forty eyses!’

And while ‘Web 2.0’ may mean nothing to Tim Berners-Lee, who sees recent innovations as no more than the continued development of the web, to venture capitalists, who like Morris McGurk daydream of tireless workers producing endless content and not demanding a pay cheque for it, it sounds stupendous. And indeed, from YouTube to Flickr to Wikipedia, you’d truly never see half if you had forty eyses.

Tim Berners-Lee is correct. There is nothing from a technical or user point of view in Web 2.0 which does not have its roots in, and is not a natural development from, Web 1.0. The technology associated with the Web 2.0 banner was possible and in some cases readily available before, but the hype surrounding this usage has certainly affected the growth of Web 2.0 internet sites.

The internet (which is more than the web, actually) has always been about sharing between users. In fact, Usenet, a distributed messaging system, has been operating since 1979! Since long before even Web 1.0, Usenet has been hosting discussions and ‘amateur’ journalism, and enabling photo and file sharing. Like the internet, it is a distributed system not owned or controlled by anyone. It is this quality, a lack of central ownership and control, that differentiates services such as Usenet from Web 2.0.

If Web 2.0 means anything at all, its meaning lies in the rationale of venture capital. Web 2.0 represents the return of investment in internet startups. After the dotcom bust (the real end of Web 1.0) those wooing investment dollars needed a new rationale for investing in online ventures. ‘Build it and they will come’, the dominant attitude of the ’90s dotcom boom, along with the delusional ‘new economy’, was no longer attractive after so many online ventures failed. Building infrastructure and financing real capitalisation was no longer what investors were looking for. Capturing value created by others, however, proved to be a more attractive proposition.

Web 2.0 is Internet Investment Boom 2.0. Web 2.0 is a business model; it means private capture of community-created value. No one denies that the technology of sites like YouTube, for instance, is trivial. This is more than evidenced by the large number of identical services such as DailyMotion. The real value of YouTube is not created by the developers of the site; rather, it is created by the people who upload videos to the site. Yet, when YouTube was bought for over a billion dollars worth of Google stock, how much of this stock was acquired by those that made all these videos? Zero. Zilch. Nada. Great deal if you are an owner of a Web 2.0 company.

The value produced by users of Web 2.0 services such as YouTube is captured by capitalist investors. In some cases, the actual content they contribute winds up becoming the property of the site owners. Private appropriation of community-created value is a betrayal of the promise of sharing technology and free cooperation.

Unlike Web 1.0, where investors often financed expensive capital acquisition, software development and content creation, a Web 2.0 investor mainly needs to finance hype-generation, marketing and buzz. The infrastructure is widely available on the cheap, the content is free and the cost of the software, at least the part of it that is not also free, is negligible. Basically, by providing some bandwidth and disk space, you can run a successful internet site if you can market yourself effectively.

The principal success of a Web 2.0 company comes from its relationship to the community, more specifically, the ability of the company to ‘harness collective intelligence’, as O’Reilly puts it. Web 1.0 companies were too monolithic and unilateral in their approach to content. Success stories of the transition from Web 1.0 to Web 2.0 were based on the ability of a company to remain monolithic in its brand of content, or better yet, its outright ownership of that content, while opening up the method of that content’s creation to the community. Yahoo! created a portal to community content while it remained the centralised location to find that content. eBay allows the community to sell its goods while owning the marketplace for those goods. Amazon, selling the same products as many other sites, succeeded by allowing the community to participate in the ‘flow’ around their products.

Because the capitalists who invest in Web 2.0 startups do not often fund early capitalisation, their behaviour is markedly more parasitic as well. They often arrive late in the game, when value creation already has good momentum, swoop in to take ownership, and use their financial power to promote the service, often within the context of a hegemonic network of major, well-financed partners. This means that companies that are not acquired by venture capital end up cash-starved and squeezed out of the club.

In all these cases, the value of the internet site is created not by the paid staff of the company that runs it, but by the users who use it. With all of the emphasis on community-created content and sharing, it’s easy to overlook the other side of the Web 2.0 experience: ownership of all this content and the ability to monetise its value. For the user, this rarely comes up; it’s only part of the fine print in their MySpace Terms of Service agreement, or the Flickr.com in the URL of their photos. It doesn’t usually seem like an issue to the community: a small price to pay for the use of these wonderful applications and for the impressive effect on search engine results when one queries one’s own name. Since most users do not have access to alternative means of producing and publishing their own content, they are attracted to sites like MySpace and Flickr.

It should be added that many open source projects can be cited as key innovations in the development of Web 2.0: free software like Linux, Apache, PHP, MySQL, Python, etc. are the backbone of Web 2.0, and of the web itself. But there is a fundamental flaw with all of these projects in terms of what O’Reilly refers to as the Core Competencies of Web 2.0 Companies, namely control over unique, hard-to-recreate data sources that get richer as more people use them – the harnessing of the collective intelligence they attract. Allowing the community to contribute openly and to utilise that contribution within the context of a proprietary system where the proprietor owns the content is a characteristic of a successful Web 2.0 company. Allowing the community to own what it creates, though, is not. Thus, to be successful and create profits for investors, a Web 2.0 company needs to create mechanisms for sharing and collaboration that are centrally controlled. The lack of central control possessed by Usenet and other peer-controlled technologies is, from this standpoint, the fundamental flaw. They only benefit their users; they do not benefit absentee investors, as they are not ‘owned’.

Thus, because Web 2.0 is funded by Capitalism 2006, Usenet is mostly forgotten. While everybody uses Digg and Flickr, and YouTube is worth a billion dollars, PeerCast, an innovative peer-to-peer live video streaming network that has been in existence for several years longer than YouTube, is virtually unknown.

From a technological standpoint, distributed and peer-to-peer (P2P) technologies are far more efficient than Web 2.0 systems. By making use of the computers and network connections of users, P2P avoids the bottlenecks created by centralised systems and allows content to be published with less infrastructure, often no more than a computer and a consumer internet connection. To take a simple arithmetic example: serving a 100MB video to 1,000 viewers from a central server requires that server to upload 100GB itself, whereas in a P2P swarm each viewing peer contributes some of that upload capacity. P2P systems do not require the massive data centres of sites such as YouTube. The lack of central infrastructure also means a lack of central control, so that censorship – often a problem with privately owned ‘communities’ that frequently bend to private and public pressure groups and enforce limitations on the kinds of content they allow – is far harder to impose. The absence of large central cross-referencing databases of user information is also a strong advantage in terms of privacy.

From this perspective, it can be said that Web 2.0 is capitalism’s preemptive attack against P2P systems. Despite its many disadvantages in comparison to P2P systems, Web 2.0 is more attractive to investors, and thus has more money to fund and promote centralised solutions. The end result is that capitalist investment flowed into centralised solutions, making them easy and cheap or free for non-technical information producers to adopt. This ease of access, compared to the more technically challenging and expensive undertaking of owning your own means of information production, created a ‘landless’ information proletariat ready to provide alienated content-creating labour for the new info-landlords of Web 2.0.

It is often said that the internet took the corporate world by surprise, coming as it did out of publicly funded university and military research. It was promoted by way of a cottage industry of small independent internet service providers who were able to squeeze a buck out of providing access to the state-built and financed network.

Meanwhile, the corporate world was pushing a whole different idea of the Information Superhighway, producing monolithic, centralised ‘online services’ like CompuServe, Prodigy and AOL. What separated these from the internet is that they were centralised systems that all users connected to directly, while the internet is a peer-to-peer network: every device with a public internet address can communicate directly with any other device. This is what makes peer-to-peer technology possible; it is also what makes independent internet service providers possible.
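That addressing symmetry can be illustrated at the socket level. Below is a minimal sketch in Python (the script name, ports and addresses are hypothetical, not from the article): one UDP socket both listens on a local port and sends to another peer, so the same process acts as server and client at once.

import socket
import sys

# A symmetric peer: one UDP socket both listens on a local port and
# sends to another peer, so every host with a reachable address is
# simultaneously client and server.
# Hypothetical usage: python peer.py 9000 203.0.113.5 9000 hello
local_port = int(sys.argv[1])
peer_host, peer_port, message = sys.argv[2], int(sys.argv[3]), sys.argv[4]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", local_port))                            # server role
sock.sendto(message.encode(), (peer_host, peer_port))  # client role

data, addr = sock.recvfrom(4096)  # any peer can reach us directly
print(f"received {data!r} from {addr}")

It is precisely this ability of any host to be reached directly that, as the article goes on to argue, asymmetric consumer connections and no-server clauses erode.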

The internet seemed anathema to the capitalist imagination. Web 1.0, the original dotcom boom, was characterised by a rush to own the infrastructure, to consolidate the independent internet service providers. While money was thrown around quite randomly as investors struggled to understand what this medium would actually be used for, the overall mission was largely successful. If you had an internet account in 1996 it was likely provided by some small local company. Ten years later, while some of the smaller companies have survived, most people get their internet access from gigantic telecommunications corporations. The mission of Internet Investment Boom 1.0 was to destroy the independent service provider and put large, well-financed corporations back in the driving seat.

The mission of Web 2.0 is to destroy the P2P aspect of the internet: to make you, your computer, and your internet connection dependent on connecting to a centralised service that controls your ability to communicate. Web 2.0 is the ruin of free, peer-to-peer systems and the return of monolithic ‘online services’. A telling detail here is that most home or office internet connections in the ’90s – modem and ISDN connections – were symmetric: equal in their ability to send and receive data. By design, your connection enabled you to be equally a producer and a consumer of information. Modern DSL and cable-modem connections, on the other hand, are asymmetric, allowing you to download information quickly but upload it slowly. Not to mention the fact that many user agreements for internet service forbid you to run servers on your consumer circuit, and may cut off your service if you do.

Capitalism, rooted in the idea of earning income by way of idle share ownership, requires centralised control, without which peer producers have no reason to share their income with outside shareholders. Capitalism, therefore, is incompatible with free P2P networks, and thus, so long as the financing of internet development comes from private shareholders looking to capture value by owning internet resources, the network will only become more restricted and centralised.

It should be noted that even in the case of commons-based peer production, so long as the commons and membership in the peer group are limited, and inputs such as food for the producers and the computers that they use are acquired from outside the commons-based peer group, the peer producers themselves may be complicit in the exploitative capture of this labour value. Thus, in order really to address the unjust capture of alienated labour value, access to the commons and membership in the peer group must be extended as far as possible toward the inclusion of a total system of goods and services. Only when all productive goods are available from commons-based producers can all producers retain the value of the product of their labour.

And while the information commons may have the possibility of playing a role in moving society toward more inclusive modes of production, any real hope for a genuine, community-enriching, next generation of internet-based services is rooted not in creating privately owned, centralised resources, but in creating cooperative, P2P and commons-based systems, owned by everybody and nobody. Although small and obscure by today’s standards, with its focus on peer-to-peer applications such as Usenet and email, the early internet was very much a common, shared resource. Along with the commercialisation of the internet and the emergence of capitalist financing comes the enclosure of this information commons, translating public wealth into private profit. Thus Web 2.0 is not to be thought of as a second generation of either the technical or social development of the internet, but rather as the second wave of capitalist enclosure of the Information Commons.

Virtually all of the most used internet resources could be replaced by P2P alternatives. Google could be replaced by a P2P search system in which every browser and every web server is an active node in the search process; Flickr and YouTube could be replaced by PeerCast- and eDonkey-type applications, which allow users to use their own computers and internet connections to collaboratively share their pictures and videos. However, developing internet resources requires the application of wealth, and so long as the source of this wealth is finance capital, the great peer-to-peer potential of the internet will remain unrealised.
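As an illustration of how such a P2P search system could divide the work, here is a minimal sketch in Python (the peer names are hypothetical, and this is not the actual protocol of PeerCast, eDonkey or any other system named above): keywords are placed on a consistent-hash ring, so any node can compute for itself which peer indexes a given term, with no central directory left for anyone to own.

import hashlib
from bisect import bisect

def h(value: str) -> int:
    # Stable position on the hash ring for a node name or keyword.
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class KeywordRing:
    """Consistent hashing: each peer indexes the keywords whose
    hashes fall into its arc of the ring."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.keys = [k for k, _ in self.ring]

    def node_for(self, keyword: str) -> str:
        i = bisect(self.keys, h(keyword)) % len(self.ring)
        return self.ring[i][1]

ring = KeywordRing(["peer1.example.net", "peer2.example.net",
                    "peer3.example.net"])
print(ring.node_for("copyleft"))  # the peer that would index this term

When peers join or leave, only the keywords in the affected arc of the ring move, which is what keeps such an index maintainable without a central operator.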

Dmytri Kleiner <dk AT haagenti.com> is an anarchist hacker and a co-founder of Telekommunisten, a worker-owned technology company specialising in telephone systems. Dmytri is a USSR-born Canadian, currently living in Berlin with his wife Franziska and his daughter Henriette.

Brian Wyrick <brian AT pseudoscope.com> is an artist, film maker and web developer working in Berlin and Chicago. He also co-founded Group 312 Films, a Chicago-based film group, and posts updates regarding his projects and adventures at http://www.pseudoscope.com

Special Insert: Net.Politics (The revolution shall not be criticised?)

By Mute Editor, 1998

Source URL: http://www.metamute.org/editorial/articles/special-insert-net.politics-revolution-shall-not-be-criticised

Featured in Mute Vol 1, No. 11 – Vote Now! Net. Sex, Net. Money, Net. Work - Buy online £6

http://www.metamute.org/shop/magazine/mute-vol-1-no.-11-%E2%80%93-vote-now-net.-sex-net.-money-net.-work

The revolution shall not be criticised? In response to ISEA98, Micz Flor, organiser of the Revolting temporary media laboratory, asks "why now, why revolution?" Is the current popularity of the term and its associated icons anything more than Middle Youth talking to itself in the latest of a long line of fashionable lingos?

Net.Politics Q&A "What does the Net mean for politics?" Anarchists, nazis, extropians, pornographers, sex-crazed teenagers, book-worm teenagers, budgerigar fanatics, isolated octogenarians, hairdressers - you name it, the Net is home to them all. Or is it? Who gains ascendance within cyberspace? Who has the power in this, the latest technological utopia? Is the Net just a tool, or is that popular description just another disingenuous trick - the powerful letting the powerless play with hand-me-down toys while they get on with more serious business? We asked a not-so-random selection of net users what they think. With an introduction by McKenzie Wark.

The other Tony B. As we watch the social movements of the 21st century come into being, we wonder about those of the 20th. Hari Kunzru talks to Tony Benn about the role of recording and broadcasting technologies in the greater scheme of things.

The Nineteenth Century and the future of technocultures in India The morning after lasts a long time. India's nuclear detonation has caused a sea change in its political life. It has also highlighted the national schisms and global alliances among the country's different techno-scenes. After the demise of public politics, Ravi Sundaram asks what is taking its place.

Pancapitalism and Panaceas Recently, Manuel Castells came to London and talked about his much-praised book The Rise of the Network Society. Martin Harris thinks that, if we look closely enough, this star may have more to tell us than his disappointing talk suggested.

Eyes, ears, mouths and media: Cyberns cross the borders of internal exile What do you do in war-torn ex-Yugoslavia when you realise that small talk and small media can change lives? Branka Davic, founder of mailing list Cyberns, talks about her experience of networks, political exile and internal emigration.

Let's be realistic! A blockbuster electronic arts event-slash-symposium called Revolution is unrealistic (see ISEA98). Still, it is anything but unexpected. Over the last few years several trends have developed which - if followed through - make the appropriation of such a dramatic word for radical change more understandable.

Firstly, the momentum associated with the social uprising of the late 60s has been transformed into social romanticism and introduced deep into popular culture. The French philosophical and political heritage of '68 has been essential to the kool theories of the 1980s and continues to be fashionable - alongside D&G - in the 1990s (see 'The Holy Fools'). After the killer cynicism of the last decade, revolution is hip again. Thirty years on in European history, throwing a brick into the social hierarchy has been aestheticised. Throwing the Molotov theme party today does little more than deliver hobby politics into the social life of Middle Youth.

Political action outside the parliamentary system turned sour in the 70s with an increase in terrorist actions. Radical activism, fundamentalist politics and direct democracy not only split the left outside of the parliamentary system, but also created cracks running deep through elected parties, as was the case with the German Green party in the late 80s and early 90s. Today's romantic attitudes towards the student and workers' riots mean nothing when detached from their political motivations, especially when they are also divorced from their subsequent history. Investigating the assimilation of anti-establishment iconography into the new market strategies might be helpful for understanding some of the recent cultural shifts in the New Britain - but it certainly stalls enthusiasm for revolution98...

Secondly, the 'Digital Revolution' has been announced. The fashionable transfer of notions of radical change from the sphere of the social sciences to those of technological advancement makes one question the reliability of the concept of revolution as such. As for revolutionary change within societies: attempts to define a universal check-list for 'The Revolution' have failed. Common sense now tells us that no attempt to describe change in unique and idiosyncratic systems is capable of creating a yardstick for qualifying transformation as revolution.

Where, then, does that leave the 'Digital Revolution'? With no grounds for objective definitions, radical change might best be defined by its subjects. Following the parameters of intersubjectivity, revolution might adequately be described as a dramatic change which forces the individuals within a system to renegotiate their roles. But, from that point of view, it obviously becomes ridiculous to pin down 'a revolution' to an empty technological framework. In the case of The Digital Revolution, then, it is clear that there has not been a revolution for the simple reason that nobody attended.

Finally, the battlegrounds of subversion have allegedly re-located to the digital (and analogue) realms of networked technologies. During the 80s, 'hacking' came to be regarded as a possible cause of atomic war - caused by some fourteen-year-old playing with a public telephone and a hair clip. Our public space has been extended into networked media, and some nurture the idea that the streets have become altogether obsolete as a battleground for political struggle. Today, some tactical media operations are prime targets of CIA and FBI monitoring activities, seemingly proving the economic threat of such attacks. But, put into perspective, it becomes questionable whether their terrorist action retains any real revolutionary potential.

Some members of the old-time hacker/anarcho scene are currently pulling out of the Internet - dismissing its currency as a tool for radical change. It has been argued that increasing commercialisation has blunted the tool. Relevant points of intervention have been washed away by millions upon millions of AOL supporters. Also, the increasing finesse of networked surveillance in the business sector and the increase of customer and lifestyle databases more than outweigh the dangers of terrorism. So how does the establishment feel about the threat posed by the internet guerrillas? In the form of the Committee on Culture, Media and Sport, it writes that "over time, public sector regulation of content will become increasingly difficult; technology will erode the state's capacity to intervene" (Fourth Report on content regulation in the Internet). Even though this statement does not directly concern itself with subversion from within the networks, it is quite telling that the government's worries are directed towards the future, whereas the small online community of today appears negligible. Hardcore net activists have moved their battlegrounds since the mythical mid 1990s, yet their natural opponent - the state - feels that the real danger is about to come, possibly in 2005 or 2010. It seems more like the eye of the storm than a revolution.

Where does that leave '98? This is certainly neither the time nor the place for biased propaganda and innovative market strategies. Drop the euphoria and let's be realistic...

Revolting hangs up on the Revolution Master Narrative and dials again. Revolting is a temporary media laboratory, built upon new modes of collaborative and process-oriented work in culture, politics, art and media activism. It will extend the social space of the workshop into the digital realm of the Internet and - vice versa - concentrate the free-floating nature of networked technologies by tying them into the social environment of Revolting.

Revolting wants to focus on the realities of media practice. It will test-run the individual and challenging approaches of (politically) motivated concepts for contemporary technologies. Turning 'practice to policy' not only presents promising alternatives to the dominant codes of conduct, but also seems necessary for today's media practice.

Micz Flor, Manchester 07_98 Xmicz AT yourserver.co.ukX [www.yourserver.co.uk/revolting]

(with thanks to Richard Barbrook, Josephine Berry, Martin Conrads and Pauline van Mourik Broekman)

Net.Politics and the Virtual Republic by McKenzie Wark

There are at least three kinds of net.politics - although perhaps there are others not yet invented.

The first kind are struggles that take place outside the Net about how the Net is regulated and run. The struggle over the Communications Decency Act in the United States, for instance. The Net becomes a thing about which there is a public struggle, but a struggle in which it is the traditional spaces of public life that are involved.

A second kind of net.politics are those struggles taking place outside the Net for which the Net becomes a public place of information sharing, decision making and for the relay of calls to action. In the struggle to oust former Indonesian President Suharto, the student democratic movements made good use of net.politics to this end. Denied a place in the public world of the traditional media, the students created their own counter public sphere on the Net, one in which the things that mattered to them as public matters could be named and debated. This had a strong impact on the course of the anti-Suharto campaign, and perhaps pushed other public spaces, such as the ruling Golkar party and the press, to adopt public issues the students had identified.

But there is also a third kind of net.politics, one that is entirely specific to the Net itself. At first sight, it may seem rather insignificant. Net.politics on this scale usually consists of tiny little debates, decisions and squabbles, about things of significance to only a few people who happen to be occupying a particular vector within the Net - the nettime list, for instance. In the long run this microscopic kind of net.politics is potentially the most significant force for change, both within the Net and in public life at large.

What is happening on the Net is that a whole generation is discovering that politics exists. Politics is the life of the polis, the interaction of people, asserting their claims or hearing the claims of others upon them. A public is any assembling of people who engage in the life of the polis, interacting with others to achieve the least bad result for themselves and for others. Net.politics means the creation of new publics, the reinvention of the polis: a bottom up, grass roots creation of a new republic.

By 'bottom up' I don't just mean the activity of the 'free' market. There is an idea some would impose on the Net that this is the only kind of bottom up activity. But the production of different, competing, cooperating kinds of self-motivating activity is a part of net.politics itself. Net.politics is, among other things, a public struggle over what kinds of 'bottom up' process can exist.

The republic is the 'public thing', but it also means the 'public reality'. It is through their interactions in public, as a public, that people nominate what matters to them, and also what is real to them. Net.politics means the creation not only of new publics, but also of new public realities. It's a self education in how to be a citizen who is qualified to leave the 20th century and strike out into the next.

The only thing this kind of net.politics is not about is the recreation of the dream of the 18th century 'public sphere'. If anything, net.politics of this everyday, microscopic kind shows why that was always a myth. People do not come together as universal rational beings in public life. They come together as particular beings with rational, emotional and rhetorical capacities.

Look at the transcript of any news group or listserver and what you see is always something more akin to Jean-François Lyotard's proliferating 'language games' than to 'communicative rationality'. And this is no bad thing. There may be many kinds of affective and perceptive skill that republican life requires.

So rather than a public sphere, net.politics creates what I call a virtual republic. Virtual in the sense that the possibility of collective self-invention emerges out of the nomination of public things and the advancement of agendas for action based on the creation of such realities. Net.politics of the smallest and most microscopic kind, little day to day interactions, creates this possibility of imagining, negotiating and implementing futures.

People are teaching themselves how to be citizens again, but not citizens as imagined by state-sponsored civics education, nor as imagined by techno-utopian boosters. Rather, citizenship is reinventing itself for itself, free from ideal models of what it should be like. The results aren't always pretty - a flame war is public life gone wrong. But the potential is slowly, slowly emerging for a new kind of polis, a new kind of public, a new kind of political life. Or, most simply, a new kind of life, for humans are still what Aristotle called zoon politikon - the political animal.

McKenzie Wark is the author of The Virtual Republic (Allen & Unwin). [www.mcs.mq.edu.au/~mwark] XMckenzie.Wark AT mq.edu.auX

Douglas Davis 5:15:23 pm July 21, begin NYC. This morning I sang my song for about 200 artists, digitalists, designers from Kyoto. They dressed and laughed more American than I did. Later I call the White House to confirm that Susan Flynn-Hummel and I can indeed link real and virtual hands across the world on December 31, 1999. We decide to call it "earthshaking". Nobody at the post-Monica White House doubts it. Tonight at dinner Susan and I will refine further the theory of quantum teleportation, which implies an unsponsored visit to the edge of the universe, at least in IBM theory. THIS LIFE IS MAD. As William Blake was mad in his songs, as Artaud was mad in his plays, as innovation has always been mad. No, net.politics has no link to any politics we have ever known because the players - the voters - are totally different. More than half the human race, once silent - women - is now vocal. Once they give up trying to behave more boorishly than men, once they lay their own sacred truths upon us, the world will change. They will do it through being-in-communication. I am such a communication hybrid, half here, half there, half man, woman, partly flying brain. So are you. This situation has no precedent in human history and nobody knew it was coming. Shake the past. Study and enjoy, but do not revere it. Neither Michelangelo nor Marx nor Einstein nor Thatcher can provide any lessons about tomorrow, which is right now. Today is raring to bite or kiss you in your private parts when you least expect it......... 5:36:52pm, July 21, NYC.

Douglas Davis is a telecommunications artist and author of Art and the Future. Sentence, 1994 to infinity [math240.lehman.cuny.edu/art] Xdmd AT echonyc.comX

See also: "Dialogues with the Machine", p. 8/9

Vladimir Muzhesky It may sound a little bit far-fetched, but I would doubt the authenticity of media as a phenomenon-in-itself. It seems that social institutions, both alternative and mainstream, load media with PR patterns and thus translate what media could have been into what media have to become from the point of view of social law, practice and folk mythology.

Before there is a theory which defines a node-to-socium protocol there is no law we can follow. Before we define ourselves and re-define the area as itself, there is no logic to follow.

Up to now, there has been no content - rather the simulation of yet nonexistent network standards. There are workshops, think tanks, conferences, but there are no axioms, definitions, strategies.

Vladimir Muzhesky is an artist. Xbasicray AT thing.netX

Esther Dyson While television is a great medium for propaganda, the Net is a great medium for conspiracy. For good or bad, the Net helps fringe opinions find adherents and then gives the adherents a voice. In a world where only the official story is allowed, the Net can be a liberator. In a world where discussion is already free, the Net can be a medium for cranks and crazy theories - but also for debunking them. Thus, in politics, it fosters discussion so that - ideally - consensus can arise. The idea is for ideas to win by making sense and gathering adherents, rather than just by collecting votes without real discussion.

In the end, the Net offers great value because it fosters decentralisation even more than democracy does. The challenge is to keep decentralisation from becoming fragmentation.

Esther Dyson is chairman of EDventure Holdings and author of Release 2.0: A design for living in the digital age (Penguin/Viking). Xedyson AT edventure.comX [www.edventure.com]

Felipe Rodriguez Governments struggle to keep pace with the Internet. Numerous attempts at government censorship have failed. Information that is illegal in one country but legal in another can be made available on the Internet, regardless of national law. Governments' attempts to censor information have been circumvented by copying the information to many different locations on the Net.

The individual has gained more freedom of choice to access any resources him/herself, whereas in the past governments had much more power to restrict such choices. But this individual freedom may be temporary. More effective ways to censor information on the Net are being invented and stimulated by our governments. Market forces are granted power to police individuals on the Net under the guise of 'industry self-regulation', preventing publication of information without proper legal procedures.

There is a global coalition of governments trying to restrict the use of strong encryption by individuals, in order to be able to collect and read people's e-mail. Despite the information revolution, we are closer to a Big Brother society than ever before.

There is a growing need for artists, activists and others in every country to protect their privacy, freedom of speech and freedom to access information on the Internet. Become active, become vocal!

Felipe Rodriguez Xfelipe AT xs4all.nlX [www.xs4all.nl]

Geert Lovink A Circular Dream on the Cybernetic Situation The question of net.politics arises after the sound and fury of the commodified spectacle has faded away. Here, besides the void of consumerism, we can ignore dead-end postmodern fatigue and make space for numerous, far more insistent voices. Take the passage through the mirrors of arbitrary identities, styles and gadgets and live the power of electronic certainty. Scattered public discontent is finding new forms of organisation, through a fearlessness towards all forms of control, an indifference to cynical deconstruction and a determination to join forces.

Beyond the dialectics of the 'real' and the 'virtual' there are networked jubilees, tactical gala events, affectionate cyberspatial gatherings, and nastier forms of electronic resistance, all of which are unaffected by the Laws of Infotainment. Hot zones of useless data. But it will take a while to overcome the damage caused by the regime of political correctness. PC's internal policing has installed a culture of suspicion and surveillance, effectively keeping people from expressing their unaccustomed, spasmodic anger as soon as unskirtable contradictions arise.

Currently, dissent is being monitored by NGO and media professionals who have assumed the task of speaking out previously shouldered by political parties, trade unions and 'new' social movements. This type of 'perception management' can easily be smashed or, even better, ignored. Its profound, ongoing misunderstanding of the Net is an encouraging sign. The nomadic hedonism of raves has been contained, like the political rebellion of previous decades, by reducing it to a pop fashion commodity. But, despite these processes, it still has a sting in the tail. Don't believe the (hype of the) Fall. Disillusionment is no longer the tragic end but the 'human condition'. Realism and pragmatism are not just the fall-out of a decayed idealism - they have grown into major ideologies.

There must be ways to exit the logic of the sell-out's eternal return. Transformation should be possible without being absorbed into the culture of business: ways of speaking about processes of 'growth', anti-careerism, sovereign forms of agitation, illegal models of finance, large-scale communication guerrilla, mass protests within the boundaries of the Net. Macro politics with dirty hands, supporting a margin of revolutionary spirits, and vice versa.

Politics in the digital age first of all means a widespread awareness of the economic and political conditions under which we communicate. Telematerialism: Kittlerism for the masses, Virilio for president. No cheap promises, no elitist pessimism, no popularist conspiracy talk. Instead, a worldwide campaign (and debate) about standards and protocols. Let us stop complaining about monopolies such as Microsoft, AOL or UUNet. Instead, we can claim bandwidth, hijack satellites, realise free, public content against copyright for the few, frustrate electronic commerce systems.

I know: media activism, old school. Still, there is a need for dialogue, a growing sense of solidarity among media-aware groups and individuals, an urge to stop terrifying state control and corporate takeover, to defend the free territories before it's too late. For in the end, the paper tigers are not that powerful, as Mao used to say. We should not overestimate them: they are real and paper at the same time, and this counts especially in the days of virtuality.

Geert Lovink is co-moderator of nettime-l. Xgeert AT xs4all.nlX [www.Desk.nl/~nettime]

Peter Leyden Here's the basic dynamic of political economy - straight out of Marx. It's as true now as ever. You change the technology at the basis of a society and it fundamentally changes the economy. When you fundamentally change the economy, it isn't long before the politics begin to change as well. In our era, we've laid down the new technology and the economy is morphing beyond recognition. That's a done deal - it's simply working its way through the global economy from its beachhead in the United States and, now, Northern Europe. So, right on cue, we're starting to see the impact of this new tech and this new economy on politics. The newly empowered players rooted in this new tech and new economy are starting to flex their muscles, play politics and shape the world in their own image. I could point out examples of this all over the place, particularly at ground zero in Northern California. But frankly, I won't. I'd rather point out the next iteration of that dynamic: nothing short of forcing change on a civilisational level. (Why dabble in changing politics? Let's go all the way and change civilisation.) I think the world is now moving to a stage where we will be so technologically interconnected, and so economically interdependent, that we will create the conditions for the birth of a new kind of civilisation, a global civilisation. Over the course of the 21st century (the glacial pace of civilisation building) we will create a world where human beings, regardless of where they grow up and live, will essentially experience a life that has more in common with everyone on the planet than with some cultural subset rooted in a geographic area. Blame the Net, or praise it - as I do. It will be a good thing. But then, I'm an optimist, a believer in the really Long Boom.

Peter Leyden is a member of the Global Business Network and co-author of "The Long Boom". Xleyden AT gbn.orgX [www.gbn.org] "The Long Boom": [www.wired.com/wired/5.07/longboom.html]

Konrad Becker & Marie Ringler "What does the Net mean for politics?"

It could mean getting into trouble for speaking up against censorship on the Net...

Public Netbase t0 is a non-profit Internet service provider. Along with being an internationally acclaimed content developer, the organisation also runs a comprehensive event and information program in the Viennese Museumsquartier.

On July 7th this year Public Netbase was wrongly accused of distributing pornography on the Net by Mr. Haider, leader of the right-wing Austrian Freedom Party (FPO). The grounds for these unfounded allegations were a series of public events organised by Public Netbase in May 1998 entitled "sex.net - Sex, Lies & the Internet" [www.t0.or.at/event/sex-index.html]. The programme critically examined the issues of pornography, censorship and the Internet from a feminist perspective. As evidence, the FPO presented print-outs from a commercial website with the address [www.sex.net]. Because of the similarity of the site's name to Public Netbase's programme, the FPO concluded that Public Netbase was the publisher of this site.

At a press conference on 14th July, Public Netbase dismissed the accusations and announced that it would be taking legal action against Mr. Haider. Within an hour Mr. Haider staged a press conference and released a press bulletin in which he aggressively repeated the earlier attacks on Netbase and threatened to refer the issue to the Federal Chancellery. Next, Mr. Haider announced the beginning of a campaign against this and against child pornography, and his intention to make a publication of 'Degenerate Art' (i.e. artworks funded by public bodies which feature sexual and/or pornographic acts).

The Freedom Party started to attack authors and playwrights in the early 90s and has been stepping up its hostility towards the arts and cultural scene ever since. Only two years ago a dozen well established 'decadent' artists were singled out on larger-than-life billboards and branded for allegedly being in line with the Social Democrats (SPO).

Konrad Becker & Marie Ringler manage Public Netbase. Xkonrad AT t0.or.atX Xmarie AT t0.or.atX

Saskia Sassen Today the Internet is no longer what it was in the 1970s or 1980s; it has become a contested space with considerable possibilities for segmentation and privatisation. We cannot take its democratic potential as a given simply because of its interconnectivity. And we cannot take its 'seamlessness' as a given simply because of its technical properties. This is a particular moment in the history of digital networks, one when powerful corporate actors and high performance networks are strengthening the role of private digital space and altering the structure of public digital space. Digital space has emerged not simply as a means for communicating, but as a major new theatre for capital accumulation and transference. But civil society is also an increasingly energetic presence in cyberspace. The greater the diversity of cultures and groups, the better for this larger political and civic inhabitation of the Internet, and the more effective the resistance to the risk that the corporate world might set the standards. The Internet has emerged as a powerful medium for non-elites to communicate, support each other's struggles and create the equivalent of insider groups on levels ranging from the local to the global. We are seeing the formation of a whole new world of transnational political and civic projects.

Saskia Sassen is a professor at The University of Chicago. Her latest book is Globalisation and Its Discontents, New York: New Press, 1998. Xsassen AT columbia.eduX

Scott Aiken Inevitably, people will get riled up in a lot of places, and when they do they'll use the Net to knit together highly distributed media systems to get their way. The power of these will cause all kinds of havoc, like town gossip in little villages scaled up to the whole world. This will keep politicians and media on their toes.

Dr. G.S. Aikens, Faculty of Social and Political Sciences, Cambridge University Xgsa1001 AT cus.cam.ac.ukX

Tony Blair in Absentia I passionately believe in the equal worth of every individual. I hate the squalor and idleness that shame our rich societies. I am committed to breaking down barriers of class and sex and race. I want to curb unaccountable power. I want more people to get on. And I am certain that we will only help more people get on if we create a strong and just society, designed to empower the many and not the few. These are old values. But they need to be realised in new circumstances.

[...]

In a global economy, 1 trillion dollars a day is traded on international currency markets. The world's multinationals have budgets bigger than most governments. Exchange controls and capital controls have been abolished. So we need to rethink the role of macro-economic policy.

Changes in computing and communications technology are transforming our culture and society. 40 per cent of families with children have access to up-to-date PCs. Ten years ago we had three TV channels; soon we will have 200. We can produce more than ever before, with less labour than ever before.

[...]

The ideological changes [of New Labour] have been significant too. We recognise the global economy. We know the state can become a vested interest. We want to see successful, profit-making companies. We want to decentralise power. We have taken seriously the challenge of environmental concerns.

Taken from a speech given by Tony Blair at the Nexus/Guardian conference, 1/3/97 [www.netnexus.org]

Ziauddin Sardar The taming of cyberspace is following the well-established colonial patterns of the conquest of the non-Western societies and cultures by the imperial West. Colonialism involved much more than physical occupation of the non-West; it was equally concerned with cultural and mental possession of non-Western people and representation of all things non-Western as innately inferior. The net is taking the colonial project to its next dimension. Like colonialism, it is rapidly becoming an instrument for creating new markets for Western consumer and cultural products in the non-West; for Western control and management of the non-West; and for marginalising non-Western cultures and world views by propagating and imposing Western ideologies and cultures on the non-West. The non-Western response to the new imperialism of the Net will be similar to its response to colonialism: resistance by all necessary means. Thus, from the Asian and African viewpoints, the politics of the Net is essentially the politics of resistance. And the net itself will become an instrument of this resistance. It will be used both to create a new non-Western cyberculture of resistance as well as subverted to undermine the domination and control of the West. We are thus heading towards covert and open cyberwars.

Ziauddin Sardar is editor of Futures, the monthly journal of forecasting, planning and policy. XZSardar AT compuserve.comX

Dave Carter If we accept that a key aim of politics is empowering people to take more control over their own lives, then the role of the Net must be to provide new and imaginative tools to aid that process. Here in Manchester we believe that local government, together with the voluntary sector and the wider labour movement, has a crucial role to play in facilitating this. There is a lot of hype about the idea of 'electronic town halls' - we believe that electronic democracy has to be built from the bottom up. We are concentrating on supporting local initiatives, such as the Electronic Village Halls, which provide training and facilities to support people's access to the Net; the Labour Telematics Centre, which brings together the Workers' Educational Association and trade unions to ensure that the organisational abilities of workers, both current and future, are strengthened through innovative uses of the Net; the Manchester Community Information Network, which is helping to develop on-line community networks at a local level.

The creation of the virtual Town Hall may still be some way off, but if it builds on the strengths of these local initiatives then it can provide an exciting way of reinvigorating local democracy at all levels.

Dave Carter, Manchester City Council. Xdave.carter AT manchester.gov.ukX

Diana McCarty Faces is a closed mailing list for women in media art. It was established in early '97 to provide a space where women could get information about one another, as well as information which we didn't get through the established channels of media art. And it is really working! There is a community being built up; women are developing projects and friendships over the list. The most amazing thing is to see what a difference it makes at media arts events. There are all these groups of totally different women who already have a sense of knowing each other, and they are talking. This might not sound like much, but it does make a difference. Compared to their obscurity within new media culture a few years ago, women are really gaining a presence. The list is still fairly small, just about 160 subscribers, but there is an amazing amount of diversity within that number.

Diana McCarty and Kathy Rae Huffman are the list owners of Faces, which they co-moderate and of which the list administrator is Valentina Djordevic. Subscribe by mailing them at Xkathy AT vgtv.comX, Xdiana AT mrf.huX or Xvalid AT sero.orgX. Faces is hosted by the Cybergrrls Webstation.

Korinna Patelis What does the Net mean for politics in Western Europe? It means the perfect technological determinist alibi for the revival of a series of bankrupt, clichéd versions of neo-liberalism: freedom of speech hysteria, an unfounded faith in individual sovereignty, the market, globalisation, free-trade. It also means the exploitation of the digital moment by market determinists who naturalise the market by presenting the Net and the market as strikingly similar; as two essentially dynamic, fast and uncontrollable bodies which should be left untampered with to function properly. Presenting the market as fair legitimises private property as the mechanism for the allocation of resources on-line. It produces an unproblematised disenchantment with existing politics (or any mediation) which comes to life only to give way to a rather naive credo - a direct democracy-based libertarian version of what the world should be like.

The simplicity of the political hype leads to a further reduction of politics to crude political dichotomies: state/market, socialism/capitalism, E.U. protectionism/W.T.O. liberalisation, culture/commerce, totalitarianism/freedom, "Eastern" authoritarian governments/the U.S.

Old clichés, new populist clothes!

Korinna Patelis, cop02kp AT gold.ac.uk

Ross Anderson

For me the significance is that I can do scientific work in collaboration with people in Austria, Norway or Oregon almost as easily as with people down the corridor. I have co-authored papers with people I haven't even met. So the value to me is clear; the value to the general public is less so.

Ross Anderson is head of the Cambridge Computer Lab and chair of the Foundation for Information Policy Research. rja14 AT cam.ac.uk

X Tony Benn interviewed by Hari Kunzru

If you listen to the New Labour version of history, then Tony Benn is a dinosaur. Unrepentantly socialist, a believer in a planned economy, he is also a politician in the old style, a debater, a public speaker, scornful of soundbite politics and slick presentation. All this indeed sets him at odds with current political culture. Cool Britannia politics is strangely silent about unionism, the rights of the individual, the defence of the weak against the strong.

But Benn is unconcerned about fashion. He knows he will have the last laugh on the Paul Smith-clad spin doctors who dismiss him as a relic. Why? Because the wily 73-year-old politician keeps the record himself: for years he has taped, and now videos, the meetings he attends.

So how do other politicians react when Benn starts videoing their meetings? "Ten years ago people would say put away your camera, and five years ago switch off that tape recorder but now people are so used to it that nobody ever minds at all." Eddie George, Governor of the Bank of England, was a recent victim, and apparently entirely relaxed about being filmed by this arch anti-capitalist. I say I find this willingness surprising, in a time when 'plausible deniability' has become the order of the day, and Benn gives another indicator of why he is so refreshingly out of step with the times. "Oh, I've never believed in being on or off the record. I think if you're elected you're on the record. I'm in favour of saying the same thing to everybody, privately or publicly." When did you last hear a British politician say that?

Benn is also a great advocate of the democratising possibilities of new technologies. In a media age, cheap near-broadcast-quality video equipment is a powerful political tool when put into the hands of the people. As is the internet. "Take the Liverpool dockers," he explains, gesturing with the trademark pipe so that it sprays strands of smouldering tobacco across his study. "They were on strike for over two years, and they went onto the internet. When I went up to visit there were American dockers, Canadian dockers. A Swiss journalist I know heard about the strike from trade unionists in Bombay. They got money and support from all over the world. There is no question about it. Politically, the internet is enormously important."

Many politicians, on left and right, would say as much. Whether your cause is trade unionism or supporting fox hunting, the net is a powerful tool for activism. But Benn is one of the few politicians prepared to defend the real public against the tabloid vision of public concerns. As Blair and Hague scramble to condemn paedophiles, Mary Bell, the commercialisation of Diana's memory and anything else they think will score high on the clap-o-meter, Benn is more concerned with the public's right to express itself, instead of having politicians do it for them. "I think the establishment are very very frightened of grassroots communication. All the talk about drugs and pornography on the internet is really government frightened of people using it for that purpose."

In common with many of the net's most ardent advocates, Benn believes many-to-many media may provide a credible alternative to our increasingly debased mass media culture. "The number of websites, the daily communication that goes on is on a huge scale. And I think that will undercut the Murdochs and the CNNs in time. At the moment it goes through to opinion formers, but in time it will spread through the local networks."

This confidence will not be shared by everybody. But Benn seems sure that the march of history will work in favour of technological democratisation. He constantly places the current technology boom in broader historical context, the tell-tale habit of an old-school Marxist. For Benn, the Tolpuddle martyrs, the CND campaigns of the fifties, even Henry the Eighth, are all useful in understanding the questions we face now. "Control of communications is the key," he says, explaining the Henry connection. "He nationalised the church because he wanted a priest in every pulpit saying God wanted you to do what the King wanted you to do. Charles II nationalised the Post Office in 1660 so he could open everyone's letters. The Tories nationalised the BBC in 1922 because they wanted a pundit on every channel saying there was no alternative to what the government wanted you to do. If you can break through that control and communicate directly, you are really destabilising the power structure."

Benn links freedom of speech on the net with the old issue of press freedom, and sees public access to encryption and a culture of anti-censorship as central to democracy. One wonders whether this is a definition of democracy which would be popular in Number Ten. When I mention that it is difficult to raise public awareness about the political issues raised by new technologies, Benn looks at me as if I have just complained because daddy won't buy me a new mountain bike. "It's like everything else," he tells me sternly. "You have to campaign for it. How did men get the vote? They campaigned. How did women get the vote? They chained themselves to the railings and went to prison. How did apartheid end? It wasn't because they had a spin doctor but because the Africans wouldn't accept exclusion from the democratic process. Life is a struggle between the rich and the poor, the government and the people, and it's always going to be like that."

So I ought to get up off my arse, then? Yes, Benn thinks this would be a good idea, not least because the stakes are getting higher all the time. The struggle between rich and poor is "intensifying, because of the amount of technical power at our disposal. If I kill you with a knife that's one murder. If I have an atom bomb I can kill ninety thousand people. The moral issue is the same. Information rich and information poor are a new classification of rich and poor, because some people have access to surveillance, information gathering, research workers and equipment, and other people have nothing. If you keep people ignorant they're more likely to do what they're told."

Listening to Benn's fiery rhetoric, one can't help but feel uncomfortably aware of the vacuity and bland opportunism that passes for politics in Britain in 1998. Whether or not one agrees with his economic prescriptions, the Grand Old Man of the Left is full of salutary reminders of what we stand to lose if we become lazy about democracy. He is also far more in touch with technology issues than the smiling soundbiters who sell 'a laptop for every schoolchild' or 'a safe internet' as the prescription for all technological ills. "I think that trying to retain self-confidence in the face of a rising tide of technology that gives power to somebody but not necessarily to you, is a very interesting political question." Indeed. Perhaps it is the key question of our times. And the answer? For Tony Benn it is, predictably, an old one. "Organise! Do it yourself! Democracy is not what somebody does to you when you vote for them every five years, it's what you do where you live and work - and that's why it's so frightening to the people in power."

XIV The Nineteenth Century and the future of technocultures in India by Ravi Sundaram

On May 11, 1998, a series of nuclear tests banished the 21st century from India and reinstated the visions of 19th century modernity with full force. In a sense the 19th century has always been there, but in the realm of high techno-culture in India, a moment of (silicon-driven) rupture was being posited over the past few years, confidently and not without a certain arrogance. This is somewhat ironic, since the idea of rupture was in itself a supremely 19th century concept, albeit shot through with a Jacobin imaginary.

On May 11, 1998, the Indian government, now controlled by the far-right Hindu nationalists, announced that it had carried out a series of nuclear explosions in the desert state of Rajasthan. The announcement was bland and technical, yet the effects were immediate. Every faction of the political class rallied around the government ('how can we criticise the achievements of Indian science?' as a communist leader put it...), and the middle classes were jubilant. The techno-nationalist moment had arrived. Indian scientists had shown the world. And what could be more politically correct than the fact that the head of the team of scientists which carried out the blasts was a Muslim, well versed in Hindu scriptures...?

What is crucial about these blasts is that they brought techno-politics suddenly and brutally into the public sphere. The blasts have also ensured a certain re-arrangement of technological time and modernity in South Asia. In the first place, the matching blasts by Pakistan have completed the process begun by India and put both countries in a state of nuclear terror. Despite the consistent nuclear hypocrisy of the metropolitan powers on proliferation (the argument being that Third World countries are less 'responsible' than the West with nuclear weapons), the sub-continent is faced with the real possibility of mutual annihilation. Also for the first time in the sub-continent, there is now a small but vigorous peace movement, which contains many dissident scientists from the state-sponsored research institutes.

What is particularly interesting is the acceleration of 19th century technological discourses after the nuclear blasts. Nuclear politics, as instruments of technological terror used by national states, is a typical 19th century practice. I refer to the '19th century' here not in terms of formal time but as an imaginative embodiment of a particular form of modernity. In this sense the 19th century could be said to have 'begun' in 1789 and 'ended' in 1989. What is crucial about this form of modernity is the magnification of the national sovereign state as a site of power and violence and the use of older forms of communicative speed - telegraph, railway, the automobile. The 19th century also privileged a form of 'public' politics - hence 1848, 1917, 1968, 1989.

Net culture in the west, by and large, has generally been quite contemptuous of 19th century politics, in fact of most public politics in general. This is particularly true of the US net scene, which is almost self-contained in its imagination. Given that 'Europe' has little or no cultural presence in the electronic geographies of the Third World, India included, it is the US example that is particularly attractive to the emerging techno-elites.

In India these elites are clustered around the large software industry, and technocratic sections of the state. Here, the US discourses on techno-culture, with their anti-statism and libertarian rhetoric, hold a particular appeal to the new techno-elites. By the 1990s elite discourses on technology would typically rail against the now troubled state institutions and bureaucracy and call for a new economic model in alliance with metropolitan capital. Borrowing from the futuristic rhetoric of the Western elite, this group had been trying, often unsuccessfully, to make its agenda public.

I have argued elsewhere that this transition was part of a general crisis of Indian nationalism and a transition to a new, still amorphous urban culture. At the same time the upper-caste elite's investment in the city itself has been ambivalent, favouring as it does the safety of a new suburbia and the emerging 'techno-cities' in Bangalore and Hyderabad.

Many years ago, Herbert Marcuse offered a radical re-reading of the Prometheus myth. The culture of modernity, said Marcuse, has been typically Promethean, with its investment in progress and accelerated development. Prometheus's theft of fire from the gods now became a call to subjugate nature, thus suppressing the more radical Dionysian elements in the modern. In the case of post-Independence India, Promethean modernity took the shape of the ideology of development.

Here, a technocratic elite of modernisers would organise society towards the future. 'Society' was seen as a tabula rasa to be remapped with a scientific nationalist vision. Large scientific institutes were set up with state support; technical universities (the Indian Institutes of Technology) were established with US help. This technological space was of course typically 19th century in its self-imagination. It was composed of a westernised, upper-caste elite; the vast scientific institutions were governed by a formal bureaucratic rationality and closely linked to power.

Technological space was also monumental - not unlike the Soviet experience. The monument (Nehru called them 'temples of modernity') could be a steel mill, a dam or a power plant. Along with the technological monument came 'the Secret'. The Secret was the idea that technological knowledge was the monopoly of the national state; all transgressions would be severely punished. The idea of the Secret was once again a 19th century concept. As such the state's monopoly over Secrecy was backed up less by simple terror - as under Stalinism - than through a vast corpus of laws, mostly inherited from British colonialism.

The dream of Prometheus failed. By the 1970s the ideology of development was compromised by severe economic crisis and the rise of social movements. Since the beginning of the 1980s there has been a slow transition from the old regime of national development towards the loosening of controls vis-a-vis transnational capital. Techno-politics also changed. From a state monopoly of technological knowledge there has been a secular movement towards the private sector in terms of the visible production of certain technological goods. The key development here has been the rise of the software industry.

The cultural politics of software in a Third World country like India pose interesting problems. Let us look at the facts. Software growth has been phenomenal, with exports running into many billions of dollars and projected to grow even further. But the vast majority of the software industry is geared towards exports, fulfilling the turnkey needs of transnational capital. Only a small fraction of it is actually sold in India, and there is very little software in the local languages. In terms of a global commodity chain, software plays the role that Third World textiles used to play a few years ago, occupying the lower end of a commodity chain that begins in the West.

But imaginatively software represents a certain form of knowledge creation, which particularly endears it to the upper-caste elites. It is a form of knowledge which, along with the Net, allows the elite to emancipate itself from the everyday (now seen to be contaminated by the politics of uncertainty), and the limits of territory.

Here lies the problem of counter-cultural techno-politics in India. There is simply no counter-culture which parallels those of European and US net-space, and there probably will not be for a long time. In fact, in India itself net culture remains an elite preserve and part of an urban consumption regime hegemonised by the far right.

Since the rise of an urban consumption culture in the 1980s, it has been the Hindu nationalist right that has been the most active in India's towns, pushing a technologically savvy political culture that has spoken to the new tastes of middle class urban life. The Hindu right pioneered the use of audio cassettes in the 1980s to spread hate speeches; it also used large screen TV projections to show propaganda films that called for campaigns against the Muslim minority. It was the right which opened one of India's first websites, and which today operates a large number of them for the Indian diaspora. The Hindu right combined a peculiar mixture of 19th century state authoritarianism with a remapped techno-politics drawn from post-sovereign cultures. In a sense this is 'hybridity' Indian style, a dangerous mixture of techno-cultural innovation and authoritarian politics. It is precisely this combination that endears the right to the vast majority of the technical community, as well as to young upper-caste programmers.

In fact, the government commission's recent release of a blueprint for information technology is revealing. The commission consisted of various elite technocrats committed to a futuristic ideology in alliance with transnational capital. What the document does is remarkable in ideological terms, for it blends futurism with the Hindu right's nationalist goals. The sections of the technological elite that had complained about state dominance in technology are now firmly part of the Hindu right's project. The 21st century imagination of the futurists blends seamlessly with the politics of a state committed to the 19th century politics of nuclear terror. It is a resolution of multiple times that philosophers would have marvelled at.

To be sure, this picture does not exhaust the entire gamut of newly emerging urban cultures in India today. There is a different 'Indian' techno-culture which is, again, a typical mixture of older forms of mechanical reproduction and new innovations. Thus there is a vast music market (based on inexpensive audio cassettes) which operates through a mixture of legality and piracy. The music scene is one which lives off the culture of popular cinema, with a radical mixing of music styles and electronic innovation quite unlike those of the past. A lot of this new space has yet to become part of elite net culture in India itself, although it has a lively presence in the diaspora. But the BBSs in India (which to this day remain illegal) have a vast invisible presence in urban culture and provide a space to those who cannot afford the net. BBS culture remains varied and dynamic, with a deeper investment in what the writer Michel de Certeau has called the 'millennial ruses' of the popular and the everyday.

The landscape of modernity in India is therefore a mixture of the 19th century's techno-politics of death and nuclear terror, an urban consumption regime deeply implicated in right-wing politics, as well as 'post-nationalist' techno-discourses like the software culture and elite net culture. At the margins, yet not entirely immune to the above scenario, is the new music scene and the illegal BBS culture. Fluidity and continuity, chaos and homogeneity, violence and pleasure are all part of this landscape.

Here the problem of compatibility with Western net cultures, even those outside the mainstream, can be posed. In the West there is a large and varied net community, with its own subcultures and practices. I cannot see a similar situation emerging in India (or anywhere else in the Third World) for some time to come. The dialogue on techno-cultures between the 'West' and the Third World has of course not even begun, in contrast to the relatively varied debates on post-colonial imaginaries in literature and popular media like cinema. In the case of techno-culture the problems are deeper.

It seems to me that simple appeals to greater access in the Third World evade the crucial issue. The old multicultural model, crafted to integrate racial and sexual minorities in the West, cannot be applied here. Part of the problem is also the narrow basis of the avant-garde in the Net which has a post-1968 aversion to public politics. Some of the soi-disant avant-garde practices in the Net are grounded in a bizarre self-referentiality, which is quite puzzling to a critical Third World observer.

A new politics/practice of translation between the West and the electronic periphery is both necessary and possible. At a basic level an appreciation of the complex nature of techno-cultures in the Third World is needed. In India, the greatest investment in electronic modernity has in fact come from the Hindu right, which has combined this with an authoritarian and dangerous nuclear politics. In the realm of popular techno-cultures, a rich and often pirate music culture has grown on the frontiers of the film industry. Here, too, the models of 'hybridity' and 'multiculturalism' that have emerged out of western debates make little sense.

Ravi Sundaram is a Fellow of the Centre for the Study of Developing Societies, Delhi, India. rsundar AT del2.vsnl.net.in

XX Pancapitalism and Panaceas: Manuel Castells in London by Martin Harris

Manuel Castells, widely acknowledged to be the world's foremost authority on the emergent 'network society', made a series of high profile appearances in London this summer to launch the 21st century Era think tank. Castells argues that global capitalism, dominated by information networks and knowledge intensive services, has emerged as the most productive system the world has ever known. It has also produced steep rises in inequality, crime and exclusion. Megacities like New York, Mexico City or Jakarta are connected to global networks in ways which disconnect the local poor from the sources of wealth. Whole countries and regions may fall into what Castells calls 'informational black holes'. Those who cannot learn the information-based skills will also be excluded. The lesson for Britain and Europe is that economic survival will depend on finding new and more flexible ways of using technology within the new world order.

But should information technology really be seen as the 'last, best chance' for the industrialised world? And how far should we buy into an image of societies dominated by networks? Getting serious answers to these questions is made more difficult by the political context in which Castells is speaking and in which his work is being read here in Britain. Politicians and policy makers need quick solutions to the problem of creating jobs, particularly in those regions which have become increasingly marginalised in the digital age. They tend to view information technology as an unconditional good, and as a panacea for most species of postindustrial blight. The new networked organisations are confidently expected to create knowledge intensive jobs, devolving power and eliminating bureaucracy. The problem here is that Castells himself has shown that there is no simple relation between information technology and job growth in service economies, which are now increasingly polarised between low end and high end occupations. The US Bureau of Labor Statistics forecasts a rise of 368,000 systems analysts and computer scientists in the years up to 2005. US statistics also show jobs being created in various other professional services, but this has to be balanced against the rises in low skill, low status jobs like retail sales (up 887,000), drivers (up 617,000), cleaners (up 555,000), food counter staff (up 552,000) and waiters (up 449,000).
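Set side by side, the figures quoted above make the imbalance concrete: the five low-skill categories together are forecast to grow by over three million jobs, roughly eight times the growth in systems analysts and computer scientists. A minimal tally in Python, using only the numbers already cited:

    # US Bureau of Labor Statistics forecasts quoted above (job growth to 2005)
    high_skill = 368_000  # systems analysts and computer scientists

    low_skill = {
        "retail sales": 887_000,
        "drivers": 617_000,
        "cleaners": 555_000,
        "food counter staff": 552_000,
        "waiters": 449_000,
    }

    low_total = sum(low_skill.values())       # 3,060,000
    print(low_total, low_total / high_skill)  # ~8.3 low-skill jobs per high-skill job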

Corporations like Microsoft have grown spectacularly, but the tendency, even in high tech sectors, has been for large corporations to go for growth through merger and acquisition. These are often justified as guarantees of funding for research and development. But mergers also allow markets to be dominated by a smaller number of corporate players who have created global economies of scale - hence the phenomenon of 'jobless growth'. Under these conditions, the real issue is not knowledge but control, and we should view with scepticism the claim that information networks will lead to a more liberated future of work, or a fundamental departure from the existing ways of industrial capitalism. Downsizing and job insecurity have received substantial media coverage - the enhanced forms of employee control imposed on those who remain in the reengineered corporation of the 1990s have been less well reported. The problem for the network society thesis is that this 'control revolution' (much of it supported by IT) is happening in many of the knowledge based sectors (such as banking, finance, or pharmaceuticals), where new and more devolved forms of organisation are supposed to emerge. It is responsibility, rather than power, which has been devolved in the corporation of the 1990s.

New forms of interactivity may nevertheless have the long term potential to change the forms of community, contract and association that have been with us for centuries. Here, a careful reading of Castells reveals a much more politicised view of the network society than was suggested by his broad brush statements about IT and job creation. The distinction between work and home is breaking down - and people may enjoy managing time and space more flexibly. However, in a competitive work culture, career advancement and promotion are closely related to personal visibility at the office. Working remotely carries the risk of isolation, particularly for women, who tend to adopt a greater burden of housework when they work from home. One of the paradoxes of virtual working is that it is those who are already well established in conventional jobs and professional networks who may be best placed to benefit from reduced commuting times and flexible working hours.

Information networks do change the organisation of work - but it has long been recognised that technological change at work is an inherently messy 'negotiated' process which has little to do with the smooth functioning of networks themselves. The key issue is not, in fact, 'information' but rather the choices which surround new forms of contract and control. The underlying point is that 'technology' needs to be understood not as a thing 'out there' but as a social process which is inseparable from political choices and interests. The network metaphor derives from the idea that the nation state is being left behind by the transglobal flow of information. But the state has hardly been absent from the technological developments which are having such a profound effect on society. Castells himself has questioned the more naive claims of the globalisation thesis, and he recognises that the state was a key player in the technological advances made in the 1980s. This is apparent in the industrial success stories of Japan and Korea, and in the development of the Internet itself. Castells is optimistic about the future of the Internet - but once again a close reading of his work produces a more political view of the interactive society than was suggested by what transpired at his public lectures.

Castells has argued that "every cultural expression, from the worst to the best, from the most elitist to the most popular, will be represented in the new digital universe". At present the multimedia industries are dominated by business interests whose prime concern is with entertainments like video on demand, digital theme parks and interactive games. As with other forms of digitised capitalism, industrial scale is paramount, particularly in distribution. Entertainment is the fastest growing of all US sectors, with a turnover of $350 billion per annum. Castells predicts that the information society will divide, in the short term, into two essentially distinct populations - the 'interacting', who will enjoy the benefits of genuinely new forms of communication, and the 'interacted', who will be provided with a much more restricted diet of prepackaged choices. But there is nothing inevitable about any of this. One survey showed that 35% of TV viewers were willing to pay for distance learning on the Net, while only 19% were willing to pay for video on demand. Interactive networks could support new forms of public space, shaped by a wide range of stakeholders and not just by corporate players fixated by consumer 'choice'. The starting points for the debate should be the values and contested meanings which are represented in cyberspace. EC policy makers do, of course, have a large political stake in encouraging the multimedia industries to create jobs, but this is not in itself incompatible with Castells's thinking on electronic public spaces.

Manuel Castells has mapped out the transformations in work and culture which are redefining the shape of industrial society. Corporations and politicians, recognising the extent of this transformation, realise that they have something to learn from his investigations. The danger for the rest of us is that the more challenging parts of his message may be lost in the clamour for immediate solutions to the economic and social problems thrown up by information capitalism.

Martin Harris, martin.harris AT brunel.ac.uk

XXIII Eyes, ears, mouths and media: Cyberns cross the borders of internal exile by Branka Davic

In 1991 war started in the territory of ex-Yugoslavia. Yugoslav society went into isolation. This process was slow, but it still appears definitive. Since May 1992 Yugoslavia, or ex-Yugoslavia, or Serbia and Montenegro, or whatever people call this country nowadays, has been officially excluded from the international community. This meant that all legal and financial transactions were prohibited and that no goods could be exported or imported without approval from the international community. Travel was also restricted in very sophisticated ways: a visa is required for almost all countries in the world, except three neighbouring ones - Hungary, Romania and Bulgaria. For the rest of the world, the process of getting a visa is really painful. For those unaware of the situation which preceded this one, the fact that Yugoslav citizens were once able to travel to almost any country in the world without a visa might better explain their current feelings of isolation. And yet, life continues, also for those who disagree with war and destruction. But what has happened to them?

The first big demonstrations took place in March 1991. More than one hundred thousand people took to the streets of Belgrade. The demonstrations ended with tanks, the police force and the army occupying the streets - a complete blockade of Belgrade. Lots of people were beaten, arrested and humiliated, but one couldn't hear anything about it on the television news, whereas 'official' (propaganda) reports spoke about hooligans attacking the police. This was when I first heard about young people distributing reports about what was happening in Belgrade during the demonstrations; it was when I first heard about people reporting what they saw just by chatting and leafleting. Unfortunately, the network of people doing this was small, and many of them still had no idea about computers or computer networks. But it created a picture of possible ways of communicating.

The stark choice faced by those who were opposed to the 'official' version of politics was of real or inner - geographical or psychological - emigration. But what does this mean exactly? It means that since 1991 several hundred thousand educated people in their early twenties to late forties left the country, alone or with their families. I was among them. For different reasons, those who stayed or later returned went for inner emigration. Inner emigration means self-isolation from official politics, from the official cultural scene, possibly even from the public in general. It means self-isolation in order to protect one's integrity and mental health.

The cultural scene, on the other hand, was 'occupied' by turbo-folk stars and their promoters in the official media. Another thing one could see everywhere was TV 'prophets' predicting a bright and happy future, economic prosperity - in a word: eternal happiness waiting just around the corner. Plus, of course, the endless political debates and war reports which featured in numerous, hourly TV special news-bulletins.

Almost the entire cultural scene of the 1980s disappeared from view. Artists and cultural workers were working silently, almost illegally, exhibiting in private flats, looking for alternative ways to express what they thought and felt. Only very few were organising exhibitions, screenings or seminars. One of these was Cinema Rex in Belgrade, as well as a circle of rebellious journalists and critics connected with Radio B-92. Such a circle existed in almost every town or city, but the rest of the country knew hardly anything about what was going on no more than 100 km away. The lack of magazines, newspapers and radio stations which would cover and discuss events was a bad thing, but even worse was the lack of information distribution in general. By far the best way to announce something was through rumours, by the spoken word, as in ancient times.

In 1996, my family bought its first computer. It was a first step. We bought a second-hand modem at almost the same time, which gave us the opportunity to connect to the university net. At the beginning, I found that only very few of my friends were connected too. The university net had only a small capacity; each time you did succeed in getting through, access was limited to twenty minutes. But it was something. The number of users rose rapidly in 1997. That was the year that the first commercial provider opened its offices in Novi Sad. The connection got better, but it was also more expensive, as was the equipment. Feeling constantly isolated, I came upon the idea of organising a small mailing list in the Serbian language which could help to spread information. I started with eighteen subscribers in May 1997, running it from my personal account without any special server or mailing list software. It still runs that way. Now, the cyberns mailing list has over sixty subscribers in seven countries. It is small but sometimes very useful, especially because of the fruitful cooperation with the Syndicate mailing list.
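For readers curious about the mechanics: a list run this way needs nothing more than an ordinary mail account and a file of subscriber addresses - each incoming message is simply re-sent to everyone in the file. A minimal sketch of the idea in Python (the server name and addresses are placeholders, not the actual cyberns setup):

    import smtplib
    from email.message import EmailMessage

    # Hypothetical subscriber file: one address per line.
    with open("subscribers.txt") as f:
        subscribers = [line.strip() for line in f if line.strip()]

    def forward(subject, body, sender="owner@example.org"):
        """Re-send a message to every subscriber, one copy each."""
        with smtplib.SMTP("mail.example.org") as smtp:  # placeholder mail server
            for addr in subscribers:
                msg = EmailMessage()
                msg["From"] = sender
                msg["To"] = addr
                msg["Subject"] = "[list] " + subject
                msg.set_content(body)
                smtp.send_message(msg)

The obvious cost of this approach is that subscription, unsubscription and error handling are all manual, which is also why it scales comfortably to sixty subscribers but not to six thousand.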

Somehow that was not enough though, and the feeling that something else had to be done was constantly present. A few months later, after many conversations with people from different fields, a group of people started to think about a cyberns media lab initiative. The name was very pretentious, because we had no physical space or donations of any kind, but we had our own private equipment and a lot of ideas. Right now the cyberns media lab initiative (because it still is more of an initiative than a 'real' lab) directly involves ten of us (performers, video artists, painters, architects and programmers). We realised immediately that we didn't have enough money to think about big events. But small things can always be done. The first thing we started with was a lecture by Geert Lovink at an alternative space called Fabrika, which we organised in May 1997. Two weeks later, we hosted a visit from Stephen Kovats, the director of OSTranenie, who gave a lecture at the Art Academy. During that year, our members also participated in the Beauty and the East conference in Ljubljana, the documenta X Hybrid Workspace/Deep Europe group in Kassel, Crossing Over in Sofia and OSTranenie '97 in Dessau.

1998 has been marked by more activities. Cyberns media members helped make many projects possible, including performances by the group BAZA, Larisa Blazic's web pages and video screenings of Aleksandar Davic's works. They participated in and/or organised many different workshops, such as the Crossing Over Video Workshop in Novi Sad (July 1-15). This August, three cyberns media lab members will present their latest communication-based production in Manchester, during Revolting. Aleksandar will present a selection of Yugoslav video production, and a video about Crossing Over will be presented by the project director. Before the end of the year we hope to finish work on our first CD-ROM, Collective Memories.

That's how cyberns media lab works: supporting our members in their individual work, providing technical help and knowledge, organising events and lectures. We still do not have any work space or financial support. What we have is a circle of friends and supporters in this country and abroad who give us feedback and confirm that we aren't working in vain. Big events require big money. But small things can always be done. Isn't that the truth?

CYBERNS MEDIA LAB is situated in Novi Sad, the second biggest city in Serbia, with about 400,000 citizens and its own university and football club.

Branka Davic, spiridon AT EUnet.yu

Open Source Development

By Gilberto Câmara, 12 January 2004

Source URL: http://www.metamute.org/editorial/articles/open-source-development

Featured in Mute Vol 1, No. 27 (Winter/Spring 2004) - Buy online £5

http://www.metamute.org/shop/magazine/mute-vol-1-no.-27

The production and development of open source software (OSS) has received substantial attention recently, following the success of projects like Apache, Perl and Linux. But what are the real dynamics of this ‘new’ mode of production? The National Institute for Space Research surveyed the production landscape of GIS OSS looking for answers. Gilberto Câmara, director of Earth Observation, shares the findings and argues for a new conceptual paradigm

The predominant idea of the open source production model is of committed groups of individuals operating in distributed networks, with each programmer working on a small but meaningful module. These programmers are apparently isolated, communicating by means of a central repository and mailing lists. They have individual incentives to participate.1 Some writers have gone so far as to identify in open source software a new mode of organisational structure: ‘commons-based peer production’.2 At the National Institute for Space Research, we have conducted a detailed study of one segment of the software market: geoinformation technology (GI), which includes geographical information systems (GIS), location-based services, and remote sensing image processing. The survey selected 70 GIS open source projects, mainly using a listing provided by the Freegis.org site. Its findings call into question the idea of open source as it is commonly understood. While carrying out the survey, we considered the following questions: (a) How is OSS developed and produced? (b) Who is actually building Geographic Information OSS products? (c) How can developing countries use OSS to meet their national needs?

Three clear models can be distinguished in OSS development: the post-mature model, the standards-led model and the innovation-led model. The post-mature model is found in strongly consolidated markets, where a proprietary product has gained a large market share. As a product becomes popular, its functionality and conceptual model become so well established that it is difficult for another commercial product to capture market share, even if that product is sold at a lower price. In such cases, there is a strong incentive for newcomers to license their products as open source: for many potential users, the perceived benefits of open source outweigh the cost of switching from the commercial product. One very good example of this is the Open Office productivity suite,3 which provides a Free Software alternative to Microsoft Office. The standards-led model arises when the establishment of a common standard for a product allows others to compete in the marketplace by replicating the standard as open source. Newcomers benefit from the substantial intellectual effort that has already gone into establishing the standard. An example is the SQL database standard, which has motivated products such as MySQL. Another is the POSIX standard for operating system interfaces, which has reduced the costs of switching from other UNIX-based environments to Linux.

The innovation-led model occurs when universities, public institutions and corporations produce work that has no direct equivalent in the commercial sector. An example is the University of California’s Postgres database management system.4 After an unsuccessful commercialisation attempt, a private company took over the development of Postgres, added SQL support and named the resulting product PostgreSQL, making it available as open source.

Based on size, geographical distribution and affiliation, our survey has distinguished three categories of OSS development teams: projects led by a single individual; projects developed by loose, geographically distributed networks of collaborators; and projects developed within corporations (private companies, government institutions and universities).

These results contradict the naïve view of open source software as predominantly developed by committed teams working through peer-cooperation. In fact, only four (6%) of the projects we looked at are based on such a loose network of collaborators; more than half are led by individuals. Corporation-based projects account for 41% of all cases examined. Out of the 29 corporations involved in developing open source GIS, 17 are private companies, eight are government institutions, and only four are universities. Currently, in other words, the academic research community is not significantly concerning itself with direct involvement in long-term open source projects. Maintaining and supporting an open source software project requires considerable resources, beyond the reach of most university groups, added to which there is a conflict between the generation of new research ideas and the need for long-term software maintenance and upgrades. Often, for a research prototype to evolve into an open source product, another team of developers must take over from the original research team and establish a support and maintenance infrastructure for the product.
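The reported shares are internally consistent: with 70 projects in total, 4 networked and 29 corporation-based, the individual-led remainder is 37, which is indeed 'more than half'. A short sketch in Python verifying the arithmetic (the individual-led count is inferred from the text, not stated directly):

    # Survey of 70 open source GIS projects; figures from the text.
    total, networked, corporate = 70, 4, 29
    individual = total - networked - corporate  # inferred remainder: 37

    for label, n in [("networked", networked),
                     ("corporate", corporate),
                     ("individual-led", individual)]:
        print(f"{label}: {n} projects, {100 * n / total:.0f}%")
    # networked: 6%, corporate: 41%, individual-led: 53%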

The relatively small proportion of innovative projects (19%) in our survey shows that the design of most open source software products is based on the post-mature and standards-led production models, in which the main aim is not directly to innovate, but to lower licensing costs and break commercial monopolies. Perhaps most significantly, the maturity, support and functionality of OSS products differ massively across development scenarios. It seems that the corporate environment is much better suited to long-term software development than the individual-led one. Individuals are constrained by their commitments, and are very rarely able to include full-time support for the software they develop, whereas many corporations rely on earning indirect revenues (e.g., consultancy fees) from their open source products. But the difference between a corporation and a networked team is much smaller in terms of quality and support: a committed team of individuals is able to produce results which are comparable to, or better than, those produced by corporations.

The fact that the direct participation of universities in open source software is limited means that innovative non-commercial projects account for less than 20% of the total; a large proportion (53%) simply aim to provide standardised components for spatial data processing. Two innovative projects developed by networked teams of programmers are GRASS [http://grass.itc.it] and R [http://www.r-project.org/]. Both products have a simple and well-understood conceptual design, and their innovative contribution lies not in their design, but in the analytical functions that scientists develop using these environments.

Many developing nations are currently considering policies to support or enforce the adoption of OSS by public institutions. The arguments in favour of such adoption include: lower licensing costs; independence from commercial monopolies; the availability of suitable software; and the ease of customising it to local needs.

Our study has significant consequences for developing nations considering OSS. The evidence we have gathered broadly supports the first two claims – but ‘software availability’ and ‘ease of customisation’ are far more problematic. The most successful open source software tools are infrastructural products, such as operating systems, programming languages and web servers. The huge demand in developing nations for end-user applications, especially in the public sector, seems unlikely to be satisfied by the small number of such applications being produced in OSS. Corporations still dominate the development of open source software, which takes place around their own strategic interests, and they are unlikely to furnish the full range of end-user applications needed by developing countries. This suggests that if governments in developing nations aim to profit from the potential benefits of open source, they must intervene and dedicate a substantial amount of public funds to support the establishment and long-term maintenance of open source software projects. The benefits of this strategy could be substantial. In the case of urban cadastral systems, based on a spatial database for middle-sized cities, the typical base cost of a spatial database solution for one city is US$100,000. Should 10 cities adopt an OSS solution in a given year, there would be a saving of US$1 million per year on licensing fees; money that could be well used for financing local development and adaptation. Government strategies for supporting indigenous open source software development and adaptation would also result in a ‘learning-by-doing’ process. Such processes foster innovation in the developed world and would likely do the same for emerging economies.
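The projected saving is easy to verify; a one-line check in Python of the arithmetic behind the US$1 million figure (the per-city cost is the author's own estimate):

    base_cost_per_city = 100_000  # US$, typical spatial database solution per city
    cities_adopting = 10
    print(f"US${base_cost_per_city * cities_adopting:,} per year")  # US$1,000,000 per year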

By looking in detail at open source software development in the area of geoinformation technology, we can see that the ‘Linux paradigm’ is exceptional. Corporations are the main developers of successful open source products, built around their own strategic agenda, and peer-networked teams develop only 6% of all open source GIS products. This result strongly mitigates claims that open source software development defines a significant new ‘mode of production’. In fact, the vast majority of substantial software design and development is still the product of qualified teams operating at a high level of interaction. Developing software in a decentralised manner requires a modular design that is difficult to achieve for most applications, since few software products can be broken into very small parts without a substantial increase in interaction costs. These results have important consequences for public policy guidance. Governments worldwide that try to benefit from the open source software model by establishing legislation mandating its use could be frustrated by the lack of mature, available public-sector applications. In order to create the software they need, governments will have to establish publicly funded projects for open source development and adaptation to local needs. Software, whether open or closed source, is still constrained by the essential requirements of its development process: conceptual design, program granularity, cohesion of the programming team and dissemination strategy. Failure to understand the realities of the open source development model could result in a lost opportunity for the developing world to reduce the critical technological gap between rich and poor nations.

MORE READING

Barton, J., D. Alexander, et al., ‘Integrating Intellectual Property Rights and Development Policy’, London: UK Department for International Development, 2002
Benkler, Y., ‘Coase’s Penguin, or, Linux and The Nature of the Firm’, Yale Law Journal 112 (Winter 2002-03)
Brooks, F., ‘No Silver Bullet: Essence and Accidents of Software Engineering’, IEEE Computer, 1987
Câmara, G., R. Souza, et al., ‘TerraLib: Technology in Support of GIS Innovation’, II Workshop Brasileiro de Geoinformática, GeoInfo2000, São Paulo, 2000
Dravis, P., ‘A Survey on Open Source Software’, San Francisco, CA: The Dravis Group, 2002
Ghosh, R. A., B. Krieger, et al., ‘Open Source Software in the Public Sector: Policy within the European Union’, Maastricht: International Institute of Infonomics, University of Maastricht, The Netherlands, 2002
Hibbard, W., B. Paul, et al., ‘Interactive Visualization of Earth and Space Science Computations’, Computer 27(7), 1994, pp 65-72
Kogut, B. and A. Metiu, ‘Distributed Knowledge and the Global Organization of Software Development’, Boston, MA: Massachusetts Institute of Technology, 2001
Landes, D. S., The Wealth and Poverty of Nations, New York: W. W. Norton & Co, 1999
Pebesma, E. J. and C. G. Wesseling, ‘Gstat: a program for geostatistical modelling, prediction and simulation’, Computers & Geosciences 24(1), 1998, pp 17-31
Schmidt, K. M. and M. Schnitzer, ‘Public Subsidies for Open Source? Some Economic Policy Issues of the Software Market’, Munich: Seminar for Economic Theory, Ludwig Maximilian University, 2002
Stonebraker, M. and L. A. Rowe, ‘The Design of POSTGRES’, ACM-SIGMOD International Conference on the Management of Data, Washington, D.C., 1986, pp 340-355
Weber, S., ‘The Political Economy of Open Source’, Berkeley, CA: University of California, 2002

Gilberto Câmara is Director for Earth Observation at the National Institute for Space Research (INPE), Brazil [http://www.dpi.inpe.br/gilberto]

1 Weber, S., 2002
2 See Benkler, Y., ‘Coase's Penguin, or, Linux and The Nature of the Firm’
3 Download OpenOffice at [http://www.openoffice.org]
4 See Stonebraker, M. and Rowe, L. A., 1986
5 Ghosh, Krieger, et al., 2002

Harvest Time on the Server Farm (Reaping the Net's Body Politic)

By Roy Ascott, Sara Diamond, Geert Lovink and Pauline van Mourik Broekman, 10 September 2000

Source URL: http://www.metamute.org/editorial/articles/harvest-time-server-farm-reaping-nets-body-politic

Featured in Mute Vol 1, No. 18 – I Am The Network

Mute Vol 1, No. 18 – I Am The Network - Buy online £5

http://www.metamute.org/shop/magazine/mute-vol-1-no.-18-%E2%80%93-i-am-network

The 'Internet Revolution' is nearly a decade old. But what type of 'revolution' is it, and what type of revolutionaries are net users? The worlds of digital art and theory have gone round the houses on these questions; Pauline van Mourik Broekman caught up with three of their members - Sara Diamond, Roy Ascott and Geert Lovink - to get an update on the state of conflict

There is no anno domini for Internet time: its converging technologies and overlapping histories do not easily lend themselves to one linear story. 60s guru Marshall McLuhan called communication technologies the extensions of man; e-business bible Wired heralded the Internet as a revolutionary medium; influential sociologist Manuel Castells calls the political landscape it helped create ‘the network society’. But, from the mid 1990s — the time of the Net’s first ‘big bang’ — it was arguments over its social character that vitalised the net community. Fought everywhere from the media to the academy, zines, lists and conferences, these philosophical gang wars seem to have lost their ferocity (perhaps even their raison d’être) now the Net has been normalised. In the public imagination, extropians, hi-tech Buddhists and hackers — all of whom claimed the Net as quintessentially theirs — have been replaced by online shoppers, e-traders and dot com entrepreneurs. At the same time, metaphors from its heyday — of out-of-control hives and other biological systems — continue to permeate popular discourse; the new lexicon can accommodate ‘global brains’ and ‘emergent orders’ very well. So, what is the Net’s body politic, roughly seven years into the Internet ‘revolution’? Can we even speak of ‘bodies’ anymore? Pauline van Mourik Broekman chewed these issues over via email with three members of another Internet-bound tribe, the digital art community: Sara Diamond, artistic director (MVA) of the Banff Centre for the Arts; Roy Ascott, pioneering electronic artist; and Geert Lovink, speculative media theorist.

PB: Whatever your point of view, one fact is inescapable — and that’s the degree to which computerisation has penetrated biological research at every level. It’s now impossible to posit a discrete natural or biological realm — if it ever was — with technology mediating every stage of scientific analysis. Despite the fundamental impact on the natural world that IT also causes, are these nevertheless just reiterations of very old questions about nature and nurture, nature and technology and nature and the production of value? Or do you think the emergence of ‘bioinformatics’ is symptomatic of something altogether different, a ‘bioconsciousness’ perhaps? And how would you describe the role the Internet has played in this change?

RA: ‘Biological research’ covers a vast field, with DNA and genetic engineering currently in the media spotlight. But neuroscience and the study of consciousness are equally significant. Indeed that is where the greatest challenge to science lies. We know almost nothing about consciousness, and yet that is finally what defines us as human beings and where the most significant transformations of self can take place.

Now, the Internet can be seen as an extension of mind, just as the book has been. However its structure is inherently non-linear and associative, in ways that are very similar to mental processes. These qualities enable thought to leak through boundaries with the effect of making minds in cyberspace more permeable. A kind of collective intelligence such as Pierre Lévy describes may emerge. Ultimately, this is all a question of language.

PB: Geert, do you also see this uncanny similarity between, and thus easy permeability of, one ‘associative’ system and another? Is the self, your self, being transformed by the Net?

GL: The Self, as I understand this psychoanalytic term, has no relation either to biology or to technology. It is formed within a very broad context of culture and civilisation and, individually, by early socialisation. I read Roy’s work strictly in political terms, as a pre-Enlightenment romantic escape route for tired post-modern people, fed up with the world, the Internet, contemporary art and the young generation in general. These bio-centred metaphors for technology display a great eagerness to excise layers of society, power, race, gender, even business.

For me the Self is a somewhat tragic notion. The damage is already done — by growing up in this world which is, in essence, neither good nor bad. The question should be whether technology will function as armour or enhancement for the Self: will it help us to navigate or isolate ourselves from the world’s complexities?

RA: It would be helpful if Geert were to address some of the first questions. Instead he chooses to make rather sullen assertions/assumptions about me, framed in the old language of sociology. The Internet, contemporary art and the current generation are central to the world view I espouse and fill me with the most exhilarating enthusiasm and optimism. Romantic? Yes! Pre-enlightenment? Well that covers a vast field of ideas and ideologies. Post-enlightenment? For sure. Escape route? Try portal.

In my view, it’s not a matter of either/or (armour/enhancement; navigation/isolation) but both/and. Technology offers the means of (re-)constructing the self and the world. The contemporary artist can contribute to this perspective by showing, amongst other things, what transformations of the world and of oneself are imaginable and conceivable. The value of the Internet in these endeavours should not be underestimated.

GL: I agree with this analysis. I would never underestimate the Internet. It certainly changes our communication abilities, but I see the ‘global brain’ strictly as a metaphor. The same goes for the ‘digital city’ (a project in which I have been involved) and other urban metaphors: they help us to understand the new. But both will eventually become irrelevant and die out.

PB: Roughly seven years into the ‘Internet Revolution’ there seems to be a broad consensus that the Net has essentially become a corporate led and dominated environment — although many question the way this viewpoint romanticises and misconstrues its early history.

GL: The future of the Net is still open, at least on a conceptual level. On the other hand, I do think that a particular part of the Net is rapidly closing down. It is going to get harder for artists etc. to get their content to mass audiences. AOL, Microsoft and all the rest, from Bertelsmann to Murdoch-owned media, and all those with major stakes in portals, are spreading fear amongst content gatekeepers of missing the boat. Strong pressure is being exerted by the mainstream to cater to its tastes. At the same time lots of opportunities are opening up on the (meta) level of software, peer-to-peer networks, open source initiatives, streaming media. I don’t condemn the corporate take-over, it is a reality now — one which can be analysed and countered, perhaps even overruled and circumvented.

RA: Agreement again! It was ever so since the early years, always a conceptual challenge; the circumvention of imposed norms, countering corporate strategy, creating new models and metaphors — this has been the name of the game since we first started using networks over twenty years ago.

Let’s start from where we seem to agree: in itself the Net is nothing, can do nothing. But I think it is not too simplistic to suggest that it can be useful to human development from three points of view: as a channel, as an instrument, and as an environment. As a channel it can link minds, as an instrument it can enhance systems, as an environment it can accommodate new kinds of community. The degree to which this affects consciousness, biology and social organisation depends on its relative ubiquity, the subtlety of its interfaces and its information-bearing capacity; and above all perhaps its affordance of interactive and transformative process. After the ‘seven years’ that Pauline mentions, the rate of progress is indeed disappointing.

SD: I have to say that I am far from writing off the Internet and the complex of networked communications as "essentially a corporate-led, corporate-dominated environment". There is such tremendous hybridity within the Internet, for one thing. Secondly, we are at a moment of confrontation, where there is a struggle for power between groups and organisations over the nature and design of networked communications. Thirdly, there is in fact, due to all of these processes, more access to the Internet as a space where struggles over consciousness, over what is public, what is individual and what is collective, can — and do — occur. Fourthly — and this is a critical point — there is a promising collective praxis bridging activism, intervention, pastiche and new artistic forms: "Radical corporations in an era of corporations," as RtMark puts it.

Perhaps my final point in this list is that, with increased access, I see cultural praxis of a sophisticated nature emerging from aboriginal groups on the Internet (there is a major movement towards aboriginal Internet radio, for example). Life on the margin may not necessarily be about being a provocateur of activism or change.

Playing for what is international (rather than global), shared (rather than mass-culturally enforced), local and specific (and often the most resonant) is critical. And as Roy suggests, Internet culture is highly performative, and hence still allows for this play, invention and mobility — all of which are survival needs for the 21st century. In this sense, and without any reference to a utopian project, there is a dispersed potential for new types of identity formation. Groups use the Internet to remap local power — to create bridges within their urban spaces. Through these different pathways people make fundamentally different kinds of meanings from the same knowledge.

This also leads me to language: I am fascinated by the tension between the invention of words, the reclamation of words and the repurposing of words. The Net has of course been a location for rampant appropriation and invention of this kind. This is particularly true of a term like ‘community’ which, embraced by the liberal Left as well as by identity politicians in the late 1980s and early 1990s, was also a child of American violence in Central America and Vietnam, and enabled the reciprocal gating of spaces to function, ultimately, as a protection against difference.

Looking at the last ten years, there have been a number of preoccupations that have served as lightning rods for Internet development. First and foremost was the emergence of new social formations and communities, while the outlaw imagery of the hacker dominated the Net — the hacker, actually a product of military and scientific culture, was the last of the twentieth-century heroes. I see a fundamental irony in the fact that this figure is now becoming a trope of mass culture at the very moment that the World Wide Web is becoming a mass communications and entertainment system. The hacker ensures that discussions remain tainted with either utopian or dystopian visions of the future.

This is where I question allusions Roy’s made in the past to Heidegger, and ideas of the virtual space as a state of transcendent mind, one capable of healing the historical western rupture of time and space. I also question the associated notion of the Web as an expressive state, somehow positioned beyond representation and as experiential, not referential.

Understandably, a related concern of the early part of this decade was the problem of the virtual. How do we make meaning without suture? Is it through free fall? What are the politics of hallucination, loss of balance, ‘navigation’? And how do these align with the self-conscious analysis of language that was such a part of the 1980s? In ‘navigating’ — a frictionless passage through deterritorialised space — what are the social and political assumptions about the territory navigated, the identity of the navigators, their responsibility or relationship to the spaces traversed? And why the metaphor of territory at a time when post-colonial discourse was unravelling power relationships and their presence within metaphor?

Cyberculture was the last frontier; the problem of virtual experience connected directly to the final ground of concern, the body. I think this is a discourse we need to remain tightly engaged with, so to speak. The body as present, not transcendent; the body as desirable.

As I have implied, until recently cyber theorists disdained identity politics, an ironic stance for members of a subculture created by hackers and academics. At the same time, at least temporarily, chaos theory and fractal mathematics became panaceas for social and economic analysis in a time of crisis, precisely at the moment when other universal and determinist theories, such as Marxism and structuralism, had hit the dustbin of historical materialism.

The Net is still an environment shared by effective left-wing solidarity machines and the neo-right electoral apparatus. It substitutes for a public space subject to erasure and incursion, and as such offers rich ground on which to consider the potentials of subcultural subversion. This means the level of conflict on the Internet has in fact intensified.

GL: Yes, it’s unbearable to live without anchors and everyone needs his or her own church — that seems all too human. But I can also see a flaw in identity-based community work.

Today, the underlying idea is one of self-organisation. No identity but one’s own: cultural apartheid in the original (Dutch/Boer) meaning of ‘separated development’. We can be happy about the identity-building of the Other, even encourage its expression and support it morally and financially, but that’s it. A context of plenty is likely only to increase this drive towards ‘sovereignty within one’s own circle’ — another of these Dutch ideas.

Arts, culture, politics will not be constitutional, but recreational — relegated to the after-hours of society. At the same time, this does mean that we can expect the discussions about operating systems, interfaces and censorship to increase. That’s what people on a global scale will share.

SD: I do agree with this, mostly. After all, look at late-night US reality TV and you will see that pleasure resides not only in self-help or ‘sovereignty within one’s own circles’ but also very much in the voyeuristic access to the Other. Pain and gain. However, I don’t think complete commodification is here yet — rather, there are different forms of commodification: some by big capital, some by little, some by para-economies.

PB: This is very interesting: the tug and pull between representations of identity and difference as either incarcerating or expressive and liberating. This ‘own circle sovereignty’ equates with a honing down of market demographics into ever finer slices. In the offline environment, for example, large-scale publishers have recently started acquiring special interest magazines they previously wouldn’t have touched (e.g. in the Hobbyist category) as vehicles with which to then go online — an environment they clearly see as totally determined by very specific user preferences. If we’re to face the fact that we’re tribal in our cultural consumption, then where are the areas of properly experimental and dialogic commonality?

GL: At some stage, the new media arts system, consisting of festivals, exhibitions, discourses and a tiny, vital social life, may disappear altogether. The current implicit ‘civil war’ between the so-called contemporary arts mafia and the much smaller new media arts scene may soon lose its meaning. We can already see that happening with video.

What is missing now, in my view, is an understanding of the new position of new media arts and its telematic experiments in a wider context. I wonder why Roy is often so preoccupied with the role of the engineer: what makes them such an interesting species? Why does he always leave out the term media and narrow technology down to a hard-core science? I think this fascination with the (exclusive) laboratory culture is something of the past — technology has spread now, it’s in the hands of people.

RA: Actually, Geert, I largely agree with you. My take on science comes from the metaphors and models of scientists working at the edge of their discipline or breaking new ground between fields. Scientists as a whole may be more institutionally constrained than we are but the radicals really are wild. In fact, I’m not sure it is so very different in the art world. Our Damien Hirsts and Tracey Emins also come from extremely tight ‘academic’ frameworks working within quite precise parameters of the ‘new’ (mostly à la late sixties and seventies shock/schlock).

My search for ideas takes me into many realms that could be classed as scientific but stand way outside the validated research establishments. That takes me also across cultures and back in time, often in what is usually deemed/damned as esoteric.

SD: This context, or complex of contexts, has been in crisis for a long, long time: festivals, galleries, museums, artist-run centres too. The crisis is one of historical function. Radical museology has most recently been about absorbing its own institutional critique into presentation. The museum becomes a site for the reinterpretation of collections and repositioning of subjectivities. This is happening at the same moment as the blockbuster show, which is a return to the museum of the 19th century and before — as a place of spectacle, mostly about the Other.

Now the spectacle is the spectacle of creativity in a time without time. Some of my cynicism about the emerging institutional presence of video is due to video in the museum being the video of spectacle. Why haven’t museums and galleries become broadcasters, or rather narrowcasters, streaming an archived collection? Why do they care so much about their real estate as opposed to their social agency?

PB: Roy, I’m very bemused by your characterisation of Tracey Emin and Damien Hirst as coming out of the tight strictures of an academy.

RA: It’s neo-schlock based in an uncritical reverence for commodities. It’s the déjà vu of tableaux and frozen moments that is so depressing; the imperialism of the object. What could better support the centre right cause? They juggle ‘signs of the New’ while eschewing any kind of real creative innovation.

SD: Why, though, is the Right so able to reach consensus where critical thinking is not? What should a cultural program look like now — a nice fractured program?

GL: The Right is not in power, I would say. Neither is the Left. These are outdated categories. I know, this statement is a banality but it is really true! So much of the political Will is focused on harnessing the fears and desires of the ever-growing middle class. That’s the Third Way. So we have to understand the secret, unconscious streams. What do they want? A normal, decent life? Less work, more leisure? Basic infrastructure? Public facilities? For me that’s where a fractured program would start. Reinventing the public sphere(s), on the edges of the (nation) state and the (global) market.

RA: Tapping into "the secret, unconscious streams". The artist’s role as always. But ‘reinvention’ could sound rather top-down in conception, whereas presumably we all look to the artist (inter alia) as seeding new culture(s) rather than laying out master plans or blueprints for others. My interest is in the artist contributing to life at the edge of the Net, which in many ways corresponds to the edges of the (nation) state and the (global) market. In my view, this is where a ‘moist’ culture — where dry computational systems converge with wet biology — will likely emerge.

GL: Reinvention is not at all top-down and I am not sure if artists have a higher consciousness and greater sensibility in this field: they’re not such meta humans.

RA: What does it mean to say everyone is an artist? Do we mean in potential or in practice?

In many places the semiology of the public sphere — terminology like ‘public library’, ‘museum’ and ‘town hall’ — needs a makeover. It used to speak for authorised knowledge, classification, boundaries, centralisation. All that is being continuously rethought, of course. Public education, too, may need to give way to new forms of accessing and navigating knowledge. As much as privatised education is abhorrent to me, so the massification of public education has been disastrous (in the UK the whole curriculum is prescribed from the centre and endlessly and rigorously enforced, assessed, and subject to accountability, to the degree that over 40 per cent of time is taken as sick leave by desperately stressed-out teachers). Net culture shows signs of eliciting new possibilities for education: where it deals more with the navigation of knowledge fields than with the ordered ingestion of authorised blocks of knowledge, it could be brought forth and cultivated.

What is truly dangerous is that, in between the failure of public education in the States (for example) and the dithering uncertainties of political will regarding cyberspace, the private operators are moving in with truly crass ideas about commercialising education to remove the ‘burden’ from local authorities. So neither public education (to the extent that it means centralised or uniform) nor commercial education will do. Some visionary pragmatism is needed here.

PB: In the UK, the term ‘creative industries’ is now commonly used in the spheres of media, government, education and economic policy to bracket a previously unrecognised cluster of activities which, although diffuse and not obviously lucrative, taken together contribute significantly to national income. The term is a special favourite of British Culture Secretary Chris Smith and has begrudgingly been adopted by many working in the cultural field. As such, it forms a cornerstone of UK arts funding policy — and education. What role do you think this convergent area, in which what used to be called ‘applied’ and ‘fine’ arts coexist, has for government and politics?

GL: If it exists in the first place, this ‘emerging’ phenomenon of ‘creative industries’ has come into being in spite of government policy, not because of it. Politicians should be ashamed of themselves and shut their mouths. In fact, quite the opposite is true — governments are now fighting over the accumulated symbolic capital of cultural production, claiming it as a result of their homeopathic surgeries in the cultural scenery. This is not only a question of window dressing and appropriating the work of countless cultural workers who have never seen any reward (or have been paid too little for a very long time); there is also a serious lack of vision and willingness to take risks. Third Way policies are not changing this. Their logic is: you take the risk and we claim the result.

Roy Ascott <roy_ascott AT compuserve.com>, Sara Diamond <sara AT banff.org>, Geert Lovink <geert AT xs4all.nl>, Pauline van Mourik Broekman <pauline AT metamute.com>

Related sites: The Banff Centre for the Arts [http://www.banff.org], Nettime [http://www.nettime.org] and CAiiA STAR [http://www.caiia-star.net]

Mute in Conversation with Nettime (Pit Schultz) (Digital Publishing Feature)

By Pauline van Mourik Broekman, 10 January 1997

Source URL: http://www.metamute.org/editorial/articles/mute-conversation-nettime-pit-schultz-digital-publishing-feature

Could you tell me something about how nettime was started, and how it has developed since then?

nettime started as a three-day meeting in a small theatre in Venice during the Biennale in ’95: a meeting of media activists, theoreticians, artists and journalists from different European countries (Heath Bunting, Geert Lovink, Diana McCarty, Vuk Cosic, David Garcia, Nils Roeller, Tomasso Tozzi, Paul Garrin, and many more). We developed the main lines of a net critique along the topics of virtual urbanism, globalisation/tribalisation and the life metaphor. It also became obvious that it was necessary to define a different cultural (net)politics than the one Wired magazine represented in Europe. It was a private and intensive event, and in a way it defined the ‘style’ in which we critique and discuss issues on nettime. Nettime is somehow modelled on the table at that meeting, which was covered with texts, magazines, books, whatever we had to offer the group. It was the start of our ‘gift economy’ of information exchange. Today the list has nearly 300 subscribers and is growing constantly, by around 10 a week. We do no PR and the list is semiclosed, which means new subscriptions must be approved.
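[A purely illustrative aside: the ‘semiclosed’ arrangement described above (anyone may ask to join, but a moderator must approve each subscription) can be sketched in a few lines of Python. Everything below is hypothetical; nettime itself ran on the ordinary mailing-list software of the period, not on this code.]

# Hypothetical sketch of a semiclosed list: subscription requests
# queue up until a moderator approves them.
class SemiClosedList:
    def __init__(self, moderators):
        self.moderators = set(moderators)  # addresses allowed to approve
        self.subscribers = set()           # approved members
        self.pending = []                  # requests awaiting approval

    def request_subscription(self, address):
        # Anyone may ask to join; nobody joins automatically.
        if address not in self.subscribers and address not in self.pending:
            self.pending.append(address)

    def approve(self, moderator, address):
        # Only a moderator can turn a pending request into membership.
        if moderator in self.moderators and address in self.pending:
            self.pending.remove(address)
            self.subscribers.add(address)

# Example flow (hypothetical addresses):
#   lst = SemiClosedList(moderators=["pit@contrib.de"])
#   lst.request_subscription("reader@example.net")
#   lst.approve("pit@contrib.de", "reader@example.net")

On this model the list grows only as fast as its moderators approve, so the roughly ten new subscribers a week mentioned above would amount to ten approvals.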

Were you intensely involved with computers?

My first computer was an Atari 2600 TV game, then a ZX81, a C64 and an Amiga 1000; I switched to Mac when I began with DTP in the Botschaft group after ’90, used DOS/Linux for the Internet, and ended up with a DX66 under Win95, mainly to run Eudora on an intranet. So these machines document certain phases in my life, but they don’t determine them. I also studied computer science for a couple of years, but it was not what I expected, which was a more conceptual approach that reflected the development of software on a much broader, maybe cultural, level.

...and net culture

I was involved with The Thing BBS network from ’92 to ’94, the heyday of ASCII and of a text-based internet of MUDs and MOOs, before the Web. At the same time I was working with the group Botschaft. There were also some exhibitions of low-media art, a communication performance in the TV tower in Berlin, meetings, and long-term projects in the public sphere: an installation with Daniel Pflumm in a subway tunnel, a collaboration with the group ‘handshake’, which later became Internationale Stadt, and with the Chaos Computer Club, with which Botschaft shared office space. After a Bilwet event we organised, I started to work with Geert Lovink, which was a truly new phase of work.

...as an artist

Yes and no. I got a grant and did exhibitions, but I always had problems accepting art as a ‘closed system’. And I have to emphasise here that nettime is a group project: it is not a ‘piece of individual art’ but a medium formed by a collective subjectivity, a sum of individuals. I moderate it, and it has its aesthetic aspects, but you don’t have to call me an artist for that.

...before you started the list, and how do you think that has affected how nettime was set up?

Well, you can call it a continuation of my art practice. But it functions without being named art. In ’94 I tried to begin with projects on the Web, especially the Orgasmotron Project (a database of recorded brainwaves of human orgasms), which reflected the early euphoric times of ‘first contact’. With Botschaft e.V. in ’93-’94 we did the Museum fuer Zukunft, a group project and database of future scenarios, ideas and views, but during these projects it became clear that I needed a deeper understanding of the collaborative, theoretical and discursive aspects of cyberspace to continue. During this time I also gave up doing installations in defined art spaces. Generally, after a euphoric entry phase I got extremely bored and disappointed with what was and is happening in the art field. My main interest remains what Andreas Broeckmann calls ‘machinic aesthetics’, a field between the social, political and cultural economy of the so-called ‘new media’. So I was happy to meet Geert and, through Venice and a series of other meetings, a group of people with shared interests that we’re trying to bring together on the nettime list.

It seems that nettime has gravitated more towards net-political and -philosophical discussion than that directly to do with 'art'. What role do you (and Geert Lovink?), as (a) moderator(s), have with regard to that?

Art today, especially media art, is a problematic field. When I listen to music, it may happen that I don’t like it, but it comes through the radio. That’s how art appears to me. You can switch it off, but there is still a lot of music around. So much for art. As for moderation: it is also a contradictory role. The less the moderator appears, the better the channel flows. It is, of course, this power-through-absence thing, but we hope that we handle it carefully and in a responsible way, with the continuous group process in mind. Power flows through networks, and you cannot switch it off. From different sides, Geert and I have an interest in working with the dynamic of the aesthetic contra the political field. There are many fault lines and frontiers. One of them seems to be the art system, which still has some kind of Alleinherrschaftsanspruch (a claim to sole rule) in the symbolic cultural field. This changes through new media, and even if new media will not make the term ‘Art’ obsolete, there is something about the paradox between media and art, or media art, that I find deeply problematic. Both have components of totalitarian systems of representation. There is the chance that new media creates channels to redirect the flow of power. That’s what nettime is made for: an experimental place for (re)mixes, something I missed for a very long time. Never perfect and always ‘in becoming’; not explicit, not descriptive, but performative and pragmatic.

Both Geert and I have our own reasons to distance ourselves from today’s ‘art discourse’. You can call nettime a political project in terms of the real effects we try to trigger, in terms of conflicting debates reflecting and criticising the economic and social implications of the ‘digital revolution’. It is a philosophical channel in terms of describing a certain ‘condition’, while accessing and applying traditional knowledge, including the ‘postmodern’ stuff. It is an aesthetic process in many respects, developing a collaborative writing space and experimenting with modes and styles of ‘computer mediated communication’. Finally, we have the luxury of silence and don’t advertise, so we don’t need big investments in labels and surface; it spreads by word of mouth, and the footer ‘cultural politics of the nets’ can mean many things. It’s about clouds. There is this ‘field of virtuality or potentiality’: multiple contexts and personas, interests and intensities which, like the social aspect, the time aspect, the knowledge and news aspect, make nettime something that modulates a flow of heterogeneous subjective objects. It has an existential aesthetic of living with nettime (including the group, events and projects which grow here), a collective and singular info-environment which exists without the need to be named art.

At the discussion at DEAF96, I think you described nettime as a 'dirty' ascii channel; how 'dirty' or unmoderated is it?

Dirtiness is a concept here, especially for the digital realm, which produces its own clean dirtiness: take the sound of the digital distortion of a CD compared to the analogue distortion of vinyl, or all the digital effects imitating analogue dirtiness, which means, in the end, a higher resolution, a recursive, deeper, infinite structure. I used the concept because of its many aspects. It means affirming the noise aspect, but only in order to generate a more complex pattern out of it. It does not mean ‘anything goes’, or a self-sufficient ethic of productivity. It is slackerish in a way: it slows down, speeds up, doesn’t care at certain places, just to come back to the ones which are tactically more effective... there is a whole empirical science behind it, how to bring the nettime ship through dark waters... how to compress and expand, how to follow the lines of noise/pattern instead of absence/presence...

(In fact I have pushed the big red button of moderator mode only once, after a period of technical errors and the unfocused dialogue that followed.) The phenomenon is, and I think this is not such a rare thing, that a group of people in a repetitive, communicative environment begin to filter a field of possible ‘communication acts’ in a certain way, quasi-machinically. You don’t have to be professional or especially skilled in the beginning. The production of ‘information’ along the borderline of noise means constantly refining a social context, maybe an artificial one, what some call immanent; I mean one with rules which are self-evident and interdependent in a dynamic way. The list software sends a kind of basic netiquette to new users, but this affects only some formal factors. One is that we decided to avoid dialogues, without forbidding them. Nettime is not a list of dialogues of quote and requote, but more of a discursive flow of texts of different types, differentiating and contextualising each other. On the net this is called ‘collaborative filtering’ or, earlier, ‘social filtering’.
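[Another illustrative aside: ‘collaborative filtering’ is meant here in its original, social sense (texts surface because members independently select and forward them), not in the later recommender-system sense. A toy Python sketch of that social reading follows; every name in it is hypothetical, and the actual list relied on human selection and simple list scripts, not on any such code.]

# Toy model of 'social filtering': a text surfaces once enough
# distinct members have independently forwarded it to the list.
from collections import Counter

def socially_filter(forwards, threshold=3):
    # forwards: iterable of (member, text_id) pairs recording who
    # forwarded what; returns the text_ids forwarded by at least
    # `threshold` distinct members.
    distinct = set(forwards)  # each member counts once per text
    counts = Counter(text_id for _, text_id in distinct)
    return [t for t, n in counts.items() if n >= threshold]

# e.g. socially_filter([("a", 1), ("b", 1), ("c", 1), ("a", 2)]) -> [1]

The point of the toy is only that the filter is immanent to the group: there is no editor object anywhere in it, which is roughly the arrangement the answer describes.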

Dirtiness here means many things, first of all the absence of purity: you always have mixtures, ‘agencements’ (assemblages)... but this becomes too trivially ‘postmodern’. There is the constant commentary, forming a socially defined body of knowledge, and of course a field where power is generated out of undifferentiated forces, which includes the position of the moderators, or of other very active participants, in defining where the scope of the flow tends to go. But actually, anyone can post whatever she likes. This risk, which often leads to a situation of overflow and re-orientation, is also the productive freedom of nettime. Another aspect is the limited set of signs: the Euro-English or net-pidgin of using English as a non-native speaker, the reduced character set of ASCII, the minimal features of the Perl scripts which run the mailing list. Finally, for the authors there is always a multiple aspect of why to write, and for the readers, why to read nettime. You definitely have to filter; I guess nobody, including me, reads every mail from start to finish. The sender has the chance to actively select texts she finds on the net and forward them. The author can pre- or republish texts, send pre-versions, test certain ideas, or sample others. On the material side, there are the printouts of ZKP, readers which come out in small numbers during conferences. The process of inscription combined with a filtering process functions a bit like a news ticker, if you want a comparison from the publishing world.

Two other pertinent issues that came up at the DEAF discussion were size and finance. If online journals or lists are akin to creators of community, where discussion can be catalytic due to the small size of the group and to many of the contributors also knowing each other ‘in real life’, does their effectiveness decrease beyond a certain size (I think Geert mentioned a couple of hundred)? Although nettime is still a ‘closed’ mailing list, its subscriber base has grown; have you adapted your methodology?

As you can see, nettime is still going well. There seems to be a self-regulation process on the side of the contributors. There is the growth, around 10 new subscribers per week, mostly by word of mouth, which leads to a certain social consistency; and there is the way texts get selected and produced and find their way to the list. The ‘group’ circumscribes a network of real-life relationships, a network of shared interests, and a network of contextualising documents. This happens in relation to the ‘outside’, to the ‘wideness’ of the net, and to the ‘deepness’ of the local places where people work and live. Every document represents a vector through time in a social context, a discursive environment with many levels of reference but a relatively concrete and simple surface: ASCII text. The patterns which come out of the simple practical rules of a mailing list are complex and dynamic enough that we feel no urge to experiment with multithreaded, hypertextual, multimedia environments, even if we think about certain extensions of the kind found in intranet or groupware solutions in the corporate world. As the saying goes: never touch a running system. I think the next level will evolve through a certain economic pressure: cases where texts reappear somewhere without permission, or where the unwritten norms are subverted by other ‘content machines’ running on other principles but sharing similar fields of issues. There is a need to take the chance and experiment with new horizontal networks of producers, to respect the collaborative editorial work of a user community and, most of all, to think about financial models in terms of a sustainable quality of discussion, which includes the ‘currency’ of trust and credibility.

And then regarding finance, which obviously has enormous effects on how things can run. Nettime is a ‘no budget’ operation; what are the advantages and disadvantages of this, and how do you manage to keep going?

First I have to say that your question already has certain implications. It may seem natural to put anything you do into an economic model and ask: what do I get for it? what do I pay for it? But it cannot even be said that such an exchange economy runs effectively on money. There is clearly a drive to profit from new media, and of course money must be there as basic funding, but the goal of nettime is not financial profit. One easily arrives at this point in a defensive position, or a dogmatic one, fighting against the all too present, not to say totalitarian, system of a worldwide integrated capitalism. Even after Marx there are social fights, and especially within the new media you have to face, as in the art world, certain problems, which often mean: make money fast, but do bad work; or do good work, but don’t get good money. There is a certain kind of luxury today which is somehow overcoded by ‘slackerdom’, and which is contrary to the work ethic of the yuppie or the political activist. On a pragmatic level, we do not have to talk about economics alone, but we do have to develop a working model: a constant fight against the risks of exploitation, burn-out and sell-out.

Finally, we would have to change nettime’s micro-economic, very basic structure if we forced its commercialisation. To be clear: for mailing lists especially, but also for many other high-content sites, it is not at all clear how to finance them in the long term. The time of the hype might be over soon, and then you have to face a centralising shake-out of the kind we already know from the history of radio and TV. On the other hand, I do not believe in the concept of autonomy. It leads to a sad double life: you might live on state grants, or you might have to do a stupid job during the day. In between there are many shades of grey, and among them is the possibility of alternative online economies which may one day reintroduce less-alienated semiotics into the circulation of capitalism.

You’ve talked about the importance of editors being sensitive to the exchange economies of the nets; these many economies intertwine, they are not separate, are they? Highly commercial and competitive ones share technologies, content and ‘participants’ (for want of a better word) with ones that are more clearly like the potlatch economy you refer to. In practice, what has your experience been of keeping nettime independent within this situation?

These economies intertwine, but not without friction. From the view of the poor, there is the need to disrespect certain economic barriers, for example licences and copyright. That’s what is happening in many Eastern countries. The new markets are not functioning as they promised to, at least not for all. There are still many chances to use new technology as a tool to reach more independence, but it also gets used the other way, for a huge ‘Darwinist’ shake-out. And as one can see with Microsoft, it is not at all the best who survive. So I strongly resist any logic of pre-affirming the situation. Potlatch is only a circumscription of a kind of exchange economy which is pretty common, as soon as you have the privilege to practise it. I am sure that we will face models which are based on a certain local exclusion of the money economy. Any family, community or friendship is based on such models. Finally, you need the friction, the potential of mixed economies, for a vivid and creative market, at least from what I understand about markets.

This links with one of the ongoing discussions on nettime, the one to do with libertarianism or neoliberalism and social justice. It has, over time, involved the posting of extensive ‘dialogues’ on the role of Wired and the demonisation of the State, and has been presented as an attempt to start generating a productive, European contribution to the development of ideas on techno-cultural political organisation for the future. Is this right, and how do you feel it is going?

You can describe it like that. But I don’t like to make predictions here. One thing nettime does is critique; this means it reflects and constructs the present. Of course there are strategies, and part of a strategy is that one should not talk too much about it. The important task is not to give up in the face of the homogenising, centralising and alienating networks of a globally integrated capitalism, but to use these very ethical-political techniques as ‘cultural’ ones: to push against what is forced on us as ‘economic factors’ in favour of a necessary quality.

Contact: <pit@contrib.de>, <geert@xs4all.nl>

Reading:

news://alt.nettime or news://news.thing.at/alt.nettime

Pauline van Mourik Broekman <pauline AT metamute.com>