
The Age of Media Autonomy

By Felix Stalder, 4 July 2003

The advent of the GUI browser a decade ago enabled the first serious popularisation of the internet. Its use has grown exponentially since, and has usually been accompanied by grand claims of democratisation. Yet anxiety about web platforms’ capacity to act as mechanisms of data capture and user control has grown just as fast, and there is now a proliferation of autonomous media infrastructures attempting to provide alternatives. As part of a collaborative investigation into the rise of such systems, Mute is running a series of articles asking how infrastructure defines, even predetermines, communications practice. It will look into practical examples, and analyse how the technical, social and economic issues they attempt to tackle create tensions that can prove as destructive as they are productive.

Here, Felix Stalder, co-moderator of the ‘mailing list for net criticism’ Nettime and partner in the free software consultancy Openflows, presents his own view on autonomous media infrastructures and the necessity of developing them under the current regime of consolidated mass media.

 

BACKGROUND

Over the last decade, the landscape of mass media has been profoundly transformed. There has been a massive consolidation into the hands of fewer than ten transnational giants (most importantly AOL Time Warner, Disney, Bertelsmann, Vivendi Universal, Sony, Viacom and News Corporation). Together these companies own all major film studios, cinema chains and music companies, the majority of cable and satellite TV systems and stations, all US television networks and the bulk of global book and magazine publishing. Barely 20 years ago, most of these companies, if they existed at all, didn’t even rank among the 1,000 largest firms in the world. Today, despite the recent decline in their market valuations, these large media firms rank among the 300 largest corporations in the world.

Meanwhile there has been a significant technological convergence: previously distinct production environments and delivery channels have collapsed into one another, so that it is now normal to listen to the radio on a computer and receive news headlines and images on a cell phone. Single companies now commonly control the entire chain from production to distribution across various media. Consequently, the content delivered to consumers has become increasingly homogenous. The dependence of all mass media, private or public, on advertising revenue creates the need to attract the one market segment most interesting to advertisers: the young, affluent, predominantly white middle class.

The result is a homogenised and self-referential mass media space as parochial in its content as it is global in its form. Largely closed off to issues unattractive to its narrow target audience and to opinions critical of its structure, mass media has become a powerful enforcer of conformity on all levels, reinforcing stereotypes of normality and marginality. Only those who profit from the current system – the small number of parties among whom power rotates – are allowed to speak.

This latter point was addressed heavily during the 1990s. Minorities tried to get ‘fair’ representation of their particular identities in mainstream media. To some extent this was successful, as some of them – gays, for example – were discovered as profitable market segments and easily integrated into the advertising-driven logic. TV became more ‘colourful’ at the same time as the diversity of opinions it aired diminished. The ‘politics of representation’, by and large, failed as a progressive strategy. The other approach – pursued mainly in the US – of constructing alternative information channels on cable TV or radio has been only somewhat more successful, not least because these channels could reach only relatively small local audiences (with the exception of NPR and PBS) and because the economics of mass media production are not favourable to low-budget projects.

Against this background of homogenisation, a mass media system more closed than ever and controlled by powerful gatekeepers, able to restrict what can be transmitted through it, some key new media forms are emerging. Central to their development is the facilitation of new forms of collaborative production and distribution. In most cases, these media forms are enabled by the structure of the internet – the ‘network form’.

INTERNET: ARCHITECTURE AND CODE

The internet’s potential as an open media space – in which access to the means of production and distribution is not controlled centrally – is based on the particulars of its design (architecture) and its implementation (code), as Lawrence Lessig has argued extensively. On the level of architecture, the net’s ‘end-to-end’ principle has traditionally pushed ‘intelligence’ to the periphery, ensuring the routing of traffic from one end to the other and treating all traffic indiscriminately. Only the machine at the periphery (where someone is watching a video stream, for example) does the critical work of differentiating between different kinds of data. To the router responsible for getting the content across, it’s all the same: an endless stream of packets in which only the addresses of destination and origin are of interest.
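
To make the end-to-end idea concrete, here is a minimal Python sketch – purely illustrative, not the code of any real router – of a forwarding function that consults only the addresses in a packet’s header and never touches the payload:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str        # origin address
        dst: str        # destination address
        payload: bytes  # opaque to the network: a video stream, an email, a format invented tomorrow

    # Toy routing table: destination prefix -> next hop (all values hypothetical)
    ROUTES = {
        "10.0.": "router-a",
        "192.168.": "router-b",
    }

    def forward(packet: Packet) -> str:
        """Pick the next hop using only the destination address; the payload is never
        inspected, so an Indymedia page, a CNN page and an mp3 file are treated identically."""
        for prefix, next_hop in ROUTES.items():
            if packet.dst.startswith(prefix):
                return next_hop
        return "default-gateway"

    print(forward(Packet(src="192.168.0.7", dst="10.0.3.2", payload=b"<html>...</html>")))  # router-a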

These features, key elements of TCP/IP (Transmission Control Protocol/Internet Protocol), are now under contestation. IPv6 (Internet Protocol, version 6) allows for the creation of ‘intelligent’ routers providing for a distributed regulatory environment – a danger given that ownership of the net’s physical layer is also in the hands of relatively few corporations. A powerful coalition of business and security interests is working hard to gain control over this open infrastructure and, effectively, close it off. So far, they haven’t been successful, and end-to-end delivery still guarantees equality of transport (if not equality of expression) across the internet. This applies to content within a given format – an Indymedia web page versus a CNN web page, for example – but also across formats – an email message versus an mp3 file – and, very importantly, extends to currently unknown formats.

In order to take advantage of this it is important that the protocols – the language in which machines speak to one another – are freely accessible. The internet’s early engineers understood this and consciously placed the key protocols (TCP/IP, SMTP, HTTP etc) in the public domain so that anyone could build applications based on them. It is the combination of a network that does not discriminate about which content it transports and the free availability of the key protocols that has allowed many of the most interesting innovations of the internet to be introduced from the margins without any official approval or any central authority – be that a standard-setting body like the W3C or ISOC, or a governing body like ICANN.

SOME PROTOTYPES: OPEN PUBLISHING

Among the first and still most advanced projects for media infrastructure are those focused on open publishing. The bulk of the published content is provided by a distributed group of independent producers/users who follow their own interests, rather than being commissioned and paid for by an editorial board and created by professional producers.

There is a great variety of open publishing projects, a few of which will be discussed later on, but they all have to contend with a fundamental problem: on the one hand, they need to be open and responsive to their users’ interests, or the community will stop contributing material; only if users recognise themselves in the project will they be motivated to contribute. On the other, the projects need to create and maintain a certain focus. They need to be able to deal with content that is detrimental to the goals of the project. In other words, ‘noise’ needs to be kept down without alienating the community through heavy-handed editorialism.

The strategies of how to create and maintain such a balance are highly contextual, depending on the social and technological resources that make up a given project.

EMAIL LISTS: NETTIME

The oldest and still most widely used collaborative platforms are simple mailing lists. Among these, one of the oldest and most active is Nettime, a project I know intimately, having been a co-moderator for the last five years. It was started in 1995 to develop a critical media discourse based on hands-on involvement in and active exploration of emerging media spaces. Its original constituents were mainly European media critics, activists and artists. Over the years, this social and regional homogeneity was somewhat lost as the list grew to close to 3,000 participants.

An email list is, fundamentally, a forwarding mechanism. Every message sent to the list address is forwarded to each address subscribed to the list, so everyone receives the same information. This is a broadcast model, with the twist that everyone can be a sender. Unless the participant base is socially homogeneous and more or less closed, noise will be an issue, if only because different people have different ideas of what the project should be. However, for individual subscribers there is no effective way to modulate the flow of messages to make it conform to their idea of the project. The issue of moderation, in some shape or form, is fundamental to all community-based projects, as it raises the question of how to enforce community standards. The platform of the email list offers an extremely limited set of choices for how to implement moderation. The only way is to have all messages sent to the list address go into a queue. The moderators who have access to this queue can then decide which messages get forwarded to all subscribers and which do not. The platform differentiates between only two social roles: normal subscriber and moderator. There is nothing in between. Subscribers see only those messages that moderators approve. Due to the broadcast model of the information flow, the moderation process needs to be closed.
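
How little room this leaves for social nuance can be seen in a rough Python sketch of such a platform (a hypothetical model, not the code of any actual list server such as Mailman or Majordomo): there are only two roles, every message lands in a closed queue, and only a moderator’s approval broadcasts it to everyone.

    # Hypothetical sketch of a moderated list: two roles only
    # (subscriber, moderator), one closed queue, broadcast on approval.

    class ModeratedList:
        def __init__(self, moderators):
            self.subscribers = set()
            self.moderators = set(moderators)
            self.queue = []          # messages awaiting a decision
            self.outbox = []         # stand-in for actual mail delivery

        def subscribe(self, address):
            self.subscribers.add(address)

        def post(self, sender, text):
            # Every message, whoever sends it, lands in the closed queue.
            self.queue.append((sender, text))

        def review(self, moderator, index, approve):
            if moderator not in self.moderators:
                raise PermissionError("only moderators see the queue")
            sender, text = self.queue.pop(index)
            if approve:
                # Broadcast model: every subscriber receives the same message.
                for address in self.subscribers:
                    self.outbox.append((address, sender, text))
            # A rejected message simply disappears; subscribers never see it.

    nettime = ModeratedList(moderators={"mod@example.org"})
    nettime.subscribe("reader@example.net")
    nettime.post("writer@example.com", "Draft essay on media autonomy")
    nettime.review("mod@example.org", 0, approve=True)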

Of course, this creates conflicts over what the community’s standard is, often expressed as an issue of censorship. Nettime, rather than upgrade the platform, opted to deal with this problem by creating a second, unmoderated channel, also archived on the web, to serve as a reference, so that everyone who wanted to could see all messages. The social sophistication of this technological choice was low: it addressed only a single concern, the lack of transparency in the moderation. In the end, the lack of technical sophistication can only be compensated for socially, by trust. The community needs to trust the moderators to do their job in the interest of the community. This ‘blind’ trust is checked by the moderators’ need to keep the community motivated to produce content.

COLLABORATIVE NEWS ANALYSIS: SLASHDOT

Slashdot – founded in September 1997 – is a web-based discussion board, initially populated by the softer fringes of US hacker culture but now with a global appeal, though still clearly US-centric. Unlike most such projects, it is owned by a for-profit company, OSDN, and has a small salaried staff, mainly for editorial functions, management and technical development. Slashdot’s culture has been deeply influenced by two of the central preoccupations of (US) hackers: hacking, that is, making technology work the way one wants, and a libertarian understanding of free speech. The two interests are seen as heavily intertwined and are reflected in the still ongoing development of the platform.

There is a sort of implicit consensus in favour of free speech in the Slashdot community, but this openness has consequences. Firstly, not all contributions are of the same quality, and most people appreciate a communication environment in which noise is kept at a sustainable level. Secondly, different people have very different ideas as to what constitutes quality and what level of noise is sustainable. The phenomenon of ‘trolling’ – posting comments just to elicit controversy – is highly developed on Slashdot and has fostered several subcultures with their own particular charms.

The first factor requires some moderation facility, the second requires that individual users can modify the results of the moderation to fit their own needs.

Like the good hackers they are, Slashdot’s developers favoured practical solutions over ideological debates – of the kind that still hamper most Indymedia sites when dealing with the issue of free speech versus quality control and community standards. They set out to create what is today one of the most sophisticated moderation mechanisms for open discussion environments. Basically, there are two rating mechanisms: one performed centrally, on the site, and one ‘decentrally’, by each user. A team of moderators, selected automatically according to the quality of their previous contributions, rates each comment multiple times. The resulting average constitutes the comment’s rank, expressed as a value between -1 and 5.

Each user can define individually what rank of messages to display – for example, only comments rated 3 and above. In addition, each user can white- or blacklist other users, thereby overriding the moderation done on the site, and publish what is called a journal, over whose content he or she has full control.
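
The division of labour between site-wide ranking and per-user filtering can be sketched in a few lines of Python. The names and numbers below are invented for illustration – Slashdot’s actual moderation code is considerably more involved – but the principle is the same: an averaged rank kept between -1 and 5, which each reader then filters and overrides locally.

    # Illustrative sketch of two-stage moderation: a site-wide rank per comment,
    # then per-user filtering that never changes what anyone else sees.

    def rank(ratings):
        """Site side: average the moderators' ratings and clamp to the -1..5 scale."""
        avg = sum(ratings) / len(ratings)
        return max(-1, min(5, round(avg)))

    comments = [
        {"author": "alice",   "text": "Insightful point about protocols", "ratings": [4, 5, 4]},
        {"author": "troll99", "text": "First post!!!",                     "ratings": [-1, 0, -1]},
        {"author": "bob",     "text": "Link to the actual RFC",            "ratings": [3, 2, 3]},
    ]

    def view(comments, threshold=3, friends=(), foes=()):
        """User side: filter by rank, with personal white/blacklists overriding the site rank."""
        visible = []
        for c in comments:
            if c["author"] in foes:
                continue                      # personal blacklist hides the author
            if c["author"] in friends or rank(c["ratings"]) >= threshold:
                visible.append(c["text"])     # whitelist bypasses the threshold
        return visible

    print(view(comments, threshold=3, friends={"troll99"}))  # one reader's choice
    print(view(comments, threshold=-1))                      # another relishes seeing everything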

Slashdot is highly user-driven, not only in regard to content but also in giving users the ability to determine what they want to see and how, without affecting what others can see. While one user may choose to see nothing but the most highly ranked comments within a particular category, another may positively relish seeing all posts in all sections. Slashdot has managed to create a forum with more than 500,000 users in which a comment is rarely, if ever, deleted (usually a court order is necessary for this) without it becoming the kind of useless mess into which the unmoderated nettime channel declined. This is largely due to the greater social sophistication of the platform and its flexibility in modulating the flow of texts.

PEER-TO-PEER NETWORKS

The particular openness of the internet allows not only applications that can be freely introduced within the framework of existing architectures, but also the creation of alternative structures either above or below the TCP/IP level. Collaborative distribution platforms take advantage of this by turning a decentralised client-server structure into a truly distributed peer-to-peer network. Changing the architecture that resides on top of the TCP/IP level is the approach taken by peer-to-peer (P2P) file sharing systems such as Gnutella and eDonkey. The problem for file sharing systems is less one of signal to noise, even though one of the counter-strategies of the content industry for disrupting these systems is to flood them with large junk files, introducing noise into a system that has otherwise been remarkably noise-free. The hostility of the environment to file sharing systems lies, rather, on the level of legality. Two key strategies are emerging to deal with this. The mainstream approach is to develop a system that keeps so-called illegal content out. Napster Inc., after losing a series of court cases, was forced to go in this direction, developing a system that would reliably keep out copyright-infringing material. Given the complexity of the copyright situation, this was a nearly impossible task; Napster was unable to satisfy the court order and completely disintegrated as a technical system and as a company. Others have stepped up to assume Napster’s mantle but have either suffered a similar fate or are likely to do so in the future. At this point, it seems simply impossible to create an open distribution system that can co-exist with the current restrictive IP regimes.

Consequently, most commercial interest has been refocused toward building closed distribution systems based on various digital rights management (DRM) systems. This does not mean that collaborative, peer-to-peer distribution channels have disappeared. However, their approach to surviving in a hostile legal environment has been to devolve to such a degree that the entity which could be dragged to court disappears. Without a central node, or a company financing the development, it is much harder to hold anyone responsible. Truly distributed file sharing systems like Gnutella are one approach, though significant technical issues remain to be solved before such a system becomes fully functional on a large scale.

Freenet (see Mute 17), the peer-to-peer network for anonymous publishing, has chosen another strategy. Here content is never stationary, in the way that URLs are stationary, but moves around from node to node within the network, based on demand. Consequently, its location is temporary and not indicative of where it was entered into the system. With all content encrypted, the owner of a Freenet node can reasonably claim not to have knowledge of the content stored on her node at a particular time, and thus avoid the liability of an ISP, which is required by law to remove objectionable content when it becomes aware of it. So far, the strength of this strategy of shielding the owner of a node from liability for the content stored has not been tested in the courts, as the entire system is still embryonic. However, it is at least an innovative conceptual approach to keeping the network open and robust against (legal) attacks.
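
A toy sketch of this principle – a simplification, not Freenet’s actual routing or encryption – might look as follows in Python: content is addressed by key, every node along a successful request path caches a copy, and rarely requested items are evicted, so the current location of a file says nothing about where it entered the network.

    # Toy model of demand-driven caching: not Freenet's real protocol, just the
    # principle that content migrates toward where it is requested.

    from collections import OrderedDict

    class Node:
        def __init__(self, name, capacity=3):
            self.name = name
            self.store = OrderedDict()   # key -> encrypted blob, ordered by recency of use
            self.capacity = capacity

        def cache(self, key, blob):
            self.store[key] = blob
            self.store.move_to_end(key)
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # least recently requested item drops out

    def request(path, key):
        """Walk a chain of nodes; when the key is found, cache it on every node along
        the way back, so popular content spreads and its point of entry is obscured."""
        for i, node in enumerate(path):
            if key in node.store:
                blob = node.store[key]
                for earlier in path[:i]:
                    earlier.cache(key, blob)
                return blob
        return None

    a, b, c = Node("a"), Node("b"), Node("c")
    c.cache("key-1", b"\x8f\x02...")            # inserted somewhere in the network
    request([a, b, c], "key-1")                 # after this, a and b hold copies too
    print("key-1" in a.store, "key-1" in b.store)   # True True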

COMMUNITY ARCHITECTURES: BOTTOMS UP

Changing the architecture that resides below the level of TCP/IP is the approach taken by the slowly developing wireless community networks such as London’s Consume. Wireless community networks substitute for the infrastructure of the commercial telecom firms a distributed infrastructure of wireless access points that route traffic across a chain of nodes maintained by a (local) community. This allows, at least theoretically, the creation of local networks that are entirely open (within the community) and have fewer of the traditional constraints, legal or bandwidth-wise, which characterise conventional network architecture.
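
The routing idea can be illustrated with a small Python sketch – a generic shortest-path search over a graph of neighbours, not Consume’s actual software: each node knows only its immediate neighbours, yet traffic can hop across the community-maintained mesh from any node to any other.

    # Illustrative multi-hop routing over a community mesh: each node knows only its
    # neighbours; a breadth-first search finds a chain of nodes to the target.

    from collections import deque

    # Hypothetical rooftop nodes and their wireless links
    mesh = {
        "rooftop-1": ["rooftop-2", "rooftop-3"],
        "rooftop-2": ["rooftop-1", "rooftop-4"],
        "rooftop-3": ["rooftop-1", "rooftop-4"],
        "rooftop-4": ["rooftop-2", "rooftop-3", "uplink"],
        "uplink":    ["rooftop-4"],
    }

    def route(mesh, src, dst):
        """Return a chain of nodes from src to dst, or None if the mesh is split."""
        queue = deque([[src]])
        seen = {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for neighbour in mesh.get(path[-1], []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(path + [neighbour])
        return None

    print(route(mesh, "rooftop-1", "uplink"))   # ['rooftop-1', 'rooftop-2', 'rooftop-4', 'uplink']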

Consume’s bottom-up approach, in which individual community members are encouraged to maintain their own nodes, has not yet come to full fruition. The technical hurdles have proved substantial for all but the most dedicated geeks. In an environment already saturated with connectivity, this has been (near) fatal: Consume has not yet managed to gain the critical mass needed to sustain a real community. A different approach was taken by Citywave in Berlin. The groups involved in this structure chose to rely on a commercial provider to install and maintain the wireless nodes, using the community as free beta-testers. However, in the prevailing harsh economic conditions, the willingness of the provider to support a non-commercial project with only limited advertising potential dried up quickly, and the project collapsed.

It is too early to say whether wireless community networks are doomed to become entries in Bruce Sterling’s dead media list or whether they will take off under the right circumstances. What they demonstrate, however, is the possibility of generating autonomous infrastructures at the hardware level. Such structures will be especially important in the environment of microcontrol generated by IPv6.

OUTLOOK

The potential of autonomous media is substantial. The mainstream media landscape is bland and excludes such a significant range of the social, cultural and political spectrum that there is a broad need for access to different means of producing and distributing media content. There is now real potential for the creation of a new model of media production/distribution not subject to the traditional economic pressures. The combination of collaborative, distributed modalities and autonomous infrastructures can allow new subjectivities and new communities to compose themselves and emerge.

But such non-hierarchical collaboration, based on self-motivation, needs new strategies to reach a scale at which its output can really match that of traditional media production. The open source movement has already taken steps in this direction, and certain ‘open’ publishing structures have achieved wide-scale success. Slashdot, as a point of publication, has achieved the same or even a higher level of visibility than traditional technology publishers. Wikipedia, at the least, has the potential to become a serious rival to popular commercial encyclopedias.

However, the need to sustain openness in a hostile environment demands further innovation in social organisation and technological tools. The danger is that openness becomes increasingly (and paradoxically) tied to closed groups, fragmenting the collaborative media landscape into self-isolating sects whose cultural codes become increasingly incommunicable. The potential, however, is to give meaning to the somewhat vapid notion doing the rounds of late that civil society should become the ‘second superpower’. This will not happen unless we have a media infrastructure that provides a structural alternative to the media dominated by the powers that, currently, are.

Creative Commons [http://www.creativecommons.org]
W3C [http://www.w3c.org]
ICANN (Internet Corporation for Assigned Names and Numbers) [http://www.icann.org] (see also JJ King, ‘They Came, They Bored, They Conquered’, Mute 20, 2001)
Nettime [http://www.nettime.org]
Slashdot [http://www.slashdot.org]
Peer-to-peer networks & clients: [http://www.zeropaid.org], [http://www.infoanarchy.org], [http://freenet.sourceforge.net]
Consume Project: [http://www.consume.net]

This essay is part of a year-long collaborative investigation into innovative media forms enabling cooperative discourse, which will also involve a series of public events to which key practitioners and theorists will be invited. For updates, see the General Intelligence Group website [http://www.g-i-g.org] or email <info AT g-i-g. org>

Felix Stalder <felix AT openflows.org> is a co-founder of Openflows working on the theory and practice of 'open source intelligence', the open, collaborative gathering and analysis of information. He spends his time in and out of Vienna and can be found at [http://felix.openflows.org]