Written for the Institute of Network Cultures
Crossposted at Institute of Network Cultures Weblog


On April 17th and 18th, 2008, the Department of Politics and International Relations at Royal Holloway, University of London (RHUL) organized ‘Politics: Web 2.0’, an international conference. The conference was large and diverse, with six distinguished keynotes, 120 papers organized into 41 panels, and over 180 participants drawn from over 30 countries. The big star of the conference was… you!

Of course we all remember being named TIME’s Person of the Year in 2006 for seizing the reins of the global media and, whilst working for nothing, founding the new digital democracy. TIME rightly observed a new trend on the Web – a shift towards bringing together the small contributions of millions of people and making them matter. We call it Web 2.0.

Web 2.0, a term coined by Tim O’Reilly in 2004, is the idea of mutually maximizing collective intelligence and added value for each participant through dynamic information sharing and creation. Web 2.0 includes all those Internet utilities and services that users can modify, whether in content (adding, changing or deleting information, or associating metadata with existing information), in presentation, or in both simultaneously. The user-generated online encyclopedia Wikipedia, the million-channel people’s network YouTube and online social network conurbations such as Facebook and MySpace are just a few examples of the web’s new direction.

Though it may not be obvious, the road marks of Web 2.0 are political: grassroots participation, forging new connections, and empowerment from the ground up. The ideal democratic process is participatory, and Web 2.0 is about democratizing digital technology. It is therefore relevant to ask whether there has been a shift in the political use of the internet and digital new media – a new Web 2.0 politics based on participatory values. Moreover, how do broader social, cultural, and economic shifts towards Web 2.0 impact, if at all, the contexts, the organizational structures, and the communication of politics and policy? Essentially, does Web 2.0 hinder or help democratic citizenship?

After an hour’s travel from London I arrived in Egham, a small town in the Runnymede borough of Surrey, in the south-east of England. The picturesque houses of Egham are home to a population of six thousand people. Just outside Egham is Royal Holloway, University of London, which caters to eight thousand students. The campus, set in 55 hectares of parkland, is dominated by its original building, known as the “Founder’s Building”, designed by William Henry Crossland and inspired by the Château de Chambord in the Loire Valley, France.

The Department of Politics and International Relations, its director Andrew Chadwick explained in the conference’s opening speech, was created to study the ‘new’ in new media technologies, such as the Internet, mobile technologies, and global TV. The main issue with new media phenomena is that they tend to be overestimated in the short term and drastically underestimated in the long term. It is therefore essential to analyze and research changes in the Web without delay. The web’s current accent seems to be on social networking and sharing. Its success hints at possibilities for a working political and social system based on mutual respect for each other’s cultures, free of prejudice.

This article is divided into two sections: first I will discuss the keynote speakers; in the second half I will discuss six case studies. The article is wrapped up with a short conclusion including comments on the overall event.

The keynote presentations include:

· Professor Rachel Gibson – Trickle-up Politics? The Impact of Web 2.0 technologies on citizen participation.

· Micah Sifry – The Revolution will be Networked: How Open Source Politics is Emerging in America.

· Professor Robin Mansell – The Light and the Dark Sides of Web 2.0

· Professor Helen Margetts – Digital-era Governance: Peer production, Co- creation and the Future of Government.

The case-studies include:

· Severine Arsene – Web 2.0 in China: the collaborative development of citizen’s rational discussion and its limits.

· Cuiming Pang – Self-censorship and the rise of cyber-organizations: an anthropological study of Chinese online community.

· Maura Conway & Lisa McInerney – Broadcast Yourself: A History & Categorization of Terrorist Video Propaganda.

· Kostas Zafiropoulos and Vasiliki Vrana – An exploration of political blogging in Greece.

· Paul Zube – VulnerableSpace: A comparison of 2008 Official Campaign Websites and MySpace.

· Rebecca Hayes – Reaching out on their own turf: Social networking sites and Campaign 2008.



Professor Rachel Gibson’s presentation ‘Trickle-up politics?’ concerned the impact of Web 2.0 technologies on political communication and citizen participation. ‘Trickle-up politics’ in fact plays on Reagan/Bush-era ‘trickle-down’ economics – a term used in political rhetoric for policies perceived to primarily benefit the wealthy, with benefits then ‘trickling down’ to the middle and lower classes. What Rachel means by trickle-up is a bottom-up tactic, referring to the deregulated, decentralized political space that is the web. Rachel’s talk was particularly interesting because she set out a concise historical trajectory to frame present-day web politics.

Politics before the web – from the early 20th century through to WWII – can be characterized as direct, localized and face-to-face. The town meeting, for instance, used to be an effective intermediary. In fact, Rachel continues, politics at this time had a ‘live’ quality; the emphasis was on ‘in the flesh’ confrontation. Between WWII and the turn of the century, politics gradually became more mediated and indirect. With the advancement of electronic mass media, the position of the mediator became increasingly independent and subjective, as well as a critical factor in election outcomes. Personality-driven candidates became vital in persuading the public to vote for a party; consequently, parties lost their supremacy. Franklin D. Roosevelt’s fireside chats in the 1930s and the first televised presidential debate in 1960 – John F. Kennedy versus Richard Nixon – are two defining moments, or as Rachel calls them, seeds of change.

In the period between 1990 and 2004 the Internet progressively became a consumer-friendly domestic commodity, and with it political communication found a new medium, one with the potential to evade sound bites and negative ads. Of course the Internet had a long history prior to the emergence of the WWW. It is debatable when exactly the WWW was invented; one common date is 1990, when Tim Berners-Lee published his ‘Proposal for a hypertext project’. The immediate consequence for political communication was an increase in speed, volume, and individual user control over consumption and production. Moreover, it provided a new way of targeting and allowed for ‘narrowcasting’. The Internet introduced a decentralized control structure and offered the user new forms of interactivity, putting an accent on multimedia formats.

The expectations were high; in ‘The Virtual Community’ (1993) Howard Rheingold wrote that “the future of the Net is connected to the future of community, democracy, education, science and intellectual life… The political significance of CMC lies in its capacity to challenge the existing political hierarchy’s monopoly on powerful commercial media, and perhaps thus revitalize citizen-based democracy.” Nicholas Negroponte wrote in ‘Being Digital’ (1995) that “as we interconnect ourselves, many of the values of a nation state will give way to those of both larger and smaller electronic communities. [there is] …A decentralized mindset growing in our society, driven by young citizenry in the digital world. The traditional centralist view of life will become a thing of the past.”

And in 1998 Esther Dyson wrote in ‘Release 2.1: a design for living in the digital age’ that for her “the great hope of the Net is that more and more people will be led to get involved with it, and that using it will change their overall experience of life… The Internet is a powerful lever for people to use to accomplish their own goals in collaboration with other people. It’s more than a source of information; it’s a way for people to organize themselves. It gives them power for themselves, rather than over others.”

But, then, what did all this buoyancy bring forth? Rachel answers by showing slides of Tony Blair’s incredibly meager home page from 1995, plus some other laughter-inducing political campaign sites familiar to British voters. Obviously it takes time to master technological innovation, Rachel notes. Then, in 2004, came Web 2.0. The technological definition of Web 2.0 is that the web functions as a platform, supplanting the desktop and PC. The browser is now the key tool for accessing a suite of new, increasingly interoperable applications that work behind the scenes to link up a wide range of online functionalities – for example, managing a home page.

At its core, this frame refers to the social and participatory elements of the web: communicating with friends, sharing and publishing pictures, receiving news. Web 2.0 is based around social networking activities, as it relies on and is built through ‘social’ or ‘participatory’ software. Typical applications are blogs, wikis, social networking and file sharing sites such as MySpace, Facebook, YouTube, and Flickr. The hallmark of these applications is the way in which they devolve creative and classificatory power to ‘ordinary’ users. In a nutshell, Web 2.0, as defined by blogger Nicholas Carr, concerns “the distribution of production into the hands of the many”.

But what does it mean for politics? It is increasingly difficult to identify media ‘effects’ at the individual and collective/societal level. We therefore need new methods and data to capture how and why people are using the technology. The Web is becoming an ‘environment’ and a context. Where it is probably having most effect is in changing the culture of participation, particularly among younger people. However, Rachel argues, we are not yet at the stage where we can definitively point to changes in citizen participation. Still, there are significant signs of a shift taking place in recent elections in the US, France, Australia and beyond.

Emergent trends include the blurring of boundaries between users and producers, causing what Rachel calls an ‘amateurization’ of politics. At the same time politics is sped up; Rachel observes a ‘quickening’ of coordinated citizen demands and responses, fostered by tools like MySociety and Central Desktop, hopefully leading to a more open form of decision making. In addition, the boundaries between public and private are blurring, which causes an ‘informalizing’ of politics. Furthermore, Rachel notes a pluralizing and disaggregating of choices, hinting at a long tail of politics. In politics the long tail has been discussed in terms of tapping small donors, but she argues that it also applies to people’s discrete interests and the opportunity to respond to more than the top four survey items in a poll.

In this sense, Rachel’s ‘trickle-up politics’ refers to diffused and decentralized individualistic micro-networks that are continuous, citizen-based in a non-institutional setting, and characterized by niche audiences. So, where do we go from here? While we ponder the nature of politics associated with the Web 2.0 era, it is interesting to think about what the next shift might be. Web 3.0? If Web 1.0 relates to a receive/read mode and Web 2.0 adds a send/write mode (user-generated content), then Web 3.0 could very well be, Rachel reasons, a more immersive mode: create/speak/act. So, does this mean we will all be having avatar-to-avatar fireside chats with upcoming politicians in Second/Third Life?

Politics 2.0 – Open source campaigning

Since the 2004 United States elections, the internet has become much more participatory and interactive with the popularization of Web 2.0. This participation, the idea goes, lends new currency to the notion that these technologies can be employed to allow citizens to ‘reprogram’ politics. One of the earliest examples is the way that the Macaca video spread virally through the internet on YouTube and contributed to the electoral defeat of Senator George Allen of Virginia during the 2006 U.S. midterm elections. The old ethics of politics allowed candidates to get away with making ad lib comments if journalists did not pick up on them, but services such as YouTube have changed that, and now politicians must be more careful not to say things that will come back to haunt them.

Various Internet prophecies involve a new wave of fashionable democracy as fundraisers meet on MySpace, YouTubers crank out attack ads, bloggers do opponent research, and cell-phone-activated flash mobs hold mini conventions in Second Life. Open source political campaigns, Open source politics, or Politics 2.0 are about the idea that social networking and e-participation technologies will revolutionize our ability to follow, support, and influence political campaigns.

In The Nation (2004) Micah Sifry wrote that open source politics means “opening up participation in planning and implementation to the community, letting competing actors evaluate the value of your plans and actions, being able to shift resources away from bad plans and bad planners and toward better ones, and expecting more of participants in return. It would mean moving away from egocentric organizations and toward network-centric organizing.” Since Micah’s article, the term has appeared in numerous blogs and print articles. Micah was invited to talk about open source politics and how it relates to this year’s US presidential election.

Micah’s perspective on politics and the revolutionizing authority assigned to the network provided some fascinating insights. According to Micah, political communication must move from being egocentric to network-centric: less about individuals and more about loosely connected networks of supporters that unite and self-organize around specific issues, allowing voters to become co-creators of the political campaign and outcome. Micah’s presentation, titled ‘The Revolution will be Networked’, concerned voter-generated content, donations, and a potential retreat from sound bites (or the even shorter sound ‘barks’).

Because of the interactive quality of modern campaign sites (comments, polls, upload options), users are now co-creators of campaigns. This network of users, Micah argues, means modern campaigns are not solely about getting donations or votes; issues can be discussed in depth. Obama’s top 10 YouTube clips are on average 13 minutes long (with approximately 900 videos posted). These videos get millions of views; the Race Video alone has had over four million, demonstrating that many people are interested in in-depth content that could not be obtained without the Internet.

The Internet opens up meaningful spaces and changes traditional processes. Funding, for instance, is done in new ways: Ron Paul opened up his funds by putting all his campaign donations online, in a fully searchable database. Supporters started expanding the site with useful tools – for instance, graphs displaying funding from specific places, organizations or persons – and set up the website ronpaulgraphs.com; the result can be considered a form of open-source donor data in real time. With micro-economics emerging on the web, big money doesn’t go away – but now there is a counterforce. The mobilizing force of the internet allows for a long tail of donations, potentially assigning power to the people. Those who are only able to donate a small amount, and thus generally have little or no authority, can mobilize via network technologies and have a say in what direction a candidate’s party should take, as an alternative to the established domination of the corporation.

Voter-generated content, Micah emphasizes, is not solely about raising funds; the contributions extend to full-scale voluntary operations. Great examples are the “Vote Different” video from Obama supporters and the new “VoterVoter” site, where citizens can develop their own ad and pay to have it placed on TV. Micah believes there is a shift in centrality; the focus is on the user. This shift is evident in the importance of MySociety.org and its toolset for citizens to monitor and exert pressure on government. Obama seems to understand network power better than any candidate before him or still in the race; his campaign site is all about providing a channel or portal to other users and sites, not necessarily trying to control them. The user is at the heart.

To get to a position of open source politics we need to give supporters authority. To what extent is this achievable, and is it smart? Ron Paul supporters were given full authority to shape his campaign, but then they raised money to spend on a branded blimp – as it turned out, not the most efficient course. A more interesting question is what happens to the network and peer production after the candidate takes office. Where will the balance of power lie? Once you have given supporters/voters a sense of power, they probably won’t let it go easily. And will the speeding up of politics – this quickening of coordinated citizen demands and responses, fostered by tools like MySociety or Central Desktop – lead to more open decision making? What about collaborative government?


According to Professor Robin Mansell (New Media, London School of Economics) we are on our way to collective intelligence. The Web 2.0 ideology demonstrates a new narrative and an end of hierarchies. The new narrative, put forth from end-to-end networks, places an astonishing emphasis on cooperation ascendant over competition. Information wants to be free.

When thinking about technology from a bureaucratic or a scientific perspective, it is important to ask whether convergent and divergent interests in capitalism and democratization are characterized by superficial or fundamental change. Robin notes that historically, shifts in power have been partial and often local in their consequences; we should expect the same in the Web 2.0 age. In order to study ongoing transitions and their effects, Robin sets out the ‘light’ and ‘dark’ sides of Web 2.0. Her presentation was not so much an attempt to close things down and determine all facets of the Web 2.0 phenomenon; instead it aimed to stimulate speculation and further empirical research, and was a call for governmental involvement.

The success and achievability of Web 2.0 can be explained by steady increases in information connections and social connections over the past decades. Historically the web can be grouped into the PC Era (1980-’90), Web 1.0 (1990-2000), and Web 2.0 (2000-’10). The PC Era commenced as PCs, and in particular the desktop, became household commodities; however, the stand-alone character of the PC meant it lacked information and social connections. Web 1.0 is identifiable with the World Wide Web, and although it brought an increase in social and information connections, the web at this stage was still focused on databases, static websites and one-way communication. Web 2.0, on the other hand, has a strong focus on user-generated content and social media sharing. Assuming this trend continues, the upcoming Web 3.0 (2010-’20) is thought to focus on semantic databases and distributed search, while Web 4.0 (2020-’30) would be characterized by intelligent personal agents.

Today, being profitable on the Internet means relying on user-generated content. Large multinationals have come to understand the power of the mass, winning them over with innovative interactive tools and integrating their creative and immaterial input. Successful businesses respect the small contributions of the multitude and adjust their communication and production structures accordingly; slowly, businesses are implementing a horizontal, bottom-up organization. Web 2.0 embodies this change and in this respect stands for emancipation and an end to repression: everyone’s contributions matter, everyone is listened to and – this is different from a traditional disciplinary organization – you are stimulated to actively participate/volunteer in fine-tuning the social/corporate order.

At first glance Web 2.0 primarily seems to be about upbeat, optimistic and emancipatory qualities. However, there are many negative aspects to be considered in the same light. It is, for instance, addiction that sustains collective intelligence: mass collaboration is achieved by encouraging people to get addicted to new media practices. Children, adults and the elderly need to be active in order to belong to the ‘new society’. Those who do not contribute and participate are automatically excluded. All our daily practices slowly become reliant on new media technologies. To belong is to be addicted.

Currently, active audiences are participating in television shows via SMS, venting their everyday frustrations on blogs and tweaking their profiles on social networks. What this brings, however, are new forms of competition. Companies are competing over who has the most people working voluntarily for them. This obviously raises legal questions concerning labor and compensation. In addition, Robin continues, mass collaboration mostly occurs within a circle of friends. This means the focus is inward-looking and therefore not as open as many optimists proclaim. Furthermore, Robin notes, adverts increasingly get mixed with editorials. Trust is devalued by an overload of information. The gatekeepers of information – the editors, moderators and monitors – are ‘you’; hence it is increasingly difficult to depend on any one source. Mass collaboration might be a road to collective intelligence; it is predictably also a road to mass confusion.

In the end the scarce resources are data/information management capabilities and time for servicing ourselves. What we need, Robin asserts, is more speculation and empirical research; a turn to governance of communicative spaces in ways that encourage active passivity; a turn to achieving control over data/information management – the driver of the economy, of Web X.0 and of political outcomes. The bottom line is to understand that network effects are not neutral for the economy or for democratization.


In the public administration debate about new public management (NPM), Professor Helen Margetts (Society and the Internet, Oxford Internet Institute, UK) claims the traditional themes of disaggregation, competition, and incentivization are worn out. Although its effects are still working through in countries new to NPM, this wave has now largely stalled or been reversed. Helen sets out the case that a range of connected, information technology-centered changes will be critical to the current and coming wave of change. The overall movement incorporating these new shifts is toward “digital-era governance” (DEG), which involves reintegrating functions into the governmental sphere, adopting holistic and needs-oriented structures, and progressing the digitalization of administrative processes.

DEG has three key elements – reintegration (reversing fragmentation, joining up, re-governmentalization, new central processes, squeezing process costs, simplification, bringing issues back into government control, like US airport security after 9/11); needs-based holism (client-focused structures, end-to-end redesign, one-stop processes, co-production, agile government, reorganizing government around distinct client groups); and digitalization (electronic delivery, centralized procurement, new automation, disintermediation, open-book governance, web2.0 for government, fully exploiting the potential of digital storage and Internet communications to transform governance). DEG offers a perhaps unique opportunity to create self-sustaining change, in a broad range of closely connected technological, organizational, cultural, and social effects.

The backlash, however, is a move to a digital super-state, in which information and organization are chaotic and lagged. Research concerning UK government representation and recognition on the internet shows that users rate government websites reasonably well, but quality has improved little since 2002, design is text-heavy, and public sector sites lack innovation (particularly Web 2.0) and the popular features of good private sector sites. Furthermore, central government websites cost an estimated 208 million pounds annually – but some departments/agencies still have weak information on the costs and usage of online provision, and many lack channel strategies. The UK government has embarked on a high-risk ‘supersite’ strategy, Helen continues, to centralize e-government provision in two sites – Directgov and Businesslink – which have low brand recognition and problems competing with other information sources.

Helen states that a management culture for digital-era governance should include the use of pervasive information; it needs to de-couple information analysis from control (in contrast to a targets-based culture), adopt customer orientation and segmentation with attention to channel strategy, and use proactive and experimental tools. A citizen culture for digital-era governance could entail an ‘isocratic’ government which helps citizens do it themselves and stimulates co-production and peer production. Essentially, Web 2.0 should run for government.

The only problem with a potential Web 2.0 candidacy is that the cultural vibe in government is that only ‘old-fashioned’ Web is easy to use, and the “government doesn’t do cool”, in fact, “it’s only working if it’s boring” (i.e. all on-line communication is text-based). Governments avoid part-authenticated information and para-state involvement – “we stand alone; we don’t integrate into society’s networks”. The general idea is that people will come to the government site and can be directed to government sources of information.

The risk of Web 1.0 in government is that it ignores young people at its peril – internet change is led by them. Planning for text-only communication, Helen argues, leads to disastrous under-investment. Moreover, people go where they want to go; with increasing competition, a focus on Web 1.0 will bring a net loss of visibility for government – a loss of ‘nodality’ (information dissemination) as a policy tool.

Web 2.0 could provide the government with rich information and content (not just text): video, pictures, audio, podcasts, high-intensity graphics (e.g. video games). Conventional information asymmetries can be reversed with highly specific ‘deep’ search. Also, Web 2.0 allows users to play back information about what they do and how they feel. It can offer part-finished products (e.g. part-authenticated information) to be taken up by, for example, experts outside the government, allowing for co-production, leading to co-creation, and ultimately bringing users into the front office. Web 2.0, Helen adds, offers strong customer segmentation – opening space for social networking (peer production) – possibly involving a wide range of organizations, including third-sector bodies and private firms.

A 2.0 approach in the health sector, for instance, would make performance data freely available, not only leading to peer production amongst health experts, but also offering a direct voice for the patient. This may socialize managers to be customer-oriented. In effect, patient input replaces controls.


In between the theoretical lectures of the keynote speakers, the conference covered 120 case studies organized into 41 parallel sessions. Naturally it was not viable to attend all 41 panels in the time available. Still, I was able to attend an especially exciting selection, such as Severine Arsene’s and Cuiming Pang’s talks on the collaborative development of citizens’ discussions and self-censorship in China – outlined in the next section.

I will also discuss Maura Conway and Lisa McInerney’s research concerning terrorist video broadcasts, and after that Kostas Zafiropoulos and Vasiliki Vrana’s study of political blogging in Greece. These are followed by two sections about social networking sites and their usage in the 2008 U.S. presidential campaigns: Paul Zube’s study of what he calls ‘vulnerable spaces’ and Rebecca Hayes’ research results regarding social networking sites. In the last part I will wrap up the article and give commentary on the overall conference.


According to Severine Arsene (Sciences Po / Orange Labs, Paris), 210 million Chinese internet users share and tag videos and make use of Web 2.0 applications. Moreover, with the rise of an urban and connected “middle class”, more and more discussions are taking place online. The content mostly concerns cars, flats, salaries and dogs – in other words, lifestyle and values. More interesting are Severine’s observations from fieldwork and interviews with internet users in Beijing.

Apparently there is a wide range of popular debates on morality issues, corruption and other social scandals, making one wonder how China’s strict censorship rules will adapt. Severine states that between harsh nationalism and moral indignation, self-regulation and responsibility, moderators as well as users are collectively elaborating formal and informal rules of politeness, and setting new criteria of objectivity. Censorship and control may be self-regulating for the time being; the question is to what extent this is an effect of the top-down decision-making norm that is China.

Closely related to Severine’s talk was Cuiming Pang’s (University of Oslo, Norway) presentation concerning self-censorship and the rise of cyber-organizations. Cuiming’s results were based on an anthropological study of a Chinese online community: Houxi Street. According to Cuiming, the broad use of Web 2.0 applications in Chinese cyberspace has provided a platform for individual exhibition and open communication, created a new type of social participation, and facilitated the proliferation of cyber collectives in recent years. It is evident that collective action is more influential in spreading public opinion and organizing public activities than separated, unorganized individual action. However, Cuiming adds, when faced with the threat of a more powerful authority, a grassroots collective can become more fragile than the individual, and is liable to compromise in order to avoid complete annihilation.

Cuiming’s observation of the Chinese online community and in-depth interviews with informants both on- and offline tell a story about internet users’ and internet service providers’ perception of and reactions to the Chinese government’s censorship, especially regarding how they learn, perceive, and practice self-censorship. Cuiming argues that many Chinese cyber collectives organized in the format of online communities tend to withdraw collectively rather than fight for free speech when they encounter government censorship. Even though there is a wide range of criticism of the government’s political suppression, community managers still learn and practice self-censorship rather than risk challenging the government’s authority, for fear of penalties.

In addition, because technical censorship is complicated and expensive, the focus is on soft censorship. Cuiming calls this social moderation: community managers tend to establish a friendly relationship with ordinary users, and adopt strategies of negotiation and dialogue rather than restrictions and sanctions, to remind users to be cautious of their own behavior. What this brings is users spontaneously helping managers and collectively maintaining and protecting the community, ultimately making it easier for the government to practice internet censorship (and more difficult to become more democratic). Well, let’s put it this way: Cuiming had to go to Oslo to study Chinese censorship…

A History & Categorization of Terrorist Video Propaganda

An interesting approach to the history and categorization of terrorist video propaganda was set out by Maura Conway and Lisa McInerney (Dublin City University, Ireland). Maura and Lisa have observed a trend of violent jihadis and their supporters worldwide exploiting internet technology to pursue an extensive and cutting-edge media campaign. Jihadi media outlets are influencing perceptions of the wars in Iraq, Afghanistan, and elsewhere among large chunks of the Arab population and, increasingly, further afield. Video products arising out of the Iraq conflict in particular, Maura and Lisa add, are a key asset for jihadist media worldwide, which employ materials produced in/about Iraq to underline their broader message.

Their presentation traced the ‘history’ of video technology and its use by terrorist organizations: from Hezbollah’s use of ‘camera crews’ to record their attacks on IDF troops in South Lebanon in the 1980s, to the ‘martyrdom videos’ produced by Hamas and other organizations in the 1990s, and from the establishment of al-Qaeda’s al-Saha productions to the ‘do-it-yourself’ contributions widely available on YouTube today. Particular attention was given to the types of jihadist video currently being produced and to an attempt to broadly categorize these.

Maura started by saying there is a relation between the emergence of new technologies and terrorism. The printing press, for instance, set off new forms of terrorism, mobilization and propaganda. Satellite television, from 1968 onward, enlarged this process. Imagery, a central aspect of television, is far more persuasive. Hezbollah immediately began to use its power, but – and this is an important fact – the power of the press is limited by who owns it, and because Hezbollah could not own its own television station (before Al-Manar, 1991), its reach was limited by who showed their actions. Consequently, Hezbollah began broadcasting themselves in the 1980s, using ‘camera crews’ to record their attacks on IDF troops in South Lebanon. It was the first form of self-broadcasting.

In the late 60s and 70s, hijackings became an effective means of drawing the attention of television stations – e.g. Black September (PLO). The hijacking genre, Maura states, was the central means of propagating awareness within the television medium. Hijacking videos today, like their medium, stand for the traditional, the old and the past. Hezbollah’s self-broadcasting activities in fact paved the way for the broad application of these techniques on the internet today. There is a wide variety of propaganda videos now residing on channels such as YouTube and LiveLeak. Juba, the Baghdad Sniper, is a famous example.

Juba is an Iraqi sniper who has his actions filmed. The videos show unaware American soldiers being shot from a large distance. The videos that contain soldiers falling to the ground are the most popular; some of them have been viewed more than 300,000 times. What makes contemporary propaganda videos different from those broadcast via satellite/television is their co-creative value. Many of the Juba videos have been edited by other users in order to heighten the effect, for example by putting a red circle around the victim prior to the shot, or adding a slow-motion filter and repeating the moment the bullet hits the soldier. Another common user-generated add-on is subtitles (in English), or a written overview of an up-to-date body count. The Juba videos are modern propaganda videos aimed at convincing viewers around the world that Iraq’s people will not give up and in fact are winning the war.

Juba is just one example of an effective Web 2.0 propaganda video. Maura and Lisa have identified seven different propaganda video types on the Internet: political statements, beheadings, attack footage, living wills, instructional videos, memorials, and the music video. The beheading videos have popped up since 2004 and are considered new; in the past, videos containing such gruesome acts as stabbing and the detaching of body parts would not have been broadcast via satellite. The global and ostensibly anonymous character of the Internet makes it a medium for rapidly reproducing virtually any type of content. Beheading videos are primarily intended to provoke shock and demonstrate devotion to both local and Western viewers. Similarly, the living wills have a global character; they are meant for an international audience and speak to non-Muslims.

On the other hand, instructional videos are mostly Muslim-oriented. The genre can be divided into theological and operational instructions, such as for bomb making and transport systems. The latter category is not always accurate; these videos often miss vital information. There are videos circulating on the internet with directions on how to make an IED, yet they will regularly be ineffective when used in combat. Possibly these incorrect videos are placed on the Internet by Americans/Europeans to cause confusion (produced or re-edited in the West), or are spread by people who lack fundamental understanding but pretend, or believe, that they have it.

The memorial videos, too, are mainly distributed amongst Muslims. The content acts as a virtual tombstone and is meant to honor the victim. Lastly, there is the jihad music video. The style is rap. Some popular videos get more than 125,000 hits. The music video, Maura and Lisa assert, is aimed at the youth of many countries. Not only are Internet users commonly of the younger generations, rap music in general has an international and youthful appeal; it acts as a universal fashion.

Maura and Lisa observed that production is becoming more professional and is vastly multiplying. This has to do with advancements in technology and the global participatory quality of the Internet. There are now even dedicated media production units: Al Saha/As Sahas and the Islamic State of Iraq (ISI). In addition there are the do-it-yourself amateurs on YouTube, who collaboratively create videos and branding, mimic each other, and spark rivalry (leading to snipers similar to Juba taking to the streets to put more successful kills to their name).

Lisa and Maura conclude that there is a diffusion of power downward. Videos are integral to Web 2.0, easier to access, highlight the targeting of younger generations, and exploit the persuasive power of the image. With Web 2.0 you no longer need your own website; multiple platforms are at your disposal. Finally, Lisa and Maura note, there has been a big shift over the past 40 years: print had little persuasive value and could only reach the literate; satellite television (1968) had far more power but lacked distribution (airing of videos depended on who owned the station) and grassroots control; and over a period of 40 years this evolved into co-created, easily accessible videos in seven established genres.

Politics of Blogging in Greece

Kostas Zafiropoulos and Vasiliki Vrana (both University of Macedonia) presented an exploration of political blogging in Greece. Their research was based on a sample of 1367 Greek bloggers.

Blogs have the advantage of speedy publication and of socially constructing interpretive frames for understanding current events. They appear to play an increasingly important role as a forum for public debate, with knock-on consequences for the media and for politics. In Greece, where the ratio of internet users is relatively small, there is nevertheless an expanding group of bloggers who comment regularly and who, to a certain degree and in certain circumstances, have the power to trigger political movements. Building on the relevant literature, Kostas and Vasiliki use Technorati.com to track Greek political blogs and provide indicators of their popularity and interconnections. Additionally, the aim of the case study was to test whether the hypotheses of Drezner and Farrell (2004) – skewness of the incoming-link distribution and the formation of core blogs – apply to Greek political blogging.

Drezner and Farrell argue that blogs with a large number of incoming links offer both a means of filtering interesting blog posts from less interesting ones, and a focal point at which bloggers with interesting posts and potential readers of those posts can coordinate. When less prominent bloggers have an interesting piece of information or point of view that is relevant to a political controversy, they will usually post this on their own blogs. However, they will also often have an incentive to contact one of the large ‘focal point’ blogs to publicize their posts. The latter may post on the issue with a hyperlink back to the original blog, if the story or point of view is interesting enough, so that the originator of the piece of information receives more readers. In this manner, bloggers with fewer links function as ‘fire alarms’ for focal point blogs, providing new information and links.

Currently 40% of the Greek population uses the Internet (with percentages higher among young people and men). According to Karampasis (2007, http://ereuna.wordpress.com), blogging started to expand in Greece during 2002-2003. There are currently 9,610 blogs written in Greek, but only 4,639 of them are active. The content covers multiple subjects – with an emphasis on personal interests, art and culture, and entertainment (news and political subjects are rarer). The majority receive fewer than 100 visits daily and, perhaps as a consequence, do not carry any advertisements. The typical Greek blogger is male (64%), with a college education, around the age of 30, and lives in Athens (53.1%) or Thessaloniki (12.4%), or resides abroad (11%). Bloggers mostly tend to use the medium for keeping a diary, experimenting, taking action while remaining anonymous, or creating a community. 38% of bloggers consider blogging to be a form of journalism, while 51% do not.

The case study examines the posts of blogs that were about George Papandreou (the former and current President) and Evaggelos Venizelos (contender) during the period prior to the general elections – from September 16 to November 13. The blogs that were examined contained posts linking to the two candidates’ sites/blogs. Blog connectivity, closeness and variations over time were the main characteristics of this investigation. In addition, the research discusses the skewness of the blog incoming-link distribution and how this affects the formation of central or core blog groups, which serve as focal point blogs. Central to the methodology was the recording of blogs (from friends and followers, party members, dedicated blogs, non-political commenting), links from blogrolls, and the affiliation of blogs.

The results, Kostas and Vasiliki argue, demonstrate that political blogging in Greece, although limited, conforms to the characteristics described in the literature. Blogs may frame political debates and create focal points for the new media as a whole. In this way, blogs sometimes have real political consequences, despite the relatively low number of blog readers in the overall population. The skewness of the incoming-link distribution and the formation of core blogs have a clear effect on the provision of information and discussion. The empirical evidence from Drezner and Farrell is also reproduced in the present analysis. Greek political blogs act within a social network of blogs, which forms authority core groups where the discussion takes place. Political affiliation is partly reflected in the formation of blog core groups. Because of this, it is easier for citizens who need information to coordinate and find where the interesting debate is taking place.

Message and Image Control on MySpace

Each election provides researchers studying politics with rewarding material, especially in the last decade, as political candidates have made use of new web technologies to reach out to voters in each successive campaign. With the 2008 U.S. presidential election looming, Paul Zube states, it appears that social networking sites (SNSs) will be the newest web tool utilized by candidates.

Paul’s research examines the ways in which campaigns are making use of one particular SNS, MySpace. MySpace is a popular SNS in the U.S. with a relatively young population of users. This represents an interesting strategic move by U.S. candidates, as they have traditionally put little effort into courting young voters, not least because young voters are infrequent visitors to the polls. To study how candidates are using MySpace, two approaches were used. First, the 14 candidates that had active MySpace accounts in the spring of 2007 were “friended” by the researcher to allow full access to the candidates’ spaces. The MySpace and official website spaces of these 14 candidates were then frequently observed during a one-month period. Particular attention was paid to differences in content and usable site features. In addition to this comparison, the comments posted on candidates’ MySpace pages were analyzed. This, Paul adds, provides a glimpse into the potential interactivity promise of SNSs.

The results of these methods show that there are significant differences between the official website presence and the MySpace presence of candidates. The use of MySpace seems to represent a relinquishing of control by campaigns. Although this may be encouraging for those interested in the ideals of democratic governance, it is a counterintuitive strategy for the candidates. Candidates have historically sought the maximum electoral benefit from the minimum image/message risk, whereas SNSs seem to represent a great risk with potentially very little electoral benefit.

Paul starts by explaining how the candidate’s website traditionally acts as a surfacing stage, allowing the candidate to become visible, create name recognition, establish a personalized image, spread the core message, and ultimately call for funds and votes. At the same time, the level of control allows the website to avoid early miscues and build momentum.

Paul believes there are plausible reasons to assume that SNSs might be different from previous web campaign tools. Namely, campaigns are not directly in control of the structure; the SNS is managed by a third party. Also, the Web 2.0 character makes it difficult to control content supplied by users, meaning that interaction with candidates is not filtered either. So, is message control compromised on MySpace? Paul asks.

There are several differences between websites and MySpace to be considered. Websites are business as usual, Paul says: they are about informing, mobilizing and engaging, and they are polished and professional. MySpace, on the other hand, shows near uniformity in layout, contains sporadic content and is less informative. MySpace is similar but more image-focused, and the information is personal.

Commenting is an essential part of MySpace and SNSs alike. Paul has identified five types of comments: gratitude alone (thanking for accepting ‘friendship’), support (“I am glad you are running”), intention to act (“I will vote for you”), challenge (explain such and so – which is never answered by the candidate or other users), and spam. The latter has actually created some embarrassing situations for candidates; for instance, spam adverts for illegal drugs are out of place on the site of a candidate who is running on a strong anti-drug policy. This and the unfiltered user-generated content place candidates at significant risk, making Paul wonder why candidates draw on MySpace at all.

Candidates jump from one medium to another constantly, yet the challenge of spreading the candidate’s message and image seems minimally rewarded. Not all “friends” on SNSs can vote, and MySpace especially has demographics that skew very young. History says, Paul adds, they will neither vote nor contribute. Candidates seem to use the SNS medium, Paul concludes, to “stay trendy”; it is what is expected by their constituents. Accordingly, there seems to be no motivation for candidates to use the medium for grassroots decision-making or augmenting democracy; they are simply in it for the votes.

Social Networking Sites and U.S. Campaign 2008

Following Paul’s presentation, Rebecca Hayes (Michigan State University) talked about social networking sites and their use to reach out to younger audiences. Internet social networking sites are becoming an active forum for participation in politics in the United States, with nearly every candidate in the 2008 presidential primary having a profile on the major SNSs Facebook and MySpace. One of the main demographics of these sites, individuals aged 18-24, is known to be largely apathetic towards the political process and has previously demonstrated a low level of engagement in politics. While candidates are obviously expending significant resources to reach out to these young voters online, through both SNSs and websites, little is known about the attitudes of this group towards these attempts and how they may impact intention to vote.

Voters are most likely to have established their political attitudes and habits, Rebecca continues, by the end of their college careers. For an attitude towards voting or a candidate to form and internalize, the source of the information the attitude is based on must be credible. Additionally, to promote civic participation, an individual must possess political information efficacy: the belief that one has the knowledge to participate. In order to determine the attitudes of young voters (18-24) toward presidential candidates’ presence on social networking sites, and to take the first steps toward determining whether exposure to candidate SNS profiles can increase participation of young voters, she (together with Paul Zube and Thomas Isaacson) studied the Facebook and MySpace profiles and websites of six candidates.

Before explaining how the study was conducted, Rebecca briefly notes that the U.S. public is historically inactive at the polls: in fact, only 21-51% of eligible voters actually vote. This mainly stems from constituents being uninformed; for instance, age-relevant information is lacking in campaigns. Other reasons, Rebecca adds, are apathy and voters being too online-centric. This might shift as campaigns focus more on social networking sites, consequently reaching out to young voters, with web users becoming more likely to vote and be informed. The web is becoming more interactive with each election; in 1996 websites were brochure-like, now they are highly interactive and socially networked (John McCain’s site, for instance, is surprisingly interactive). So, will this translate into greater participation by young voters?

The research followed two theoretical models: the Elaboration Likelihood Model – which describes how attitudes are formed and changed along an elaboration continuum (low-high) – and Political Information Efficacy (PIE) – the belief that one possesses the knowledge to effectively engage in politics. Those with low political information efficacy are much less likely to vote; younger voters have much lower PIE than older voters; and exposure to, and interaction with, interactive web campaign material can increase PIE.

The hypotheses therefore include: that politically uninvolved young people will find candidate social networking profiles more credible sources of information than will politically involved young people; that heavy users of social networking sites will consider them a more credible source of candidate information; that exposure to candidate social networking profiles will increase intention to vote among politically uninvolved young people; and that exposure to candidate social networking profiles will increase political information efficacy among young people.

The researchers designed an online post-test experiment with a control group, measuring SNS use, intention to vote, and exposure to Facebook, MySpace, websites, or the control. Additionally, the experimental groups were asked about their impressions of the treatment in closed-ended questions, using validated scales of interest/involvement, credibility and PIE. Furthermore, the research included a content analysis by means of open-ended questions to elicit initial impressions of the exposures. The sample consisted of 411 undergraduate students across four majors (all from the same institution). The results were meant to determine the attitudes of young people (18-24) toward candidate social networking profiles.

The actual results showed websites and Facebook to be more credible than MySpace. Between websites and Facebook there was no significant difference, though both were only moderately credible; colleges and universities usually gravitate to one SNS – there are Facebook-oriented colleges and MySpace-oriented ones (depending on where most classmates are). The open-ended responses were overwhelmingly negative: 50% didn’t like candidates on SNSs, and 30% explicitly noted they wouldn’t base their vote on candidate presence. None of the formulated hypotheses was fully supported – although there was a trend in the hypothesized direction. The results indicate that SNSs may be credible sources of information, but that the information available may not be fully utilized.


I have written – with great enthusiasm, I must add – about Web 2.0’s history, the positive and negative sides of collective intelligence, open source politics, social networking sites, Juba the Baghdad Sniper, digital era governance in the UK, Campaign 2008 in the US, trickle-up politics, blogging in Greece and self-censorship in China, and still there is so much more to add. There are so many presentations I have left out, such as Stephen Schifferes’ presentation on citizen journalism, in which he remarked that young people get their political news from programs such as The Daily Show and that the visual material watched on the BBC website is rarely about politics (hence, if content is really up to the users, then soon we will only be able to watch news about celebrities and nothing about the Middle East). Excellent points were also made by Mike Thelwall about reevaluating notions of blogging and the creation of Habermas’s free-discussion public sphere, as all user-generated content was banned during the South Korean elections.

The conference truly presented a great deal of theoretical insight and exciting new cases, but unfortunately was too large to arrive at clear-cut, in-depth conclusions. The attention seemed to be on the international character of the conference; therefore many of the parallel sessions were about cases in ‘restricted’ places such as Denmark, Istanbul, or Macedonia. Of course it is great to have a platform for a long tail of political case studies, yet it makes it difficult to draw firm conclusions. Take for instance the topic of political blogs (I reviewed one presentation on this topic; there were several more, all concerning local politics): none of the talks really outlined what a political blog is. What makes a blog with political content different from editorials?

What I was hoping for were panels of experts debating a single topic (i.e. on blogging, surveillance, journalism, etc.), instead of having them one after another presenting their research results. I was hoping for lively discussions and active audiences. In fact Michael Turk says it best: “The speaker began by requesting that his presentation not be quoted without his prior approval. This reflects a larger trend that Micah [Sifry] and I have discussed here. This is a conference about web 2.0, that attempts to explore web 2.0 use by political actors, but completely fails to recognize the encroachment of the Internet and Web 2.0 on its own world. Almost none of the participants here are blogging. Before the first session Micah asked if anyone present knew of a tag being used for blogging the conference. To a person, everyone in the room stared at him as if a third arm had suddenly sprung from his forehead. For a web 2.0 conference, the participants are remarkably web 1.0 (perhaps even web 0.5).”


Everything you want to know about Geert Wilders Fitna… except the ending!

When Dutch crime reporter Peter R. de Vries announced that he had solved the Holloway case and put together his findings, facts, and answers in a two-hour film, he did so three days before the actual program aired. For 72 hours the Dutch public was held captive in front of their newspapers and screens. The news was dominated by talk shows and articles speculating about the film’s content prior to its broadcast. In the end the massive media hype resulted in seven million Dutch people staying home on a Sunday evening to watch a Cheech & Chong movie with all the jokes cut out, interrupted by commercials. Now the question is: what happens if, instead of three days, you announce a film three months before airing it?

As I am writing this article it has been almost four months since Geert Wilders announced that he is preparing a film which elaborates on verses from the Quran, showing they are still being used today, accompanied by documentary footage from the world of Islam, in a 15-minute “call to shake off the creeping tyranny of Islamization”. In the meantime the Dutch government has expressed great concern about the upcoming film’s release and has made emergency evacuation plans available to all its consulates and embassies worldwide. Also, Dutch Minister-President Balkenende initiated the hardening of security measures around military installations abroad. It is feared that the film will lead to violent extremist Muslim protest, like the earlier protests against the Jyllands-Posten Mohammed cartoons of 2005. Some critics argue that this governmental involvement adds to the publicity of the film and is possibly the cause of its negative association. Wilders accuses Balkenende of professional cowardice for capitulating to Islam.

Nonetheless, on March 6th 2008, the Dutch government raised its national terrorist threat level from ‘limited terrorist threat’ to ‘substantial terrorist threat’ because it fears Muslim terrorists will launch attacks against European targets, with the film as one of the causes. Wilders himself also received a substantial threat: a fatwa by Al-Qaeda, calling on all Muslims around the world to assassinate Wilders in the name of Islam. In addition, various countries have threatened to review their diplomatic stance towards The Netherlands should the film be aired. This led the Dutch Ministry of Justice to investigate whether publication could be prevented, but it could not: Dutch law avoids censorship unless the content is discriminatory. At this stage Fitna’s content is unknown.

Yet Pakistani regulators banned YouTube for several days due to a “blasphemous” video clip believed to be a trailer for Fitna. Google eventually complied with the Pakistani protest and the material was removed. In their attempt to censor, Pakistan accidentally caused the YouTube site to be unavailable worldwide for hours. Moreover, on March 20th 2008, the American internet hosting provider Network Solutions took down Fitna’s website, replacing a placeholder image containing a picture of the Koran and the text “Geert Wilders presents Fitna” with a message asserting that complaints had prompted an investigation into whether its contents violated Network Solutions’ acceptable use policy. Notions of the Internet as a ‘free-for-all’, ‘revolutionary’ and ‘anti-government’ distributed global network should be reevaluated. Both YouTube and Network Solutions exemplify the hierarchical authority of control that exists within the network’s decentralized design, and the political pressure and power that allow manipulation.

Authority and control are even more evident in old media. Wilders negotiated a possible broadcast of the film on Dutch television. At this stage, however, it appears that no Dutch broadcaster wants to show the film in its entirety, without interruptions and editing. Wilders has said that he would “rather have the film entirely on the Internet than half on television”. Fitna is a telling example of the conformist practice of Dutch television. The only tolerant Dutch broadcaster turned out to be the Dutch Muslim broadcasting network: the Nederlandse Moslim Omroep (NMO) offered to air the film, but insisted on an assessment prior to its broadcast, which Wilders turned down. I was enthused when I read that the NMO proposed to show Fitna in its entirety. This could have solved all problems. Not only would all bases – from a political perspective – have been covered; it would have been a beautiful gesture from both sides, hinting at compassion and forgiveness.

Perhaps one could say the conservative structure of television represents contemporary bureaucracy. On the other hand, Fitna demonstrates the emancipating and mobilizing quality of media. Numerous petitions are distributed via Internet channels, various artists have created ‘counter-films’, and the widespread critique of the (unseen) film has spawned protest actions, including a protest of 1,000 people on Dam Square in Amsterdam. People gather together, allowing the streets and the media to become a platform for their neglected voice. While governments repress the masses by elaborating on increased threat levels, religious conflict and censorship, there actually are people who consider Fitna an inappropriate political expression for a politician in a country with a multicultural population.

When searching for Geert Wilders’ Fitna on YouTube, you will have a difficult time missing the hundreds of unique clips with the word “Sorry”. Inspired by an apology project in America (concerning their President), Amsterdam-based Mediamatic mobilized professional and amateur filmmakers on YouTube in an attempt to show the world that Holland is not solely inhabited by bitter, angry Wilders clones, but flooded with artistic, lovable people. And they are sorry. Sorry for the commotion and confusion, and it will never happen again….

But should we really be sorry? I mean, isn’t Fitna a brilliant new media case study? The announcement of a film for television and the Internet has resulted in a multimedia hype, a demonstration of online and offline mobilization, and has spiced up contemporary debates concerning distribution laws, internet freedom, security, global politics and ‘impactology’. No doubt in the near future Fitna will be a feast for many hungry scholars (in media, law, politics, sociology, cultural anthropology and religious studies), allowing them to obtain their Master’s and PhD degrees…… thanks to Wilders.

(Thank you?)

The Mobile City Conference

On February 28th 2008, the Netherlands Architecture Institute (NAi) organized The Mobile City conference, in collaboration with the research programs ‘New Media, Public Sphere and Urban Culture’ (University of Groningen) and ‘Playful Identities’ (Erasmus University Rotterdam). The conference concerned the interplay of physical and digital spaces, and the influence of locative and mobile media on urban culture and identities.


As I entered the spacious hall of the NAi, the first thing that caught my eye was a table filled with Lego. The colorful interlocking plastic bricks and accompanying array of gear, figurines and other parts stand for imaginatively exploring scenarios and possibilities in a serious form of play. Contemporary cities are the realization of a vision that was once upon a time played with, perhaps even on a table filled with Lego. Similarly, Locative Media could be seen as a modern form of serious play, fostering creative thinking, as users build metaphors of their identities and experiences using new media technologies within a presented scenario. On the one hand, Locative Media offers new tools for designers to envision future planning; on the other hand designers will have to think differently about cities as the technology implicates mobility, practices of everyday life, politics and aesthetics.

Architecture, as Ole Bouman (director of the NAi) emphasized in the opening speech of the conference, is a pillar of society in which a shared heritage is stored. The building of the NAi, for instance, is shaped around numerous proposals, political decisions, and cultural trends. In this sense one could say buildings are representations of the zeitgeist, and the city is an archive that preserves an intellectual, civilizing, cultural and political tradition. Cities present scenarios that tell a story about their makers and users. Moreover, they show us where we are, and possibly where we are heading as a society. Architects help us locate ourselves and provide shelter, a form of enclosure and a home. It is the duty of architects to secure and carefully maintain this task. Currently digital technologies continue and aid this mission; network technologies allow archives to be accessible all over the world, and security technologies protect citizens from potential threats. However, as Marshall McLuhan aptly states in Understanding Media (1964), “We become what we behold. We shape our tools and then our tools shape us.” The current trend of placing CCTV cameras and tracking-and-tracing technologies throughout urban space may soon reach a point of no return, after which technology imprisons publics and affects who and what we are. Instead of brick facades, our future cities may be thought of as glass cages where everyone is watching one another.

For some, paranoid theories are merely entertaining narratives. Nonetheless, there is no denying that digital technologies implicate notions of urban space and place. So what can really be said about the merging of physical and digital ‘reality’? The term ‘Locative Media’, initially coined in 2003 by Karlis Kalnins, seems appropriate for digital media applied to real places: communication media bound to a location, and thus triggering real social interactions. Locative Media works on locations, and yet many of its applications are still location-independent in a technical sense. Just as in digital media the medium itself need not be digital as long as the content is, in Locative Media the medium itself might not be location-oriented, whereas the content is. Thus wireless and mobile media have re-introduced questions of space and place. Cyberspace and the so-called ‘real world’ converge into what Lev Manovich called ‘augmented reality’, and in this ‘augmented reality’ it does not matter where you are. On the other hand, the technology lends itself to surveillance and control, so in the end it might matter very much where you are. The network may in most instances be invisible, but can you remain out of sight?

Malcolm McCullough (Associate Professor, University of Michigan) opened his lecture on ‘urban inscriptions’ by saying Holland feels as homely to him as Michigan. As a citizen of the mobile city, Malcolm belongs to multiple places and communities. Our daily life is increasingly managed in mediated ways, and Locative Media let us combine these mediations with organizations in space; that in turn combines many senses of the word ‘architecture.’ However, Malcolm added, Locative Media are not as new as the hype might make you believe: “layers of information have long been part of urban density, and their applications are not just in wayfinding.” Malcolm's talk touched upon four main themes in relation to urban markup: history, dérive, advertising and ambiance.

Inscriptions give cities more than an aesthetic character; they also set cities apart from one another. In fact, if every city were the same, all would be boring. It takes great cities to make traveling from point A to B an exciting experience instead of a tedious trip. “Whether as grand expressions carved in stone facades, mundane signage in the streets, or the various props used by communities of practice, an information layer has shaped urban experience.” Now that layer intensifies. Much as electrification did for power infrastructure a century before, pervasive computing brings mobility, precision, personalization, and embedding to urban annotation. At street level, participants build up an invisible information infrastructure, something that could be referred to as urban computing. At the same time the built environment is shaped as a media platform: corporate media skyscrapers rise to function as landmarks in the city. The KPN building, for instance, is a familiar sight in Rotterdam that functions as an orientation point as well as an attraction.

Dérive is a notion used by Guy Debord in an attempt to convince readers to revisit the way they look at urban spaces. The concept means to aimlessly walk, or drift, through the city streets, guided by the momentum and the space itself. The basic premise of Debord's theory of dérive is that people are trapped in the practices of everyday life; by exploring the city guided by their emotions, they can break with their daily route, routine and enclosed space. Cities in fact are designed to direct and control their publics. They are complex structures in which movement and mobility are managed by the plan: road signs tell one where to go at what speed, where not to go between what times, when to stop and when to continue. The architecture, too, controls the flow of people through the way certain areas, streets, or buildings resonate with states of mind, inclinations, and desires. Debord argues that people should explore their environment without preconceptions, in order to create a better understanding of their own nature; as one becomes aware of one's location, one can value and comprehend one's existence. The idea is that people build forth from these insights and seek out reasons for movement other than those for which an environment was designed. Bringing an inverted angle to the world can make people assign new meanings to familiar places, produce new forms of social interaction and make public space a place where one stops to look.

Locative Media technology and artistic practices may assist publics to gaze in new ways and offer possibilities for social interaction. However, Malcolm noted, the city is increasingly getting polluted with advertising. Contemporary cities are taking the shape of a spectacle as public spaces are bombarded and overloaded with images, messages, art, signs, texts and ads. Media facades are erected everywhere, and our streets – the public stage of political movements, theater, playing children and social contact – are increasingly being virtualized with electronic screens and projections, taking away the public function of open space. Over the last decades our public space has gradually been privatized: streets, squares and parks are more and more covered with brands and logos; public domains such as schools, universities, and libraries are ever more dependent on corporate sponsoring and are turning into shopping-mall variants; public transport such as buses and trains is equally being privatized and transformed into mobile billboards. GPS-enabled technologies might continue this trend.

GPS-enabled wireless devices, such as one's cell phone, currently allow personalized data to be sent to the user in the form of Location Based Services. When a personal profile – set up by the user via a survey, or based on the user's history – is coupled to a pool of profiles supplied by other users, statistical algorithms can suggest further likes and dislikes based on the similarity of one's profile to the other profiles in the pool. Consequently the user may be sent information when he or she is in close proximity to a product or service that accords with the user's taste. Yet the question remains: “how much information is pollution?” In fact, Malcolm adds, broadcasting ads via radio was controversial in the 1920s; nowadays there are speakers blasting commercials in public spaces. Where is this leading? How much more?
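The matching described here is, at bottom, collaborative filtering. A minimal sketch of the idea – all profiles, categories and ratings below are invented, not taken from any actual Location Based Service – compares a user's taste profile to the pool and suggests categories that similar users rate highly:

```python
import math

# Invented taste profiles: user -> {category: rating on a 1-5 scale}.
profiles = {
    "alice": {"coffee": 5, "books": 4, "fashion": 1},
    "bob":   {"coffee": 4, "books": 5, "fashion": 2, "music": 5},
    "carol": {"coffee": 1, "books": 1, "fashion": 5},
}

def similarity(p, q):
    """Cosine similarity between two profiles over their shared categories."""
    shared = set(p) & set(q)
    if not shared:
        return 0.0
    dot = sum(p[c] * q[c] for c in shared)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm

def suggest(user, pool, threshold=0.7):
    """Categories rated highly (>= 4) by similar users but absent from `user`'s profile."""
    mine = pool[user]
    out = set()
    for other, theirs in pool.items():
        if other != user and similarity(mine, theirs) >= threshold:
            out |= {c for c, r in theirs.items() if r >= 4 and c not in mine}
    return out
```

A Location Based Service would then push a message only when the phone's GPS fix falls near a venue in a suggested category – which is exactly where “how much information is pollution?” starts to bite.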

In São Paulo there is a ban on outdoor advertising, including billboards, neon signs, and electronic panels. On January 1st 2007 this city of 11 million, overwhelmed by what the authorities call visual pollution, pressed the “delete all” button to offer its residents unimpeded views of their surroundings. Transforming the landscape goes hand in hand with a change in culture; contemporary culture seems based on marketing and ads. Yet I cannot stop thinking how dull, grey and maybe even unsafe Times Square would be without its flickering neon lights (or projections of yourself). On the other hand, being a tourist on Times Square is different from being a resident there. So shouldn't place-based media make public space more ambient? In ancient times waypoints and milestones were introduced to the city to make travel more convenient. Orthographic mapping came in the 15th century; fly-posting and city street signs in the 19th; and in the early 20th century travel guidebooks, electrification and street lighting made the city more secure, itinerant and mobile. Currently tracking and tracing technologies offer new challenges in urban markup: they may make everyday life more pleasant, yet they can also heighten annoyance by increasing information junk. Placed on a timeline, public advertising is a relatively new occurrence. The privatization of public space and the bombardment of one-way corporate messages have a disturbing effect on ambiance, and add even greater urgency to the belief that concentration of media ownership has successfully devalued the right to free speech by severing it from the right to be heard.

Why do artists choose tracking and tracing technologies in their works? How did they get started? Are there socio-political statements expressed through Locative Arts? Can Locative Arts be used to push technological progress?

While studying graphic art and mixed media in Utrecht, Esther Polak used a compass to help her navigate through the city; her bad sense of direction had always been a frustrating factor in her everyday practices. When GPS and navigation technology developed to a consumer-friendly, affordable level, Esther started mapping her routes. The first time she saw a route being visualized was after a sailing trip she took with friends on the Lauwersmeer. Not only did the map show the route of the boat, it also illustrated the shape of the lake, and analysis of their sailing technique even revealed the wind direction. Moreover, the map pointed out decisions they made during the trip, decisions that tell a story, for instance when and where they sailed to shore in order to have lunch.

The notion of maps telling a story makes me think of the map of the Russian campaign of 1812-1813 by Charles Joseph Minard. It shows the troops of the Napoleonic army on a two-dimensional map as they advanced to Moscow and retreated towards Poland again. The thickness of the line is proportional to the number of survivors at that moment, and the lower graphic shows the temperatures.

[Image: AmsterdamREALTIME trace map]

Esther decided to take exploring the visual and documentary possibilities of GPS one step further with her AmsterdamREALTIME project (2002, in cooperation with Waag Society and Jeroen Kee). Ten inhabitants of Amsterdam carried a GPS tracer with them for one week, and their routes through town were made visible on a projection screen in the exhibition space. The traces on screen form an alternative, highly personal map of the city. The maps visualize the everyday routes individual participants take, and in a way the routine they are stuck in, but above all they tell a story about the user. When one of the participants was given his personal map, which only had a few short lines – the man only walked around one corner – his reaction was ecstatic; he told Esther he was going to keep the map for his grandchildren.
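Technically, a trace map like this only requires projecting each logged latitude/longitude fix into the viewport and connecting successive fixes into a polyline. A rough sketch, assuming a simple equirectangular projection (adequate at city scale) and invented GPS fixes:

```python
# Map GPS fixes (lat, lon) to pixel coordinates inside a viewport,
# using an equirectangular projection that is good enough at city scale.
def project(track, width, height, bounds):
    """bounds = (min_lat, min_lon, max_lat, max_lon) of the mapped area."""
    min_lat, min_lon, max_lat, max_lon = bounds
    points = []
    for lat, lon in track:
        x = (lon - min_lon) / (max_lon - min_lon) * width
        # Screen y grows downwards while latitude grows upwards: flip the axis.
        y = (1 - (lat - min_lat) / (max_lat - min_lat)) * height
        points.append((round(x), round(y)))
    return points

# Invented fixes roughly around Amsterdam's centre.
track = [(52.370, 4.890), (52.372, 4.895), (52.375, 4.893)]
bounds = (52.360, 4.880, 52.380, 4.900)
polyline = project(track, 800, 600, bounds)
```

Drawing the resulting points as a connected line, one trace per participant, already yields the kind of personal city map the project exhibited.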

This, and the encouraging international interest, inspired Esther to develop the collaborative MILKproject, in which a European dairy transport chain was followed from the udder of the (Latvian) cow to the mouth of the (Dutch) consumer. All the people who played a role in this chain received, for a day, a GPS device that registered their movements. Again the results put forward fascinating stories. A Christian farmer, for example, drove the milk to a supplier via an ‘unintended’ detour. When he was confronted with the diversion, his father asked at the top of his voice: “What are you doing there?” Whether it was a woman he was meeting there remains unresolved; nonetheless the map again presented a storyline. Currently the project has found its way to Nigeria: NomadicMILK records and visualizes the routes of both nomadic herdsmen and regular dairy transport. NomadicMILK differs from MILKproject in that it makes use of a newly developed visualization tool: a small robot draws the tracks directly on the ground in lines of sand, allowing the tracks to be shown to the Nigerian participants along the road. Again the outcome offered remarkable anecdotes. One of the participants pointed out at what point in the track his wife had left him for another, making others wonder why he still walks that route.

The way we experience urban place and the built environment is defined to a large degree by the places we go for social encounters: the places we go to work, to shop, to learn and to be entertained. James Stewart (Institute for the Study of Science, Technology and Innovation, University of Edinburgh) observed an ongoing alteration in these places: meetings are now less constrained to offices, shops and fixed points of service, and can take place in a range of environments – in particular the rising number of branded places: coffee houses, transportation hubs, customized meeting places, and informal, locally branded spaces that attempt to offer a quality environment for all sorts of meetings. Branding dominates, and consequently corporations shape publics. Brands say something about one's identity, location and milieu. Meeting friends at Starbucks is different from meeting them at Wendy's: in this context Starbucks stands for a mature social environment, while the latter represents an adolescent (and perhaps even anti-social) one, as fast-food restaurants are designed – with sterile white lighting and uncomfortable chairs – to have people spend as little time as possible at their tables. Therefore, James argues, it matters where people are when they meet, branding is an important aspect of place, and technologies have a major role to play in mediating brand and meaningful human interaction. To test this hypothesis, James and his team are currently running an experiment in which volunteers log the locations they visit by taking a picture of the place; their Facebook friends are informed of where they are by text and WAP. You can try it by adding the ‘BrandedApp’ on Facebook. The experiment investigates Virtual Presence, the images people choose to represent where they go, and the linking of the ‘status’ concept in online social networking with ‘logging in’ to physical places.

Thomas Engel (The Saints, an Amsterdam-based company developing content for mobile devices) took the opportunity to plug a 3.5-minute NavBall commercial – one that includes product placement for Heineken. Thomas argues that NavBall, a team-based soccer game played simultaneously on 6 to 22 mobile telephones, is the logical result of trends in mobile content, outdoor gaming, and the upcoming European soccer championship (summer 2008). Whereas many of the discussed projects were about rediscovering the city, education, or establishing social encounters, NavBall seems primarily centered on fun. Players are focused on their screens more than on their physical surroundings; moreover, the physical surroundings act as an obstacle that players have to work around. Of course exciting sociable encounters and discovery of the city may occur, but these seem secondary to engagement in the game.

[Image: Games Atelier]

Games Atelier, on the other hand, aims to use the urban environment as inspiration, context, and a trigger for participants to co-design, actively play and share their results. In Games Atelier, Ronald Lenz (head of the Locative Media research program at Waag Society) explains, students from secondary schools are invited to create and play their own location-aware mobile games, centered on the theme of ‘citizenship’. Games Atelier evolved from four earlier projects developed by Waag Society (an Amsterdam-based medialab which researches how creative technologies can lead to social innovation in education, culture, healthcare and the public domain). It all started with the previously discussed AmsterdamREALTIME, which opened up new possibilities for playing with tracking and tracing technologies. Four years later, in 2005, the Locative Media department of Waag Society developed a mobile learning game named Frequency 1550, in which “students are transported to the medieval Amsterdam of 1550 via a medium that's familiar to this age group: the mobile phone”. Frequency 1550 took place again in June 2007. The game uses 3G cell phones and networks to let students compete in finding answers to questions about the old city of Amsterdam, as a history-class excursion and assignment. Frequency 1550 explores the social potential of location-aware devices. Inspired by the interplay of tracking technology, wireless media, human relationships, movement and identity, the project seeks to extend and re-appropriate the functions of locative technologies by exploring ways in which they can be socially constructive and facilitate new dynamics within everyday school life. Children are taught to look beyond city facades, interact socially and technically, and move through the city in new ways.
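The underlying mechanic of a location-aware game like this can be reduced to a geofence check: compare each incoming GPS fix against the game's points of interest and trigger an assignment when the player comes within range. A minimal sketch using the haversine distance – the landmarks, coordinates and radius below are illustrative, not taken from the actual game:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000  # mean earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def triggered(player_fix, pois, radius_m=50):
    """Return the names of points of interest within `radius_m` of the player."""
    lat, lon = player_fix
    return [name for name, (plat, plon) in pois.items()
            if haversine_m(lat, lon, plat, plon) <= radius_m]

# Illustrative landmarks near Dam Square, Amsterdam.
pois = {"Nieuwe Kerk": (52.3740, 4.8910), "Oude Kerk": (52.3742, 4.8980)}
```

Polling the phone's GPS and calling `triggered` on each fix is enough to pop up a question the moment a team walks onto the right square.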

The possibilities of Locative Media and studies in serious gaming have triggered interest from the academic community. In 2006 the University of Amsterdam and the Hogeschool van Amsterdam collaborated with Waag Society to develop a mobile package that allows students to use the urban environment as a source of knowledge, information, and study. The Mobile Learning Game Kit (MLGK) consists of a piece of mobile equipment allowing data to be collected. It displays topographical models of urban networks in such fields as culture, politics, and economics. The MLGK enables users to independently observe, analyze, and present data. The MLGK and its content are developed by users themselves and transferred to new users.

The concept of sharing and playing is advanced in Waag Society's recent project 7Scenes – a community platform for multi-user real-time gaming with mobile and location-based technology. Waag Society considers 7Scenes one of the first Web 3.0 application platforms; with Web 3.0 they point to a future where the Internet is truly connected to our physical world. So how does 7Scenes differ from (Google Maps) mash-ups? “Of course there are some similarities – we all create layers on top of the world. But 7Scenes tries to go beyond that, focusing more on creating scenario- and rule-based location interaction and broadcasting: visualizing scenes (live).”

These projects are not only about concentrating context in a coordinate point, nor merely about gaining greater understanding of place through the cell phone screen. Frequency 1550, the Mobile Learning Game Kit, 7Scenes and Games Atelier are not museum or digital touring guides; the focus is on opening up spaces of play through which context may be discovered. Moreover, local and otherwise hidden places may get noticed.


Laurence Claeys & Marc Gordon (senior researchers at Bell Labs-Lucent, Antwerp) advanced the broad research on using the touch paradigm to interact with things into different test cases concerning the relation between home and city contexts. Their projects aim to empower users to stage, participate in, engage with, and experience media on the ‘cross-reality web’. One of the SmartTouch projects involved a ‘do-it-yourself city experience kit’; another was an album in which city souvenirs may be collected, conserved, collaged and shared. In their research Laurence and Marc observed a long tail of creative initiatives in the city (the mass city being most dominant, with the sociable city in between). There are a lot of creative proposals, but attention goes mainly to commercial initiatives. So the problem is: who will fund people playing around the city?

Currently Locative Media projects receive large academic and commercial funding. Various theorists assert that the history of media has moved from analogue to digital to virtual and now to locative. Commercial ventures are interested in Locative Media projects for their experimentation value: the city is used as a testing ground for technological applications, usability and reliability. Christian Nold (a London-based artist and lecturer) observed that Locative Media no longer exists as one community but has splintered into a number of different directions: on the one hand Locative Media can refer to the technology (mobile devices connected to the Internet, security equipment and navigation technology); on the other hand it refers to outdoor gaming and art practices using portable location-aware devices. What has emerged, Christian states, is a strong focus on audience and the specificity of place. Christian believes Locative Media should focus on gathering, sharing, playing, visualizing, imagining, contextualizing, archiving and meeting educative challenges. Locative Media has a decentralizing value that allows social spaces to be opened up and communities to be empowered.

Locative Media also allows publics to experience familiar places from an alternative perspective. In Sensory Deprivation Mapping participants were deprived of sight and hearing and asked to roam the city in order to create a map based on other senses. The result is a map based on how fresh the air is or how windy certain areas are.

Locative Media has the quality of bringing the local, hidden, repressed and silent to the surface. People are constantly bombarded with signs (ads, road signs, neon lights, screens, facades) when traveling through urban space, making it difficult for certain places to stand out – places that might tell an interesting or important story. In one of Christian's projects, people in Stockport (UK) were asked to draw their emotional arousal in relation to their geographical location in the town. The resulting Emotion Map of Stockport had one striking detail: the Mersey River – which flows through the town center – was not represented. The participants had drawn the shops next to it; some were not even aware there was a river. Emotion Maps can point out suggestions for improvement. In the case of the Mersey, Christian suggests a whole range of cultural and physical interventions that could allow people to re-engage with the river, such as canoeing trips under the Merseyway, marking the course of the river in the street, or drilling spy-holes through the road surface to allow people to see and hear the Mersey.
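Hypothetically, the data behind such an emotion map could be aggregated by binning each annotation into a grid cell and averaging the arousal reported there; places nobody annotates (the Mersey, say) simply stay blank. A sketch with invented coordinates and arousal scores:

```python
from collections import defaultdict

def emotion_grid(annotations, cell_size=0.001):
    """Average arousal per grid cell; annotations are (lat, lon, arousal) tuples."""
    cells = defaultdict(list)
    for lat, lon, arousal in annotations:
        # Bin the fix into a coarse grid cell of roughly 100m at this latitude.
        key = (int(lat / cell_size), int(lon / cell_size))
        cells[key].append(arousal)
    return {key: sum(vals) / len(vals) for key, vals in cells.items()}

# Invented annotations on a 0-10 arousal scale: two in one cell, one elsewhere.
annotations = [
    (53.4083, -2.1494, 8),  # busy shopping street
    (53.4083, -2.1491, 6),  # same block, slightly calmer
    (53.4091, -2.1520, 2),  # quiet, unremarked spot
]
grid = emotion_grid(annotations)
```

Rendering the grid as a heatmap over the street plan immediately shows both the hotspots and the blank cells where a river can hide in plain sight.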

Maps can also be used to form communities and, in turn, to adjust them. We should move away from the fantasy of the mass; the public is you and your friends. Christian has observed a change in publics: communities are becoming more important and the local is being recognized. This alteration of the notion of the public will change the methods of advertising too. Advertising as we know it is designed to reach the mass, and this will disappear if the transition to community emphasis continues: people do not want to advertise to their friends, because most consider advertising an anti-social act.



Before a panel of experts took the conference stage, Anne Nigten (Lab Manager, V2_ Institute for the Unstable Media, Rotterdam) presented StalkShow, a project done in collaboration with media artists Karen Lancel and Hermen Maat. StalkShow deals with the threat of insecurity and isolation in public space, and invites the audience to give this threat a personal face and space: to show both its horror and its beauty. A performer carries an interactive wearable billboard, containing a laptop with a touch screen, through public spaces. People are invited to touch the screen and navigate through texts about the threat of insecurity and isolation. Connected to the backpack is a webcam that shows the ‘intruder's’ face on the screen (and, in 2006 at ArtInPro, Moscow, on an urban screen). The act of touching and being in contact with a stranger's personal space is for most people, Anne says, a creepy experience. There is an invisible barrier to overcome, and most ‘victims’ felt as if they were performing a forbidden act. What the project tries to display is that, like the technology used, the zone of intimacy is shifting to a more pervasive, intrusive one. The question becomes: who is stalking whom? The act of making contact with someone from the back, instead of face to face, enhances the whole idea of being watched; the touch brings it up close. After a series of lectures and presentations that mainly concerned the playful and innovative character of Locative Media, it was interesting to see a project that highlighted the augmentation of surveillance culture.

The panel discussion that followed touched upon various themes, but was dominated by the fear of advancing and promoting privacy-intrusive technologies. The panel contained four experts from different domains, allowing the topics to be discussed from diverse perspectives. Rob van Kranenburg (head of Public Domain, Waag Society) is involved in negotiability strategies of new technologies – predominantly ubicomp and RFID (radio frequency identification) – the relationship between the formal and informal in cultural and economic policy, and requirements for a sustainable cultural economy. Marc Schuilenburg (who teaches at the department of Criminology, VU University Amsterdam), co-author of “Mediapolis: Popular Culture and the City” (2007), is mainly focused on the risk society. Joris van Hoytema (BBVH Architects & Multimedia) worked on “Baas op Zuid”, an architecture game in which players can design their neighborhood and make decisions that directly influence their environment. Nicolas Nova (researcher at the Media and Design Lab, Swiss Federal Institute of Technology, EPFL) researches gaming experiences (location-based applications, ubiquitous computing) in mobile/urban contexts.

Rob observed an increase in agency, yet with it a lack of unpredictability and poetry. Performance artists (happenings, Yoko Ono, the situationists) play with the certainties and expectancies people find in everyday life. These certainties are increasingly being formed by technology. Artists should be encouraged to again wake people up from the routine of everyday life. Our life, Rob adds, is increasingly dependent on technology. Of course technology is able to take away tedious activities, yet the more we (the West) outsource our activities to technology, the duller we become. “In Delhi they can still fix their car! Just imagine if here in the West the hardware breaks down…”, Rob shouts. Our fixation on technology and outsourcing of activities is dangerous; it may lead to a militaristic society. Napoleon was able to conquer Europe on a horse – just imagine what he could have accomplished with a mobile phone.

On the other hand, Joris interrupts, technology can assist participatory decision-making at the local level. In relation to his own projects Joris believes the technology is used for good: it motivates people from the same neighborhood to get in contact with each other, and it gives them a chance to partake in urban planning (for instance through referenda allowing residents to voice their need for more green space, parking, youth facilities, etc.). Nicolas understands the positive aspects of location-based media, but underlines the difficulty of motivating people to use it. It is difficult to mobilize people; currently there is not enough awareness, and moreover, people do not know how to communicate with each other. “I don't know the email address of my own neighbor,” Joris says. “So why doesn't every place get its own email address?” Marc agrees that Locative Media can assist notions of collective intelligence (following Lévy) and smart mobs (following Rheingold). What this brings, Marc adds, is a break from autonomous creation: design and production become a collaborative process, every creation builds forth from previous ones, and moreover, spaces are increasingly being encapsulated, supporting community building, which adds to shared production. “There is no more genius,” Marc states, “the ‘senius’ is the new genius”.

Marc believes our attention should focus on citizenship, which, as the French word articulates, is bound to physical space. Our spaces are increasingly being encapsulated (museums, plazas) and guarded by mobile media. Urban places are more and more being privatized, and this adds to the increase in security techniques: office buildings, commercial zones, and advertising space are kept safe by mobile media (iris scans, mosquitos, code). In fact, what this creates are enclosed spaces. Hence, technology not only unites, it also divides! Therefore our concentration should be more political; continued privatization and commercialization bring forth a city under corporate control.

Nicolas agrees with the notion of gated communities, yet believes the focus should be on designers; they facilitate serendipity, assist the discovery of new people, and create awareness of local identities. Rob counters that the role of designers at present is limited to work that has already been done: their input and decisions are restricted to what color something should be or what visual shape something needs to have. Therefore designers should be included in the early stages of the process. Rob proposes taking participation a level down. “Currently the public is only given security cameras, mosquitos and other controlling technologies; if we don't stop this now we can never get back!” Rob adds loudly. “Designers should see how they can give trust,” Marc replies, “right now the lowest risk factor is ‘low risk’, there is no ‘no risk’. Again, the focus should be more political.” “There is no way back,” Rob states, “you can walk around the city covered in aluminum, but you can only keep that up for a few days.”

Architecture encloses and occupies; it is for the people, stands for values, and is etched in stone. Architecture covers privacy, security and property; it gives individuality and representation. There are optimistic and pessimistic ways of looking at technological advancement. Ambient, ubiquitous or locative media, like all new technological systems, tend to become hidden and disappear at precisely the moment that they become important: they weave themselves into the practices of everyday life. Infrastructure is embedded and transparent, has temporal or spatial reach or scope, is learned by its users, and is linked to conventional practices (e.g. electricity). From the cerebral discussions, the project presentations, and the questions from the audience it can be concluded that technological expansion and the ‘hybridization of space’ influence three domains: consumerism, militarism/security, and urban activism (art + activism). Technology is placed in the city with good intentions, but now the architecture is there for control. Database coupling and searches mean that we have become statistical persons. Political choices establish how algorithms determine networks; these can be used commercially, militarily, or for reform.

The topics moved beyond city architecture and touched upon urban culture and identity. Moreover, the conference questioned the interplay of physical and digital urban spheres in an age of mobile media. The organizers Martijn de Waal and Michiel de Lange put together an outstanding event. The conference was formatted in a clear and well-thought-through way: after a broad theoretical overview of both architecture and locative media (Ole Bouman and Malcolm McCullough), a series of practical projects set the scene in contemporary urban culture, after which a panel discussion analyzed current transitions from differing professional perspectives. The conference was wrapped up by Stephen Graham (who is discussed in more detail in this Masters of Media blogpost). In the end, I left the NAi a little worried and a whole lot wiser.

Online Video Aesthetics – Video Vortex Conference review

Much to my parents’ dislike, as a kid I frequently watched low-budget television programs based on audience-generated video fragments and unscripted pranks, including the popular America’s Funniest Home Videos and candid camera shows such as Candid Camera. The first thing I remember about these programs is the very bad quality of the (mostly 8mm) picture; the corny dubbing and the forced laughter also spring to mind. Pretty much all the videos broadcast on these shows worked according to a simple formula: within a 10-second clip an unexpected event interrupts normality, and if you’ve seen one you can guess with much certainty what will happen in the next. The popularity of these television shows has moved to the Internet, mainly YouTube: a user-generated platform containing a wide variety of home-made video clips, eyewitness reports, webcam diaries and candid camera pranks. The worldwide appeal of YouTube (currently with the exception of Turkey) has made the site exceptionally attractive for aspiring musicians. In previous posts on this blog (also here and here) I have written about the promotional function of YouTube and its role as conservator of artistic production. YouTube has become a medium and platform in itself for art works, and with it has opened the way for many marketers to exploit its function for advertising. Much has been written about copyright infringement with regard to YouTube and its capacity to provoke, harm and cause controversy. However, the aesthetics of these online videos have not received equal attention.

My first thought is to draw parallels to old media formats, such as the previously mentioned television shows. Yet again, what jumps to mind when picturing a YouTube video is the appalling picture quality. In the 80’s and 90’s user-generated video content was often distinguishable from professional film because of its inferior aesthetic value. With the advent of mass-produced cheap digital cameras and consumer-friendly editing software packages one would expect the barrier between amateur and professional to vanish. Yet YouTube seems to host a homogeneous style that mainly builds on eyewitness TV, candid camera formats and webcam diaries; moreover, the video quality is – just like its predecessors’ – second-rate. Mass-produced lenses and technological advancements have done nothing to increase visual appeal. A logical answer is that whilst recording and editing techniques have developed greatly, streaming, rendering and storage capabilities are still at an early stage of progression. YouTube converts uploaded videos into Flash video, a low-resolution format, thereby making the pictures look cheap and unattractive. Currently video streaming platforms such as Joox.net are experimenting with higher-resolution formats such as DivX-encoded films, which with their higher bitrate are considerably bigger in file size, making them more difficult to store and buffer (thus stream) at an average connection speed. However, with technological improvements in storage capacity, hard drives and flash disks have become incredibly cheap. Hopefully internet speeds, notably in developing countries, will increase too.
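The storage-versus-streaming trade-off sketched above comes down to simple arithmetic: a video's file size and the connection speed needed to stream it both scale with its bitrate. A minimal back-of-the-envelope sketch, using illustrative (assumed, not measured) bitrates for a low-resolution Flash-era clip and a DivX-encoded one:

```python
# Rough estimate of video file size and the sustained connection
# speed needed to stream without stalling. Bitrates are assumptions
# for illustration, not measurements of actual YouTube/Joox streams.

def stream_requirements(bitrate_kbps: int, duration_s: int):
    """Return (file size in MB, minimum sustained download speed in kbps)."""
    file_size_mb = bitrate_kbps * duration_s / 8 / 1024  # kbit -> kB -> MB
    return file_size_mb, float(bitrate_kbps)

# A three-minute clip at two hypothetical quality levels:
flv_size, flv_speed = stream_requirements(bitrate_kbps=330, duration_s=180)
divx_size, divx_speed = stream_requirements(bitrate_kbps=1500, duration_s=180)

print(f"Low-res clip:  {flv_size:.1f} MB, needs ~{flv_speed:.0f} kbps sustained")
print(f"High-res clip: {divx_size:.1f} MB, needs ~{divx_speed:.0f} kbps sustained")
```

The point of the sketch: roughly quadrupling the bitrate quadruples both the storage and the connection speed required, which is why cheap disks solve only half of the problem.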

On the other hand, video screens are becoming smaller. Physical screens are reduced in size as more and more media devices become portable. Web players, too, are smaller than the screens they are viewed on, mainly to compensate for the low resolution caused by the encoding and to maintain a comfortable buffering time for users. In the days when I was watching the above-mentioned television shows, music videos were a branding tool for musicians, and video artists were paid extensively for blockbuster-like clips. Hard to believe budgets were spent on a three-minute slick eye-catching visual waterfall. Mark Romanek’s 1995 video for Michael and Janet Jackson’s “Scream” is considered the most expensive ever, at an estimated $7 million. Nowadays the music industry is, as they proclaim themselves, in crisis from falling sales due to file sharing and loss of brand control. MTV, the one-time music channel pioneer with around-the-clock videos, is pretty much only broadcasting real-life soaps, pranks, candid camera shows and video diaries. The stage for musicians and video artists has shifted to YouTube, iTunes, and various other networks. Consequently labels have less to spend on elaborate videos like those made by Spike Jonze, Michel Gondry, Romanek and Fincher, making ambitious videos an exotic species facing extinction. The dinosaur era of videos has made way for videos that will mostly be seen in miniature on computers. The result, as the Associated Press puts it, has been a major shift in the art form, as artists increasingly embrace the YouTube aesthetic with cheap, stripped-down, low-production videos. Directors have to adapt to the smaller-sized medium.
“The new aesthetic is that it’s very low-budget, lo-fi, very do-it-yourself, not at all dedicated to the old style of music video which was always bigger and louder and more explosions and more money,” says Saul Austerlitz, author of Money for Nothing: A History of the Music Video from the Beatles to the White Stripes. “This is more a punk-rock aesthetic,” he adds. “It’s very exciting.” So now that music videos increasingly resemble video art, can we define how artistic practices influence the look of online footage?

Video Vortex - Photo by Anne Helmond

Andreas Treske - photo by Anne Helmond

During Video Vortex, organized by the Amsterdam-based Institute of Network Cultures, filmmaker Andreas Treske talked about the alteration in viewing conditions – and therewith a change in viewing experience – caused by the composition, aesthetics, and cinematic rules and practices of film. As screens become smaller, artists focus more on close-ups to bring the viewer closer; consequently there is an emphasis on gestures while details are blurred out. The language of cinema is applicable in a reduced form. The iPhone offers viewing possibilities for full-scale films that are (still) edited for cinemas; however, engagement is lost as small-screened devices are particularly used in transit. Sergio Leone’s Once Upon a Time in the West (coincidentally my favorite film) could to some extent work quite well on a portable device. Leone made heavy use of close-ups; the emphasis on gestures, such as eye movement, and the accent on detail allowed the audience to be drawn closer, engaging in a tense play of focus, blurring out the background and stressing the root of what is at stake. However, portable devices such as the iPhone are used ubiquitously, which is different from cinema, as cinema is framed according to a set place and time. This makes watching 175 minutes of Once Upon a Time in the West very difficult: engagement with the film is lost as one will also engage with the device, the surroundings, and the physical and social activity one performs (paradoxically this urban – and to a lesser extent domestic – surrounding is increasingly being filled with bigger screens). As the attention span is short, the composition of online videos is compact and to the point. Hence with the compression of images/screen there is a compression of time and place.

Stefaan Decostere - Photo by Anne Helmond

It is therefore inevitable that we study new methods of impact and discover new ways of relating them to video, says documentary maker Stefaan Decostere. In fact we should be thinking about an academic field of Impactology. Impact differs from effect and affect in that it is measurable and generates further impact. The OK Go video with four guys on treadmills, played millions of times on YouTube, had a huge impact on its audience, in the sense that it was refreshing and inspired audiences to make and share their own videos. Many artists and directors are now creating videos knowing they’ll have to compete for eyeballs on YouTube. OK Go’s famous treadmill-choreographed video for “Here It Goes Again” was perfectly suited for viral distribution, but the power pop band is far from alone in its reconsidered methods. Wouter Hamel’s Don’t Ask video consisted of a compilation of YouTube videos in which his song was lip-synced. The Decemberists and Modest Mouse both asked fans to fill in the background to a video shot in front of a green screen. Last year, Death Cab for Cutie sponsored professional videos for each of the 11 songs on their album “Plans.” For his album “The Information,” Beck personally created a video for every track. The silly, lo-fi videos, which ranged from puppet versions of the band to someone dancing in a bear mask and poncho, were posted on YouTube, and many copies of the album included a bonus DVD (source: AP). The bombardment of images keeps artists constantly busy finding new methods of impact. Can we delay impact? Route around it? Stop it? Television no longer has that impact. Not only have audiences become “too smart” for the tricks played on them; television is associated with scripted formats, and this no longer appeals to audiences who seem more interested in individuals, real-life characters, and unscripted spontaneity. This might explain MTV’s shift to around-the-clock real-life soaps, pranks and diaries.
Unscripted videos are about the individuals and less about the author/director, making them suited to the individualistic mentality present in contemporary western society. The “i” in iPhone and the “You” in YouTube pretty much stand for this take on individuality and diversity. However, one might ask, how come all this focus on diversity produces a pool of homogeneity: standard formats of amateur repeats?

Helen Kambouri - Photo by Anne Helmond

Helen Kambouri, researcher at the Kekmokop Institute in Athens, argues there is a tendency for Greek videos to contain a repetition of semantics, in which bodily movements persistently recur. To exemplify her observation Kambouri turns to a violent video on YouTube that gained celebrity status in Greece. The video shows a local police station in Athens where two (supposedly) illegal Albanian immigrants are told to repeatedly slap each other, which they do. The Greek police officer giving the orders, who later stated that he acted out of boredom, is also the director of the video. The violent video was recorded on a mobile phone and circulated via MMS, after which a Turkish blogger posted the clip on YouTube. The film has received, mainly because of its violent premise, much attention and is amongst the top YouTube hits in Greece. Kambouri says the (effortless) repetition on new media channels such as the mobile phone and YouTube differs from the repetition of the television economy, which is based on transcription. There is no linear manner of storytelling but a repetition of semantics; similarly, a video of a prostitute shows a woman making the same hip movement over and over again. Complex narrative has made way for a simplistic emphasis on the premise; the Greek police video is an individualistic project, in which a violent act is distributed publicly for the purpose of confirming the role of its maker as that of a director in charge of what is being recorded.

There are, as mentioned earlier, parallels one could draw between online videos and user-generated content (e.g. home videos) on television, but one could go back even further: Charlie Chaplin’s slapstick. Slapstick segments are often short, of bad quality, and repetitious. Chaplin’s films are still being aired and repeated through various media (cinema, television, VHS, DVD, online networks). Many of the slapstick films were directed and acted out by Chaplin: an individualistic project. The narrative of slapstick films is a repetition of semantics, instead of a linear story. However, the focus in contemporary videos seems to be on the unscripted nature of the sequence of events. The bigger impact of online videos on their viewers can be related to their close relation with reality; the authentic is more shocking than fiction. What YouTube and sites alike demonstrate is that authenticity is best portrayed when it is aesthetically amateur, gonzo, lo-fi, raw, rock and roll.

Patricia Pisters - Photo by Anne Helmond

Patricia Pisters, Professor of Media Studies at the University of Amsterdam, commented on the statements concerning television’s loss of impact and YouTube’s success by referring to Ayaan Hirsi Ali’s controversial film Submission. The film was screened only once on television and caused much commotion because of its huge impact; however, when it was repeated via YouTube there was hardly any fuss, it had no impact at all. Hopefully this will also be the case when Geert Wilders puts his movie about the Koran on YouTube, as he intends to do.

Geert Wilders - Photo taken from NOS

Shaping Space: Locative Media in a City under Corporate Control


Download essay in PDF here.



We become what we behold. We shape our tools and then our tools shape us.

– Marshall McLuhan, Understanding Media (1964)

Let me start with a story about a joke. In 1996 Dino Ignacio, a San Francisco artist, started the ‘Bert is Evil’ website, on which he posted photographs confirming that Sesame Street’s Bert is evil.[i] The images showed the muppet next to notorious people and in famous historical scenes. The photographs were meant as a joke; the muppet was inserted into actual photographs using Photoshop. After a while Ignacio stopped producing new pictures; however, a community of ‘Bert is Evil’ enthusiasts had already emerged which continued posting new material from all over the world on several mirror sites, including an image of Bert interacting with terrorist leader Osama Bin Laden. Meanwhile in Bangladesh, Mostafa Kamal – a production manager at Azad Products – picked up an image of Bert after scanning the web for Bin Laden pictures, which were to be printed on anti-American signs, posters and T-shirts. The company printed 2000 posters; “we did not give the pictures a second look or realize what they signified until you pointed it out to us,” Kamal would later explain to the Associated Press. CNN reporters recorded the unlikely image of a mob of angry Pakistanis marching through the streets waving signs depicting Bert and Bin Laden. American public television executives spotted the CNN footage and threatened to take legal action, saying the people responsible should be ashamed of themselves; “we are exploring all legal options to stop this abuse and any similar abuses in the future.”[ii]


The story aptly illustrates how de-territorializing technologies assist distribution, mobility, reproduction, and community forming. Moreover, the story is a fitting example of conflicting artistic and cultural perspectives, textual interpretation, and institutional authority. Consumer-friendly software packages such as Photoshop, together with the Internet, allow anyone with basic practical and creative skills to become a producer, making their creations available to anyone with access to the Internet. In the last decade cyberspace has been cluttered with recycled images, texts and data; there seem to be few restrictions with regard to filtering, editing and authority. Anyone with a two-bit opinion, a photograph or a mere rumor can (together with mistakes in grammar, spelling, and sources) share their content, leading to an overflow of untrustworthy information and a decline in editorial decision-making as hierarchical structures diminish. This new form of participation in media may assist grassroots democracy, as it allows users to contribute directly to the text; at the same time it makes participation an opponent of traditional institutions, as it negatively affects market control, promotes an overload of diversity, allows for (negative) feedback, and damages a century-old copyright system. The harsh remarks by the public television executives regarding legal action against those responsible for the uncontrolled act of doctoring copyrighted material, and its after-effects, exemplify how this new media form of participatory culture conflicts with the old media form of institutional authority. In the last decade the cultural industry endured many changes at all levels and, accordingly, so did society. Let’s start with the city.

Contemporary cities are taking the shape of a spectacle as public spaces are bombarded and overloaded with images, messages, art, signs, texts and ads. The street – the public stage of political movements, theater, playing children and social contact – is increasingly becoming virtualized with electronic screens and projections, taking away the public function of open space: “public functions become blurred by the flow of light and images drenching us in a fetish of alienating desires as we follow our necessary route through the city, from A to B.”[iii] Over the last decades our public space has gradually been privatized: streets, squares and parks are more and more covered with brands and logos; public domains such as schools, universities, and libraries are ever more dependent on corporate sponsoring and are turning into shopping-mall variants; public transport such as buses and trains is equally being privatized and transformed into mobile billboards. Furthermore, the city is converting into a pool of diversities; similar to the Internet, the city is storing up an immense variety of cultural expressions and products. The uniform and the traditional costume have made room for an assortment of multiplicity; being different in order to belong seems to be a fitting fashionable statement. However, the range of cultural expressions goes hand in hand with an overflow of dissimilar opinions, products and meanings. Not only does it become increasingly difficult to find your way; the devaluation of hierarchical control, both on the Internet and in the city, makes the whole superficial and lacking in depth. Of course there are places and sites that are reliable and insightful, yet they are more and more swallowed up by the homogenizing machine of shallowness.
Whilst one might argue that diversity and egalitarian contribution lead to collective intelligence and the collapse of the cultural industry monopoly, marketing experts have already discovered that diversity is the defining issue for Generation X and that, by incorporating an emphasis on diversity into their brands, they can enhance their market shares.[iv] Diversity marketing makes global expansion less costly; “rather than creating different advertising campaigns for different markets, campaigns could sell diversity itself, to all markets at once.”[v]

Discussion about the Spinplant

By Laura van der Vlies.

On October 1st Geert Lovink posted a previous blogpost about the spinplant on the Nettime mailing list. This was the beginning of what turned out to be a sprawling discussion. This is a summary of the original post, the discussion and the remaining questions.

Wikipedia’s ‘alertness’ was tested by posting an article about a fantasy plant, the spinplant. The article was removed in less than two hours, which means that the system works pretty well when it comes to removing fake articles. But the article was removed because the Wikipedia editor in question couldn’t find anything about the spinplant using Google. The question posed was whether Google is being given too much authority. Jos Horikx corrected the question: it should be whether a search for hits via Google is enough to judge the truth of an article on Wikipedia. He argued that an article on Wikipedia should, as a rule, be supported by its own sources in the first place. Patrice Riemens agreed with him, encouraging the use of Wikipedia and Google as useful instruments, but not seeing them as solid foundations for knowledge.

More reactions inspired Hendrik-Jan Grievink to write down his take on knowledge and its increasing fragmentation through the use of Wikipedia and Google. He also mentions the distinction between a literary culture and a culture of images. Grievink says that in a culture shaped by images we have to search for knowledge, whereas in a culture dominated by the written word one must ask for knowledge. Andreas Jacobs reacted to this statement, saying that knowledge and images are not comparable. He argued that knowledge no longer gets ‘stored’ in human memory: active knowledge is lost due to the increased use of images as a collection of knowledge. But Grievink responds that he does not equate knowledge and image; he only points at the fact that images are more and more taken as bearers of knowledge.

Theo Ploeg wonders whether Jacobs sees a difference between contact with reality via language on the one hand and image on the other. After this he continues with the connection between the existence of things and persons and their presence on the www.

As a reaction to this whole discussion, the first real spinplant was born on the web. Elout made a spinplant in Sculptypaint, an open-source 3D-model creation tool. These models can be imported into, for example, Second Life.

And Grievink reacts with a dictionary description of the spinplant [translated from Dutch]:

spin·plant (the ~)
1 fictional plant species, discovered by Laura van der Vlies
2 neologism still awaiting indexing by Google

Now just water it and wait until the word “spinplant” grows into a fully-fledged internet meme; perhaps in time it will then belong to the Google lexicon. And then things will work out for the spinplant on Wikipedia too! So, by a detour, Laura still gets a satisfying result from her experiment, with which she can start her next research project. That does require some cooperation from us: a little blog post here, a little study there, a lecture this way, a video that way. That’s how we do it: knowledge production in the mediasphere. Incidentally, once we reach this status with this virtual piece of flora, the spinplant will of course no longer be a spinplant, but an officially recognized word of the Dutch language. Who was Van Dale again? That will take a while; until then the spinplant simply remains a spinplant!

The spinplant is dead, long live the spinplant!

The discussion continued when one of the Masters of Media contributors, Michael Stevenson, reacted with a blogpost titled ‘Making the spinplant relevant: more from Friedrich Nietzsche‘. With this post he tried, with some help from Nietzsche, to change the terms of the debate, (jokingly?) asking whether truth is really ‘prior’ to relevance at all. He asked readers to help make the spinplant more relevant by linking to the non-existent article on Wikipedia [http://nl.wikipedia.org/Spinplant] and to two pages that were made to make the spinplant visible on the World Wide Web.

This post brought up more discussion, but also some confusion. Readers of the blogpost thought the aim was to put the spinplant back on Wikipedia. But that isn’t the case; it is only to show the relationship between web-truth and relevance.

In any case, the story about the spinplant is not over yet.

Come Out and Play Festival

On September 28th and 29th Amsterdam will be transformed into a huge playground.

A wide variety of big urban games will take place in the city. You may choose to take virtual penalties with your cell phone, play Snake live in the Westerpark, or guard a VIP against snipers using a water pistol.

New Network Theory – Review

For three consecutive sunny days last week, the Faculty of Humanities of the University of Amsterdam hosted the New Network Theory conference. This four-party collaborative initiative – consisting of the Amsterdam School for Cultural Analysis, the Institute of Network Cultures, the University of Amsterdam, and the Hogeschool van Amsterdam – set out to exploit the potential of formulating a post-Castellsian network theory which “takes technical media seriously”. Social software and developments in technical media have influenced the Web; now is the time to document what exactly changed and to formulate an up-to-date paradigm.

As I entered UvA grounds my eyes fell upon the two polyester banners waving above the entrance. Besides the name of the conference, the banner contained a well-designed background which seemed to resemble a gearing mechanism. In Dutch the word ‘raderwerk’ could describe the red, black and green mechanical wheels. ‘Raderwerk’ can be interpreted as “samengesteld geheel van menselijke organisatie”, which translates into English as “compiled sum total of human organization”.

More artwork – a selection from the Places & Spaces: mapping science exhibition – decorated the conference hall.

After the essential coffee in the lobby, Geert Lovink, Richard Rogers and Jan Simons officially opened a discussion that would take three days and two dinners. After some formalities Jan told an amusing, yet serious, anecdote of how a mere decade and a half ago the university was linked to the outside world through one computer and a dial-up modem. The connected computer – set up by a former student – was the only place where teachers and staff members could email and browse the web; its location consequently developed into a sort of social gathering spot, and perhaps in a sense a physical network surrounded this virtual interest. Thomas Elsaesser spiced up the faculty with laser discs and multimedia technologies; a department was born. After several name changes, Media Studies nowadays forms the largest part of the Faculty of Humanities, and New Media – next to Journalism, Television Studies and Film Studies – is an independent research area.

The opening session involved à la mode themes – Google, security, and imagined networks – yet the discussions were cerebral and insightful. Siva Vaidhyanathan – introduced by Richard as “you might know him from the Demetri Martin sketch shown on the Daily Show” – put forward a bold question: “what does Google give us besides ads in small fonts?” Google is googlizing everything; it is one company that directly influences culture, commerce and community. Paradoxically, whilst copying websites (in order to index them), making editorial decisions in search results, recording user traffic, and storing generated user profiles, Google offers the illusion of democracy, precision and objectivity. We pay for Google with our data and by allowing Google to tailor personalized advertisements. In fact we are enthusiastically and voluntarily willing – and this is different from a disciplinary society, as suggested by Michel Foucault – to provide personal data, as well as to accept privacy intrusion. Siva made reference to disciplinary power as exemplified by Bentham’s Panopticon, a building that shows how individuals can be supervised and controlled efficiently. Institutions modeled on the panopticon have spread throughout society. Foucault exemplified this with the prison, which develops from this idea of discipline as it aims both to deprive the individual of his freedom and to reform him. The prison is part of a network of power that spreads throughout society, and which is controlled by the rules of strategy alone. Calls for its abolition fail to recognize the depth at which it is embedded in modern society, or its real function. However, Siva calls for a renewed approach to understanding this kind of consumer surveillance, one that pushes aside the worn-out model of the panopticon.

This brings me to an idea loosely sketched out by Gilles Deleuze towards the end of his life, which suggests that in the twentieth century we moved from a disciplinary society to a more invasive society of control (Deleuze, 1995). This does not mean that disciplinary institutions have disappeared, but that their authority is no longer confined to particular institutions. Instead, power is becoming integrated into every aspect of social life by increasingly interconnected networks.

In Control and Freedom: Power and Paranoia in the Age of Fiber Optics, Wendy Chun draws on the theories of Deleuze and Foucault and argues that the relationship between control and freedom in networked contact is experienced and negotiated through sexuality and race. In her book Wendy makes use of an elaborate analysis of phenomena such as webcams and face-recognition technology to explore the current political and technological coupling of freedom with control.

However, Wendy’s presentation did not concern power and paranoia in the age of fiber optics; nonetheless, freedom and control were fundamental concerns also in her discussion of imagined networks, Facebook, and the free software movement versus the open source movement. Drawing on Benedict Anderson’s analysis of the nation as an “imagined community,” Wendy argues that we are witnessing the emergence of imagined groupings – imagined networks – that are both less and more than communities or nations. In doing so, she does not argue for the distributed network as the model for our social interactions, bureaucratic organizations, or even our technologies, but rather asks: what needs to be in place for us to understand ourselves and our technologies as networked? How do social and technological abstractions coincide, diverge and inform each other? And how are these abstractions experienced, sensed, felt?

Wendy’s discussion touched upon three focus areas: firstly the privatizations of openness, secondly the seductions and limitations of mapping, and thirdly the temporality of networks – referring to speed versus the enduring ephemeral. Wendy describes networks as the structure and content of society and ultimately that of culture too. Network culture can be defined as a “diagrammatic representation of interconnected events, processes etc. used in the planning of complex projects or sequences”.

When looking at social networks such as the immensely popular Facebook (when people in the audience with a Facebook account were asked to raise their hands, five – maybe six – people responded), contemporary network culture can be fleshed out. Facebook is a gated community; it is a network consisting of one’s friends and friends of friends. Therefore Facebook is not a public space; there are private enclosures in its public spaces. Anonymity is not a big factor on Facebook and social networks alike; gender, race, religion, sexual preference and status make themselves present. The much-discussed utopian myth of cyberspace as a virtual place liberated from actual identity seems to shift to one which reflects the physical world. Obviously this technology and its content lend themselves to matters such as public scrutiny and surveillance. On the other hand, the network content is restricted to one’s imagined community. Furthermore, users seem to find ‘belonging’ more important than the content itself.
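The gated-community idea – content visible only to one's friends and friends of friends – can be made concrete as a tiny graph computation. The names and friendship graph below are hypothetical, purely for illustration; this is a sketch of the enclosure logic, not of Facebook's actual implementation:

```python
# Toy model of a "gated community": a profile is visible only to its
# owner, the owner's friends, and friends of those friends.
# The graph and names are invented for illustration.

friends = {
    "ana": {"ben", "cem"},
    "ben": {"ana", "dia"},
    "cem": {"ana"},
    "dia": {"ben"},
    "eva": set(),            # a node outside the enclosure
}

def can_see(viewer: str, owner: str) -> bool:
    """True if viewer is the owner, a friend, or a friend of a friend."""
    if viewer == owner:
        return True
    circle = set(friends[owner])        # direct friends
    for f in friends[owner]:
        circle |= friends[f]            # plus friends of friends
    return viewer in circle

print(can_see("dia", "ana"))   # friend of a friend
print(can_see("eva", "ana"))   # outside the network
```

The point of the toy: visibility is a property of graph distance, so the "public space" each user experiences is a different private enclosure centered on their own node.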

The underlying nature of the network, in the end, is code. Consequently, to know the code is to know the building blocks of “the new agora” and how this space is connected to its physical counterpart. Open space and public space are not the same; consequently the free software movement and the open source movement are not synonymous. One might ask, however, what is the difference? Wendy, who is currently involved in an initiative called “open source imagined networks”, asserts the difference lies in the network that is imagined. The new agora is not a place, it is an assembly. The best way to view the network is to reject the map. So in the end it is not software which frames us; the public is a network and we are all nodes in it, while social communities are mere private enclosures. A valid question would then be: how do we think beyond the map, or how do we think beyond the link? Which brings me to Warren Sack, who discussed the transition from network publics to object-oriented democracies.

Warren started his talk with an example of how publics get framed: public opinion was framed within ten minutes when Bush put forward the act of war. Framing is an important subject, and when analyzed from a historical perspective it provides a greater understanding of ourselves as nodes in a network. In order to analyze the public as a network, Warren raises the question “how has the public been framed?” and proposes a new definition: an ‘object-oriented democratic public’. First, Warren describes the public metaphorically; he then defines the public, building on work by Dewey (1927), who characterized the public as a state; and finally he suggests how new technologies of representation can facilitate more democratic publics with richer measures, modes of visualization, and structures of participation.

Warren depicts the public metaphorically: first as a physical system or mass; second as a thermodynamic system; third as an ecology, in the sense that publics struggle for territory and cause interactions; fourth as an organism – similar to McLuhan’s notion of the railroad as a new animal brought forth by new technology; and lastly as a network.

Warren then turns to Noortje Marres (2005), who has sought to address current representational shortcomings by offering a new metaphor: “object-oriented democratic politics”. The new metaphor is an effort to engage not only the subjects of politics – the people that constitute a public – but also the objects of concern or contention, such as the issues that motivate a public’s organization. What would the software of an object-oriented democratic public look like? Warren names two examples: TXTmob, which coordinates movement via text messaging during demonstrations, and Metavid, which lets viewers comment on political debates at any moment.

Object-oriented programming, Warren continues, was invented more than 40 years ago and incorporates means for describing both structures and processes. The definition of an “object” incorporates both a description of its structure and a definition of associated processes (usually called “methods” or “handlers”) that might be used to query or change the structure. For example, graphical computer interfaces are usually programmed using object-oriented methods. The interface’s structures—its buttons, windows, menus, and their arrangement—are defined as objects, and then handlers are added to the objects to define what should happen if, for example, a user pushes a button or clicks on a menu item. “Object-oriented publics” improves upon the network metaphor insofar as it both incorporates a means for describing processes—the dynamics and changes that can occur over time—and a framework for retaining distinctions between opposing entities. It enables us to ask a new set of questions about publics and their actions. Soon, perhaps, it will be quite dated to imagine oneself as a node in a social network of Friendsters. Maybe, following the language of computer science, we will soon understand ourselves as “object handlers.”
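The button-and-handler pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not code from any real GUI toolkit or from Warren's talk: the `Button` class, its method names, and the "Vote" label are all invented for the example. The point is simply that one object couples a structure (the button's state) with processes (handlers) that act on it.

```python
class Button:
    """A toy GUI-style object: structure plus attached processes."""

    def __init__(self, label):
        self.label = label      # structure: the button's state
        self.handlers = []      # processes: handlers to run on a push

    def on_click(self, handler):
        """Attach a handler that will run when the button is pushed."""
        self.handlers.append(handler)

    def push(self):
        """Simulate a user pushing the button: invoke every handler."""
        for handler in self.handlers:
            handler(self)


clicks = []
button = Button("Vote")
button.on_click(lambda b: clicks.append(b.label))
button.push()
print(clicks)  # ['Vote']
```

In Warren's terms, the handlers are where the "object-oriented public" metaphor gets its purchase: the object does not just sit in a network of links, it carries definitions of what can happen to it over time.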

The second day of the conference was formatted differently; instead of holding the sessions in one hall, the talks were divided by theme and spread across the venue. The conference booklet – containing a timetable, speaker backgrounds, and room numbers – turned into a sort of menu card, allowing you to shop for knowledge. I ordered a plenary session on Locative Media.

Nowadays everything in the media world gets tracked, tagged and mapped. Cell phones become location-aware, computer games move outside, the web is tagged with geospatial information, and geobrowsers like Google Earth are thought of as an entirely new genre of media. Spatial representations have been inflected by electronic technologies (radar, sonar, GPS, WLAN, Bluetooth, RFID etc.) traditionally used in mapping, navigation, wayfinding, or location and proximity sensing. We are seeing the rise of a new generation that is “location-aware”. This generation is becoming familiar with the fact that wherever we are on the planet corresponds with a latitude/longitude coordinate.

The term “Locative Media”, coined in 2003 by Karlis Kalnins, seems an appropriate label for digital media applied to real places: communication media bound to a location and thus triggering real social interactions. Locative Media works on locations, and yet many of its applications are still location-independent in a technical sense. Just as in digital media the medium itself need not be digital as long as the content is, in Locative Media the medium itself might not be location-oriented whereas the content is. Wireless and mobile media have thus re-introduced questions of space and place. Cyberspace and the so-called ‘real world’ converge into what Lev Manovich called ‘augmented reality’, and in this ‘augmented reality’ it does not matter where you are. On the other hand, the technology lends itself to surveillance and control, so in the end it might very well matter where you are. The network may in most instances be invisible, but can you remain out of sight?

Adrian Mackenzie began the session on Locative Media with his paper on “wirelessness and radical network empiricism”. In some ways, Adrian states, wireless networks are very unpromising candidates for network theory. In contrast to the high-profile debates around social software and organized networks, they are quite banal and often relatively invisible. They are certainly not the main hotspot of practices or changes associated with new media or technological cultures. However, wireless networks, despite being mundane, persistently associate themselves with the centre of media change in very diverse zones of the social. These include the areas of convergence between different infrastructures and places (telephone, transport, domestic, commercial, etc.), the intersections between ICT and development (ICT4D), the sheer proliferation of mobile gadgets, and, last but not least, the question of networks and the body – in this case in the form of altered bodily comportments and fears around radiation. Across all of these areas, wireless networks merit interest because they epitomize very rapid transitions.

In conclusion, Adrian talked about radical network empiricism in order to make sense of the dynamism of wireless networks. It is not a complete network theory; instead his discussion intended to say something about the kind of collective energies that animate wireless networks. What comes of putting together the antennae-focused algorithmic flows, the overflows of the market-citizen, and the inconceivably rapid transitions associated with development? Wirelessness does not belong to any individual subject. It includes things, feelings, and images. It does not form a proper object of analysis, at least not in the normal sense.

This is where William James’s radical empiricism comes in: ‘relations between experiences’ must be counted as just as real as the things experienced. This seems eminently well suited to thinking about networks. The key support James’s radical empiricism offers to analysis concerns how to give primacy to relations without presuming too much about what comes into relation, and without saying too much about who or what experiences it. If we say that experience is a member of diverse processes, then it immediately follows that our experience is not easily reducible to us, to the forms of agency and identity we can think. Instead, we have to think of that experience as a situation, an ongoing temporal-spatial process that overflows, that streams, that ‘falls forward’.

Locative Media opens up new possibilities for users to engage socially and co-create texts. Currently there are perhaps as many maps as there are mapmakers, cell phones facilitate new forms of broadcasting and file sharing, and Bluetooth technology allows people to produce new-fangled interactions. Sophia Drakopoulou built on these notions and discussed the existence of a virtual space of data sharing and exchange that has the potential to be used as an environment for individual broadcast. Central to her research are two recent phenomena that have appeared in the media: the worldwide hoax of ‘Toothing’ and the London phenomenon of ‘Happy Slapping’. ‘Happy Slapping’, much inflamed by the media, is a violent juvenile act in which unsuspecting people are smacked while being filmed with a mobile phone camera; the video is then widely distributed among teens. The ‘Toothing’ phenomenon was likewise widely reported: The Guardian, Reuters and Wired magazine published articles last year about a new British trend in which mobile phone users enabled their Bluetooth devices on the London Underground to find strangers for casual sex encounters in station lavatories. A web forum was set up where ‘Toothers’ shared their experiences. This turned out to be a hoax. Sophia asks why the media always fantasize about new technologies in terms of sex and violence.

This is interesting in the context of current events in the Dutch media regarding photo/video sharing in schools, which led MP Arda Gerkens of the SP (a Dutch party currently in opposition) to request that cell phones be forbidden in schools. The material in question is pornographic in nature: pictures and videos taken by students of (naked) fellow students, or pornographic material downloaded from the internet onto the cell phone. The SP argues that pictures and videos of students spread via cell phones constitute a new form of pedophile network. Although Sophia did not discuss this case, it relates to the context she describes: as with the happy slapping videos, teenagers think about narrative, they frame and direct a video, they create their own media. In bluetoothing, nicknames (with a sexual connotation) express a desire to break from social conventions. These two uses of the space of data sharing and exchange manifest an ataxia, a break from the social order, via a projected ‘tele-deviance’.

It is interesting to see how new media technologies such as the cell phone facilitate new practices or remediate old ones. The cell phone distinguishes itself from other media in the sense that it converges many media in one device, one that is portable and familiar. The cell phone is, like the Sony Walkman as discussed by du Gay et al., a cultural artifact shaped by large, commercial, transnational enterprises, as well as by the articulation of a number of distinct processes whose interaction has led to a variable and contingent outcome: interlinked processes like representation, identity, production, consumption, and regulation.

Which brings me to actor-network theory, which tries to explain how material-semiotic networks come together to act as a whole.

On the last day the closing session concerned actor-network theory. Noortje Marres discussed the network as addressed from the home – more precisely, the eco home as a site of network entanglements. Noortje argues that wind energy and other alternative energy sources, plus the decentralization of the energy economy, will assist a grassroots form of democracy, a ‘do it yourself citizenship’. Following Saskia Sassen, Noortje states that the decentralization of energy infrastructure leads to democratization. Using the eco home as the center of a network critique, her talk touches upon John Dewey, machines of (dis)affectedness, and how the home shifts from a defense/shelter to a public place with collective practices (such as collective laundry).

At the end of the day there was time for feedback and criticism. Alan Liu asked why the organizers had chosen the rather formal academic format and not a more open, ad-hoc arrangement. Geert replied that in order to get sufficient funding the conference had to have an academic set-up; moreover, the organizers were told the conference lacked academic quality and were consequently denied full funding. The initial venue was much nicer and offered more possibilities for tentative conference formats. Richard added that the suggested open arrangement, where unplanned speakers spontaneously write their names on a board the moment they like the topic being discussed, usually takes place in the woods or at a camping site. Richard has been to a few of these camp-out events (What The Hack), where “you get to know one another in ways you normally would not imagine”. Nonetheless, some set-up is necessary. Richard mentioned a suggestion made by Michael Stevenson to have speakers SMS their abstracts, within 160 characters, prior to their presentation.

They say a blockbuster can be pitched in seven words: Titanic (1997) translates into “Romeo and Juliet on a boat”. In an SMS that would look something like “Ro&Ju ona boat” – 14 characters, leaving 146 to provide scholarly insight.

Multimedia Learning: a video by Eva Kol & Roman Tol

Part 1

Part 2

In this presentation Eva Kol and Roman Tol argue that multimedia presentations – that is, presentations using external visual tools – are an excellent instrument for stimulating educational progress.
