STATEMENT OF THE PROJECT

Introduction:

I work from the premise that:

"1, Messages are invoked, transmitted, sent back, expelled, drawn in, given this or that scenario depending on different tastes and situations, and whatever form they take, the messages revolve around receptors, which are now located at the center (in contrast to the image presented by the mass media).
2, The established differences between author and reader, performer and spectator, creator and interpreter become blurred and give way to a reading-writing continuum that extends from the designers of technology and networks to the final recipient, each one contributing to the activity of the other (disappearance of the signature). 3, The divisions that separate the messages or 'works', which appear as microterritories attributed to "authors" tend to become obliterated. Each and every representation may be subject to sampling, mixing, re-utilization, and so forth. According to the emergent pragmatism of creation and communication, nomadic distribution of information fluctuate in an immense deterritorialized semiotic backdrop. It is therefore natural that the creative effort is shifting away from the messages towards the devices, the processes and languages, the dynamic "architectures", and environments."1

The artist's role within this context is infinitely rich, multifaceted and challenging.

The creative project presented here - The Stillman Projects - is an ongoing project that started as a response to the merging of the concepts of reading and writing in hypertextual spaces like the World Wide Web (WWW) and has become a multifaceted enterprise feeding off, and commenting on, the blurring of the distinction between product development, research and art in contemporary culture. The Stillman Projects provide systems for automated, individual and/or collaborative information filtering that aim to aid in the increasingly difficult task of navigating the WWW, as well as to generate alternate contexts for the navigated information. The core of the project is a piece of software that resides on a WWW server. The software allows the people browsing a site that uses it to leave paths in the text and to "read" the paths left by people who have navigated the site before them.



Contextualization:

The discursive nature of the development of information technologies allows serendipitous characteristics to emerge - characteristics that were not intended or wished for, but that have an interesting potential. While the ideas and investigations that laid the foundations for the development of computers were focused on creating a device that could, by means of universal communication, resolve political conflicts by rinsing them of the a posteriori noise of natural language2, the potential of these technologies is today of a quite different nature. The Internet promises not only "solutions for a small planet"3 in the form of global communication but also opens up a sea of potentially problematic issues concerning data theft, copyright, ownership and privacy. If we look at these implications as being just unwanted side effects instead of seeing them as "healthy" anomalies urging change, we will have to create ad hoc hypotheses to make sense out of the situation we find ourselves in.
Since Descartes, being human has, in the Western world, been described as being in possession of a conscious, homogeneous self. This self is manifested and strengthened by creativity and threatened when its integrity is violated. Information technology poses an obvious threat to this monolithic self. The lack of one source of meaning in hypertextual systems, and the difficulty of claiming authorship of ideas in those systems, robs this self of its means of manifesting itself. "Smart technologies" record the user's actions, interactive information systems track the user's choices and a virtual environment knows everything about the user's navigation. The collection, and the possible use, of this data endangers the integrity of this self.
In an attempt to avoid the complications of this situation there is a tendency to deny it. Nicholas Negroponte argues in his popular book "Being Digital"4 that because of software agents, e-mail addressed to a person rather than to a specific location, and the possibility of very direct advertising on the Internet, people are no longer a demographic unit but "themselves". The specific needs of the individual are met and the self is strengthened by being fed what it desires. It is also claimed, even though that claim rests on an obvious categorical confusion between persona and self, that the possibility of creating multiple personas in MUDs, MUSHes and MOOs strengthens the self by giving it the power to create and be in charge of them. The multiple personas are then an extension of the self, and through this the self is not only saved from fragmentation but augmented by multiplication. The media theorist Margaret Morse comes to a similar conclusion when looking at user control in virtual environments: a situation that offers multiple narrative paths and "personhoods"5 extends the realm over which a subject has control.
It is not only technology utopians like Negroponte who rally to the defense of the Cartesian self. Following Hal Foster's analysis of the return to the body and personal narratives in art in recent years, it is apparent that a retreat to a "fascistoid subjectivity" and a resurrection of the author is also tempting in the art world.6 Foster grounds his discussion of the modern notion of an autonomous self and the fear of fragmentation in Lacan's theory of "the mirror stage". Lacan argues that the ego is first created when the child perceives his body as a whole in a reflection. The idea of this unity also establishes a fantasy of a chaotic state of the body still in pieces - a state which the ego is set up to combat. Otherness, as evidence of chaos - inside (sexuality, the unconscious) and outside (gays, Jews, communists, women and so on) - has to be repressed. The strength and wholeness of the ego is ensured in modernist cultural traditions by their emphasis on the subject, expression and authority. Foster asks if this "fascistic reaction" has returned in contemporary cultural practice. What we see happening in art now, with a return to subjectivity, expression and personal narratives, is a similar response, except that the nature of the threat has changed. The threat was previously grounded in industrialization - "a history of world war and military mutilation, of industrial discipline and mechanistic fragmentation, of mercenary murder and political terror"7. In a society whose power structures and economy are increasingly based in the handling of information, the threat stems from the networking of databases and the non-organization of information in hypertextual systems. It is the threat of losing control over personal information, the loss of one source of meaning and the loss of someone to blame that makes artists come to the conclusion that they need to "bring humanity into technology".8 (The increasing interest in conspiracy theory in media and art - a search for one source and one meaning behind the scattered appearances - could be interpreted as a response to this situation.9)
The threat has changed, giving new fuel to the defense of the humanist subject, as well as to the questioning of such a defense. The post-structuralist critique of the modernist concept of the autonomous self - a process that has given us an understanding of the self as fragmented and decentralized - could now, together with its art, provide a basis for the new critique and practice that needs to be formulated: a discourse that questions what information technology does to the concept of "humanity" instead of merely imposing a model of an armored self sprung from a fear of that which we don't understand.
But even if (and maybe because) post-structuralist theory (with its questioning of authorship, emphasis on the reader, readings, and non-linearity) is very useful in the critical discourse surrounding hypertextual/interactive systems, it doesn't give us any easy answers. Hypertext, and interactive art (if it utilizes its potentiality), is in some way a realization of post-structuralist theory. According to the German media theorist Friedrich Kittler, Jacques Derrida once uttered "if there had been no computer, deconstruction could never have happened"10, and the reverse would probably be true too - if there had been no deconstruction, hypertext could never have happened. In a hypertextual situation the author is disappearing, there is no one narrative and the reader/wreader11/user constantly oscillates between looking at and looking through the text. As Jay David Bolter argues in "Writing Space"12, electronic text incorporates criticism into itself, since the distinction between writing and interpreting a text disappears just as the sharp distinction between reading and writing does. If the World Wide Web could be considered one text that is rewritten over and over again, then intertextual relationships become so explicit that it is questionable whether a contextual interpretation is possible at all - whether there is such a thing as a context at all.13 The texts that post-structuralist theories set out to critique were stable and authoritative, with master narratives to deconstruct - texts attempting to be transparent, with no explicit awareness of the sign level. Hypertexts and interactive systems are always (though this is not always utilized by their creators) aware of their structure, since the computer/the system demands to be addressed - it forces the user to move between reading the verbal text/narration/"content" and interpreting the structural elements in order to navigate through the information. There is no solid wall to bounce a critique against. The critique has collapsed into the very text it is trying to critique.
What tools does an artist working in these media have at his/her disposal in a situation like this? Knowing that "content" is only a small player in the game, only the very brave, the very cowardly or the very stupid artist can see providing it as their main responsibility. Following the thoughts of Pierre Levy (above), the most obvious choice of method is to focus on the only "text" that can have any solidity in the end - the system of communication - and to position the place where art occurs in the relation between work, artist and audience. This of course builds on the premise that the artist wants control over the work, that the artist needs control. You cannot control the content, but you can control the system where content can be created, where meaning can occur. Maybe the most promising aspect of the hypertextual space is that we can get lost in it, and maybe the most interesting tool for the artist now is loss of control. Even if the creation of comprehensive maps for the navigation of hypertextual spaces is an important task, maybe the way to really utilize the possibilities of these new media is to set up contextual relations using randomness and deliberate meaninglessness.14

Virtual Void:

I wanted to investigate how the "problems" stemming from the development of networked information technologies (loss of privacy, the breakdown of the concepts of authorship and ownership, etc.) could be seen as anomalies forcing us to reevaluate concepts like "humanity" and "reality". I made a Virtual Reality installation - Virtual Void - where the interaction between the artist, the work and the audience served as a model of the interaction in large computer networks. By making it non-networked I avoided the complexity of a networked environment and could focus the audience's attention on a few specific issues. In the installation, one person from the audience at a time was immersed in a virtual environment (Plate A). When the system started up on the first day of the exhibit, the environment consisted of one single room. When the immersant opened a door, a new room was created for the immersant to enter. A random process decided the number of doors and their configuration in the new room. The rooms that had been created remained the same. The immersant did not know if he/she had created a new room or not, since the rooms all looked very similar. From the immersant's perspective he/she was trapped in an endless maze of rooms. The information created by the immersive environment and the user's navigation through it was mapped. Each room that had been created by the immersant in the environment was represented on "the map", as well as the immersant's current position. The mapping process also remembered how many times a room had been visited, and a room's representation on "the map" got darker each time someone entered the room. The immersant could not see "the map" but could change it through his/her navigation. The audience, on the other hand, had no ability to affect "the map" but could see what was going on - where the immersant was and whether he/she had created new rooms or not. The system was operational during the whole week of the installation and the mapped environment was constantly evolving as a result of the different users' navigation. The audience members not currently using the head-mounted display viewed the immersant as he/she was "moving" through the virtual environment, and observed the creation of new rooms on "the map", which was projected on the wall next to the immersant.
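The mechanics described above can be condensed into a short sketch. The following Python is not the installation's original code (which is not reproduced in this statement); it is a minimal, hypothetical model of the two behaviors just described: a room is created by a random process the first time a door is opened, and each room's representation on "the map" darkens every time it is visited. All names (VirtualVoidMap, open_door, shade) are invented for the example.

import random

class VirtualVoidMap:
    """Toy model of Virtual Void: rooms created on demand, a map that darkens with visits."""

    def __init__(self):
        self.next_id = 0
        self.rooms = {}
        self.start = self._new_room()  # a single room exists when the system starts up

    def _new_room(self):
        room_id = self.next_id
        self.next_id += 1
        # A random process decides how many doors the new room has.
        self.rooms[room_id] = {"doors": [None] * random.randint(1, 4), "visits": 0}
        return room_id

    def open_door(self, room_id, door_index):
        """Follow a door; a new room is created behind it only the first time."""
        doors = self.rooms[room_id]["doors"]
        if doors[door_index] is None:
            doors[door_index] = self._new_room()
        target = doors[door_index]
        self.rooms[target]["visits"] += 1  # the map remembers every visit, by every immersant
        return target

    def shade(self, room_id, step=0.1):
        """Grey level for the map: 1.0 = never visited, darker toward 0.0 with each visit."""
        return max(0.0, 1.0 - step * self.rooms[room_id]["visits"])

Because the visit counter is shared across all sessions rather than reset per immersant, the darkening of a room reflects collective rather than individual behavior, which is the point developed in the next paragraph.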
By refusing to submit to a traditional communication model, with the artist transmitting meaning by means of the work to the audience, the project reflected on a hypertextual system like the WWW, where meaning is woven rather than given. The audience was forced to become participants - by becoming the immersant and navigating through the space, and/or by interpreting "the map" - i.e. to actively participate in the transformation of information into meaning. The parallel to interaction in a large computer network also worked on another level - the meaning that might emerge in "the map" was not dependent on the information about one single participant's navigation, but on the information collected from all immersants. A room gets darker on "the map" not only if one specific immersant has been there many times, but if that room has been visited many times by all the previous users, like the paths on a lawn emerging over time as people walk over it. Each immersant cannot directly control the appearance of the map - their participation is only a part of the whole.
The audience responses during the installation reflected a range of reactions. Some were angry at not being served "content" - evidence of one mind, one self expressing itself. Some were frustrated over being asked to participate, by navigating through the space, but not receiving a "reward" for that work. They wanted to see a result of their work: to know where they had gone and which rooms they had created. Or they wanted at least to have a pleasurable experience in the environment, not to feel like a rat in a maze. Others enjoyed participating in a "collaboration" that didn't have an obvious outcome or goal. Some liked getting lost in the environment and the speculations about the possible meanings of "the map". Several questions were formulated over the week, from "When does this map start to say something meaningful?" and "If we can't search for the meaning in the intention of one source or in one receiver's interpretation, how are we able to determine when and how meaning occurs?" to more profound questions like "If information technology poses a real threat to the Cartesian self, where does that leave us - are we in a process of redefining what it is to be human, or should we defend our constructs and try to change the direction that this technology has taken?"

The Stillman Projects:

While Virtual Void was an attempt to explore the implications of hypertextuality and networked information technologies in a "petri-dish" environment, "The Stillman Projects" exists within the full complexity of a large network - the World Wide Web. Instead of isolating a few issues, it generates a multitude of tangents. It is a parasitic art-system that, rather than utilizing a self-contained, implosive aesthetic, wants to infiltrate and be infiltrated by the aesthetics and politics of its hosts.
The name "Stillman" is borrowed from Paul Auster's novel, City of Glass, in which the character Daniel Quinn is paid to follow Peter Stillman Senior through the streets of New York City. At first, Quinn is focusing on Stillman's behavior on a "street scale" level; mainly what kind of junk he is picking up from the ground. After a number of days, Quinn becomes increasingly hopeless when Stillman's doings still seem random and meaningless and tries something new; he starts sketching the movements of Stillman from a birds eye perspective. Each day's walk seems to have spelled a letter, mapped out by the grid system of the streets of Manhattan, and the letters seem to spell out words. Only when Quinn maps his information differently does it become meaningful.
"The Stillman Projects" is an on-going project which dates back to the Spring of 1996. It was first implemented on the web in August of 1997. The main idea was to be able to follow another user through the text or "slip" on frequently, or infrequently, used links and create trails through a hypertext. In this the first stage of the Stillman Projects on the web the hypertext links on stillmanized sites contain collective information about its users/usage. The information is collected on a web page by asking the user a question. When the user answers the question he/she is assigned a color - red, green or blue - based on the answer given. A users color is stored as a persistent cookie on his/her computer. (Another way of assigning a color could be to base the color coding on some aspect of a users domain name.)The way the information is collected, for example how and where a question is asked, is dependent on the specific intentions of the host site. As the user navigates the site, a trace of his/her color will be left on all links he/she chooses throughout the site. The information is stored in the link's color specification and is presented to the people navigating the site visually by a change of the color of the link. The color of a link tells how the people following the link responded to the question. The users can thus in addition to the ordinary way of selecting links choose to follow a link dependent on its color i.e. dependent on what "type" of people have followed the link. The information can also be presented functionally. The user can in some of the now stillmanized sites/pages choose to follow one of the three most saturated links (the most red, green, or blue link) - a method referred to as slipping.
The information contained in a link is not intended to communicate any information about a specific person. The information is not meaningful unless it is combined with, and contextualized by, the content of a page and the information about other users. The person navigating a site is leaving information without getting the satisfaction of saying "look what I did", without getting acknowledgment for his/her personal contribution. To get a payoff you have to look at the system/community level, at the "patterns" that emerge from different people's navigation and the "collective intelligence" that could emerge in those patterns. The World Wide Web is not a book with pages you read; it is an environment shared by everyone navigating it. By leaving information behind, by sharing your experience, you make the environment smarter, and an intelligent environment could help you. To benefit from the uncertainty and disorientation of hypertextual space we need to let go of the need for control and orientation that comes with the defense of the individual. We need to look at ourselves as parts of a network rather than as autonomous entities interacting with a network.

1 Pierre Levy, "The Art of Cyberspace", in Electronic Culture: Technology and Visual Representation, ed. Timothy Druckrey, Aperture, New York, 1996. Translation by Karin Lundell. Originally published in L'Intelligence collective: Pour une anthropologie du cyberspace, La Decouverte, Paris, 1994.

2 A line of thought that originated in mystical attempts at accessing and gaining knowledge from a pure Platonic a priori matrix, brought into modernity by Gottfried Wilhelm Leibniz (1646-1716). His vision of a universal language that would break down communication obstacles between people and save them from eternal conflicts stemming from misunderstanding drove him to develop a groundwork for modern symbolic logic, as well as thoughts about machines capable of reasoning. All problems can be solved if they are broken down to their essence and tried in a system of logical propositions. His characteristica universalis - a universal set of characters with a one-to-one correspondence to what they represented - would erase all the ambiguity of natural language. This set of characters formed an artificial language - a lingua philosophica - that would erase the gap of interpretation between the signifier and the signified, between symbol and meaning, sender and receiver. Leibniz saw that the logical operations of his language were performed in a mathematical way, found that this process could be carried out automatically without human reasoning, and started to conceptualize a machine that could do so mechanically - the calculus ratiocinator. Leibniz believed that the human mind ultimately should imitate God, who knows things unmediated and directly, without the folds and obstacles of experience.

3 IBM television commercial, 1996 - a spin on McLuhan's idea of the global village?

4 "In being digital I am me, not a statistical subset...True personalisation is now upon us" Nicholas Negroponte Being Digital

5 Margaret Morse, "Landscape and Narrative in Virtual Environments".

6 Hal Foster, The Return of the Real, MIT Press, Cambridge, Mass., 1996, p. 210.

7 Ibid.

8 For example, Joe Squire describes the work in the 1995 exhibition Signal as Art: Inside the Loop, which he curated, as "..work that prioritizes empathetic, human content and contact... " and the artists as follows: "[they] have set out to do what artists through the ages have always done, namely to give expression and coherence to the human issues of the day." But what is "human content", what does "human issues" mean and what is "humanity"?

9 Chris McAuliffe, "The Illuminarti", in World Art 2/1996.

10 Friedrich Kittler, in an interview with Laurence Rickels, Artforum, December 1992.

11 The term wreader is used in hypertext theory to describe the person who emerges from the merging of reading and writing in hypertextual situations. It might not be completely accurate to use one single term for the reading-writing continuum, but it is useful as an indicator of a general tendency.

12 Jay David Bolter, Writing Space, Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1990.

13 Mireille Rosello, "The Screener's Maps", in Hyper/Text/Theory, ed. George P. Landow, The Johns Hopkins University Press, Baltimore and London, 1994, p. 132.

14 Ibid., p. 134.



