Infome Imager Description

Infome - noun, from: information + -ome (suffix: all, the totality of, as in genome)

The Infome Imager allows the user to create "crawlers" (software robots, which could be thought of as automated Web browsers) that gather data from the Web, and it provides methods for visualizing the collected data. Some of the functionality of the Infome Imager software is similar to that of a search engine such as Google, but with some significant differences. The search engine crawler collects data about the intended content of a page, the actual words written by one person, the Web author, in a (questionable) effort to index the Web according to the "meaning", the semantics, of Web pages. The Infome Imager crawler collects "behind the scenes" data such as the length of a page, when a page was created, what network the page resides on, the colors used in a page, and other design elements of a page. It glances down into the subconscious of the Web in hopes of revealing its inherent structure, in order to create new understandings of its technical and social functionalities. Another difference lies in the way the data is presented to the user. The search engine uses algorithms to sort the data according to one theory or another, in order to present the user with pages containing a few selected links each. The user is not allowed to see the actual data, only a subset of it, selected and sorted by a computer. The result of an Infome Imager "search" is an image containing all the collected data, potentially a vast amount of information, presented in a way in which the human brain, not the computer, is put to work on what it does so well - creating intuitive understandings of large quantities of information.
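To make the contrast with a search engine crawler concrete, the sketch below (a hypothetical illustration in Python, not the project's actual code) gathers the kind of "behind the scenes" data mentioned above from a single page: its length, its modification date, the server and network it resides on, the colors used in its markup, and the links it points to. The URL, the function name, and the exact set of fields are assumptions chosen for the example.

# A minimal sketch (not the project's actual code) of collecting the kind of
# "behind the scenes" data described above, rather than the page's textual content.
import re
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

def collect_page_data(url):
    """Fetch one page and record metadata about it."""
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
        headers = response.headers

    host = urlparse(url).hostname
    return {
        "url": url,
        "length": len(html),                               # length of the page
        "last_modified": headers.get("Last-Modified"),     # when the page was changed
        "server": headers.get("Server"),                   # software serving the page
        "ip_address": socket.gethostbyname(host),          # network the page resides on
        "colors": re.findall(r"#[0-9a-fA-F]{6}", html),    # hex colors used in the markup
        "links": re.findall(r'href="(http[^"]+)"', html),  # outgoing links to crawl next
    }

if __name__ == "__main__":
    print(collect_page_data("http://example.com/"))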

The Infome Imager interface allows the user to manipulate the crawler's behavior in several ways. The user decides where it should begin crawling; it could, for example, start on a Web page specified by the user, on a page resulting from a search on a search engine, or on a random Web page. The crawler can be set to visit a page either once or every time it encounters a link to it. Data resulting from many revisits will create repetitive patterns in the visualization, revealing the linkage structure of the Web sites, while single visits will generate distinct data for each page. The crawl can take many hours depending on the number of pages the crawler is set to visit. The activity and the results of the crawler can be accessed from the "manifestations" page. The visualizations created by the crawling process function as an interface linking to all the sites the crawler visited.
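The crawler behavior described here can be pictured with a short sketch (again hypothetical, not the original implementation, reusing the collect_page_data function from the sketch above): the user supplies a starting URL, and a revisit flag decides whether a page is fetched only once or every time a link to it is encountered.

# A minimal sketch of the configurable crawl loop described above.
from collections import deque

def crawl(start_url, max_pages=100, revisit=False):
    """Breadth-first crawl beginning at start_url.

    revisit=False: each page is visited only once, producing distinct data.
    revisit=True:  pages are visited every time a link to them is found, so
                   repeated visits expose the linkage structure of the sites.
    """
    queue = deque([start_url])
    seen = set()
    results = []

    while queue and len(results) < max_pages:
        url = queue.popleft()
        if not revisit and url in seen:
            continue
        seen.add(url)

        data = collect_page_data(url)   # metadata gathering from the sketch above
        results.append(data)
        queue.extend(data["links"])     # follow the links found on the page

    return results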

The crawler and data mapping software that together form the foundation of the Infome Imager software were originally developed for the "Mapping the Web Infome" show, exhibited in conjunction with "Lifelike" at New Langton Arts in San Francisco in July 2001.

Many thanks to Michael Proctor and Brett Stalbaum for help with programming and ideas.
 