crawling setup

With the 'Infome Imager' you send out crawlers on the web. The crawlers collect data that is visualized and printed, or experienced as interfaces to the webpages the crawler has visited. The collected data is not about the written content of the pages, but about the subconscious information hidden in HTML tags and HTTP headers: the thumbprints of the creators and the artifacts of the systems.
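As an illustration of that hidden layer, the sketch below reads only the HTML tags and their attributes from a page, discarding the written content. The sample markup is invented for the example; a real crawler would feed in pages it has fetched.

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects tag names and attributes, ignoring the visible text."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        # Tags and attributes are artifacts of the tools and habits of
        # the page's creators, not part of its written content.
        self.tags.append((tag, dict(attrs)))

# Invented sample markup, standing in for a fetched page.
page = '<html><body bgcolor="#ffffff"><font face="Arial">Hello</font></body></html>'
collector = TagCollector()
collector.feed(page)
print(collector.tags)
# → [('html', {}), ('body', {'bgcolor': '#ffffff'}), ('font', {'face': 'Arial'})]
```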

On this page you define from where, for how long, and in what way the crawler should move across the Web. On the next page you will set what data to collect and how to visualize it.

1. The crawler should start from:
     the webpage
2. The crawler should visit webpages.
3. The crawler should:
     follow all links, even if it has already visited the page that is linked to. The crawler can end up going in circles, which might create interesting patterns in the visualization, revealing the structure of the sites visited.

not follow links to pages it has already visited. The collected data will contain no repetitions. This option makes the crawler slower, since it has to remember where it has been.
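The difference between the two options can be sketched as a small traversal over a hypothetical link graph (the pages and links below are invented for the example). With `remember_visited=False` the crawler circles back through pages it has seen; with `remember_visited=True` each page appears at most once.

```python
from collections import deque

# A hypothetical link graph standing in for the Web: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["a"],      # links back, so a follow-all crawler can go in circles
    "c": ["b"],
}

def crawl(start, steps, remember_visited):
    """Follow links breadth-first for a fixed number of steps;
    optionally skip pages that have already been visited."""
    path, visited = [], set()
    queue = deque([start])
    while queue and len(path) < steps:
        page = queue.popleft()
        if remember_visited and page in visited:
            continue  # the slower option: remembering where it has been
        visited.add(page)
        path.append(page)
        queue.extend(links.get(page, []))
    return path

print(crawl("a", 6, remember_visited=False))  # → ['a', 'b', 'c', 'a', 'b', 'b']
print(crawl("a", 6, remember_visited=True))   # → ['a', 'b', 'c']
```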