crawl is a web crawler. In its current form it is mostly suitable for generating load on a web server.
- adjustable crawl rate / load
- pattern-based selection of which discovered urls to crawl
- broken link report, listing each broken link and the pages on which it appears
- tries to honor robots.txt files
- does not honor nofollow meta tags or link attributes
- doesn't do anything with crawled pages
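The first two filters above (url patterns and robots.txt) can be sketched roughly as follows. This is an illustrative Python sketch of the concepts, not crawl's actual implementation; the function name, cache layout, and user agent string are assumptions.

```python
import re
import urllib.robotparser
from urllib.parse import urlparse

def allowed(url, patterns, robots_cache, user_agent="crawl"):
    """Return True if url passes both filters: it matches at least one
    crawl pattern and robots.txt for its host permits fetching it."""
    # Pattern filter: only crawl urls matching one of the given regexes.
    if not any(re.search(p, url) for p in patterns):
        return False
    # robots.txt filter: keep one parsed robots.txt per host, cached.
    parts = urlparse(url)
    host = "{}://{}".format(parts.scheme, parts.netloc)
    rp = robots_cache.get(host)
    if rp is None:
        rp = urllib.robotparser.RobotFileParser(host + "/robots.txt")
        try:
            rp.read()              # fetch and parse the host's robots.txt
        except OSError:
            rp.allow_all = True    # robots.txt unreachable: allow crawling
        robots_cache[host] = rp
    return rp.can_fetch(user_agent, url)
```

A real crawler would also need timeouts on the robots.txt fetch and cache expiry, which are omitted here for brevity.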