The aim here is to demonstrate a method of distributing work in parallel across multiple nodes using the built-in F# agent (MailboxProcessor). Crawling one page may yield several new pages to fetch, so this is a recursive process that continues until no new URLs are found. The main focus is how to process a potentially indefinite queue of work across a pool of workers, rather than how to parse web pages; a sketch of the pattern follows below.
Posted: 8 years ago by Daniel Bradley
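Here is a minimal sketch of that pattern, assuming a hypothetical fetchLinks function in place of real page downloading and parsing; the supervisor agent and the Enqueue/RequestWork messages are illustrative names, not the original snippet's actual API:

```fsharp
open System.Collections.Generic

// Messages understood by the supervisor agent.
type SupervisorMsg =
    | Enqueue of string list                          // newly discovered URLs
    | RequestWork of AsyncReplyChannel<string option> // a worker asks for a URL

// Supervisor: tracks visited URLs and hands out pending ones on request.
let supervisor = MailboxProcessor.Start(fun inbox ->
    let visited = HashSet<string>()
    let pending = Queue<string>()
    let rec loop () = async {
        let! msg = inbox.Receive()
        match msg with
        | Enqueue urls ->
            // Only queue URLs we have not seen before (Add returns false
            // for duplicates), which is what makes the recursion terminate.
            for url in urls do
                if visited.Add url then pending.Enqueue url
        | RequestWork reply ->
            reply.Reply(if pending.Count > 0 then Some(pending.Dequeue()) else None)
        return! loop () }
    loop ())

// Hypothetical fetcher: a real crawler would download the page and
// extract its links here.
let fetchLinks (url: string) : Async<string list> = async {
    return [] }

// Each worker repeatedly asks for a URL, crawls it, and posts any
// newly found URLs back to the supervisor.
let worker id = async {
    while true do
        let! work = supervisor.PostAndAsyncReply RequestWork
        match work with
        | Some url ->
            printfn "worker %d: crawling %s" id url
            let! links = fetchLinks url
            supervisor.Post(Enqueue links)
        | None ->
            // The queue is empty right now, but other workers may still
            // discover new URLs, so back off and ask again.
            do! Async.Sleep 500 }

// Seed the queue and start a small pool of workers.
supervisor.Post(Enqueue [ "http://example.com" ])
[ for i in 1 .. 4 -> worker i ] |> Async.Parallel |> Async.Ignore |> Async.Start
```

Keeping the visited set and the queue inside the supervisor means no locking is needed: the agent processes one message at a time, so the check-and-enqueue step is effectively atomic.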
This snippet extends the web crawler from http://fssnip.net/3K. It synchronizes all printing through an additional agent (so that printed text does not interleave), and the crawling function returns an asynchronous workflow that completes when the crawl finishes.
Posted: 1 year ago by Tomas Petricek
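As a rough illustration of the printing-agent idea (not the snippet's exact code; the printer name is made up), a single agent owns the console, so lines posted by concurrent workers come out one at a time:

```fsharp
// A single agent owns the console; concurrent workers post strings
// to it instead of calling printfn directly, so output never interleaves.
let printer = MailboxProcessor<string>.Start(fun inbox -> async {
    while true do
        let! line = inbox.Receive()
        printfn "%s" line })

// Usage: post a line instead of printing it yourself.
printer.Post "crawled http://example.com"
```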