A few weeks ago I started a new project: a crawling/scraping framework aimed at making it easy to extract data from the web and store it in a relational database.
Today I released the early version 0.0.4, and I wrote several examples which show what the framework can do. I promise more real-world examples and more documentation in the coming days. In the meantime you can follow the project's progress in the official repository on GitHub and play with the examples.
You can also install crawley with pip by running:
~$ pip install crawley
and check the documentation.
That’s all for now. Keep watching the repository =).