Monday, August 22, 2016

Scrapy Tutorial



Prerequisites


# Install pip, the Python package installer
sudo apt-get install python-pip

pip search scrapy      # look the package up on PyPI
pip install scrapy     # install Scrapy and its dependencies
pip show scrapy        # confirm the installed version
#pip uninstall scrapy  # (to remove it again later)
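
To confirm the install worked, the scrapy command-line tool should now be on the PATH (a quick sanity check, assuming a standard pip install):

scrapy version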


Following the instructions from scrapy.org


nutt@nutt-pc:~/OneDrive/Scrapy$ scrapy startproject tutorial
New Scrapy project 'tutorial', using template directory '/usr/local/lib/python2.7/dist-packages/scrapy/templates/project', created in:
    /home/nutt/OneDrive/Scrapy/tutorial

You can start your first spider with:
    cd tutorial
    scrapy genspider example example.com


nutt@nutt-pc:~/OneDrive/Scrapy$ tree tutorial/
tutorial/
├── scrapy.cfg
└── tutorial
    ├── __init__.py
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        └── __init__.py

2 directories, 6 files

Create a new dmoz_spider.py in the spiders directory
and modify items.py in the top-level package directory, as sketched below.
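
For reference, here is a minimal version of the two files, following the code from the official Scrapy 1.1 tutorial. The parse() callback simply saves each page body to disk, which is what produces the Books.html and Resources.html files seen later:

# tutorial/spiders/dmoz_spider.py
import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        # Name the file after the last path segment of the URL,
        # e.g. .../Python/Books/ -> Books.html
        filename = response.url.split("/")[-2] + ".html"
        with open(filename, "wb") as f:
            f.write(response.body)

And items.py, which defines the fields the tutorial scrapes later:

# tutorial/items.py
import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()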


nutt@nutt-pc:~/OneDrive/Scrapy/tutorial$ scrapy crawl dmoz
2016-08-22 23:08:01 [scrapy] INFO: Scrapy 1.1.2 started (bot: tutorial)
2016-08-22 23:08:01 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'tutorial'}
2016-08-22 23:08:02 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-08-22 23:08:02 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-08-22 23:08:02 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-08-22 23:08:02 [scrapy] INFO: Enabled item pipelines:
[]
2016-08-22 23:08:02 [scrapy] INFO: Spider opened
2016-08-22 23:08:02 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-08-22 23:08:02 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-08-22 23:08:03 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/robots.txt> (referer: None)
2016-08-22 23:08:04 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2016-08-22 23:08:04 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2016-08-22 23:08:04 [scrapy] INFO: Closing spider (finished)
2016-08-22 23:08:04 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 734,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 16908,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 8, 22, 16, 8, 4, 322225),
 'log_count/DEBUG': 4,
 'log_count/INFO': 7,
 'response_received_count': 3,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2016, 8, 22, 16, 8, 2, 550009)}
2016-08-22 23:08:04 [scrapy] INFO: Spider closed (finished)
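
A detail worth noticing in the log: the downloader made three requests, but the scheduler only enqueued two. The extra GET for robots.txt comes from RobotsTxtMiddleware, which is active because the generated settings.py enables it (visible in the 'Overridden settings' line above):

# tutorial/tutorial/settings.py (generated by startproject)
ROBOTSTXT_OBEY = True  # fetch robots.txt first and respect its rules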


nutt@nutt-pc:~/OneDrive/Scrapy$ tree tutorial/
tutorial/
├── Books.html
├── Resources.html
├── scrapy.cfg
└── tutorial
    ├── __init__.py
    ├── __init__.pyc
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    ├── settings.pyc
    └── spiders
        ├── dmoz_spider.py
        ├── dmoz_spider.pyc
        ├── __init__.py
        └── __init__.pyc

2 directories, 13 files

Two new files were created, Books.html and Resources.html: each contains the body of the scrapy.http.Response that Scrapy received for the corresponding scrapy.Request.
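
In other words, Scrapy turned each entry in start_urls into a scrapy.Request and handed the resulting Response to the spider's parse() callback, which wrote the body to disk. The default behaviour is roughly equivalent to this simplified sketch (the real implementation also handles duplicate filtering and errors):

def start_requests(self):
    # Approximation of Scrapy's default: one Request per start URL,
    # each dispatched to self.parse when its Response arrives.
    for url in self.start_urls:
        yield scrapy.Request(url, callback=self.parse)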

