scrapy crawl alexa
/home/pkaramol/Workspace/scrapytests/scrapytests/spiders/example.py:3: ScrapyDeprecationWarning: Module `scrapy.contrib.spiders` is deprecated, use `scrapy.spiders` instead
from scrapy.contrib.spiders import CrawlSpider, Rule
/home/pkaramol/Workspace/scrapytests/scrapytests/spiders/example.py:4: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
from scrapy.contrib.linkextractors import LinkExtractor
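
Both warnings are only about the pre-1.0 import paths; the crawl itself is unaffected. A minimal sketch of example.py using the current paths follows. The spider body is an assumption reconstructed from the log below, not the gist's actual code; the class name, allowed_domains, and the callback name are all hypothetical.

    # scrapytests/spiders/example.py -- updated imports for Scrapy 1.0+
    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor


    class AlexaSpider(CrawlSpider):
        name = 'alexa'                       # matches `scrapy crawl alexa` above
        allowed_domains = ['alexa.com']      # assumption
        start_urls = ['http://www.alexa.com']

        # Follow the /topsites link seen in the log; callback name is hypothetical.
        rules = [Rule(LinkExtractor(allow=r'/topsites'), callback='parse_topsites')]

        def parse_topsites(self, response):
            pass  # the extraction/retry logic is not shown in the gist
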
2015-10-06 15:09:02 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapytests)
2015-10-06 15:09:02 [scrapy] INFO: Optional features available: ssl, http11
2015-10-06 15:09:02 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapytests.spiders', 'SPIDER_MODULES': ['scrapytests.spiders'], 'BOT_NAME': 'scrapytests'}
2015-10-06 15:09:02 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-10-06 15:09:02 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-10-06 15:09:02 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-10-06 15:09:02 [scrapy] INFO: Enabled item pipelines:
2015-10-06 15:09:02 [scrapy] INFO: Spider opened
2015-10-06 15:09:02 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-10-06 15:09:02 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-10-06 15:09:02 [scrapy] DEBUG: Crawled (200) <GET http://www.alexa.com> (referer: None)
2015-10-06 15:09:03 [scrapy] DEBUG: Crawled (200) <GET http://www.alexa.com/topsites> (referer: http://www.alexa.com)
Retries in 1: 1
2015-10-06 15:09:03 [scrapy] DEBUG: Crawled (404) <GET http://www.alexa.com/siteieekeknfo/google.com> (referer: http://www.alexa.com/topsites)
Retries in 2: 2
2015-10-06 15:09:03 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.alexa.com/siteieekeknfo/google.com> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2015-10-06 15:09:03 [scrapy] INFO: Closing spider (finished)
2015-10-06 15:09:03 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 768,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 18821,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/404': 1,
 'dupefilter/filtered': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 10, 6, 12, 9, 3, 632759),
 'log_count/DEBUG': 5,
 'log_count/INFO': 7,
 'request_depth_max': 3,
 'response_received_count': 3,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2015, 10, 6, 12, 9, 2, 248532)}
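
In the run above, the request to /siteieekeknfo/google.com comes back 404 and the second attempt at the same URL is dropped by the scheduler's duplicate filter ("Filtered duplicate request"), which is why only three requests are ever dequeued. A re-issued request has to opt out of that filter explicitly with dont_filter=True. Below is a minimal sketch of that pattern, not the gist's actual spider: the callback name, the retries counter kept in response.meta, and the retry cap are all assumptions, and handle_httpstatus_list is needed because HttpErrorMiddleware would otherwise keep 404 responses out of the callback.

    import scrapy


    class SiteSpider(scrapy.Spider):
        name = 'alexa'
        handle_httpstatus_list = [404]  # let 404 responses reach the callback
        start_urls = ['http://www.alexa.com/siteieekeknfo/google.com']

        MAX_RETRIES = 2  # assumption: cap on manual retries

        def parse(self, response):
            if response.status == 404:
                retries = response.meta.get('retries', 0)
                if retries < self.MAX_RETRIES:
                    # dont_filter=True bypasses the duplicate filter,
                    # which otherwise silently drops the repeated URL.
                    yield scrapy.Request(
                        response.url,
                        callback=self.parse,
                        meta={'retries': retries + 1},
                        dont_filter=True,
                    )

To log every filtered duplicate instead of only the first, set DUPEFILTER_DEBUG = True in settings.py, as the log message itself suggests.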