@rmax
Last active April 7, 2021 18:37
An example of a Scrapy spider returning a Twisted deferred.
from scrapy import Spider, Item, Field
from twisted.internet import defer, reactor


class MyItem(Item):
    url = Field()


class MySpider(Spider):
    name = 'myspider'
    start_urls = [
        'http://scrapinghub.com',
        'http://scrapy.org',
    ]

    def parse(self, response):
        d = defer.Deferred()
        reactor.callLater(15, d.callback, MyItem(url=response.url))
        self.log("Returning item in a few seconds...")
        # What? Are we returning a deferred? That's crazy!
        return d
$ scrapy runspider myspider.py
2015-01-30 16:23:13-0400 [scrapy] INFO: Scrapy 0.24.4 started (bot: scrapybot)
2015-01-30 16:23:13-0400 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2015-01-30 16:23:13-0400 [scrapy] INFO: Overridden settings: {}
2015-01-30 16:23:14-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-01-30 16:23:15-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-01-30 16:23:15-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-01-30 16:23:15-0400 [scrapy] INFO: Enabled item pipelines:
2015-01-30 16:23:15-0400 [myspider] INFO: Spider opened
2015-01-30 16:23:15-0400 [myspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-01-30 16:23:15-0400 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-01-30 16:23:15-0400 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-01-30 16:23:15-0400 [myspider] DEBUG: Redirecting (302) to <GET http://scrapinghub.com/> from <GET http://scrapinghub.com>
2015-01-30 16:23:15-0400 [myspider] DEBUG: Redirecting (302) to <GET http://scrapy.org/> from <GET http://scrapy.org>
2015-01-30 16:23:15-0400 [myspider] DEBUG: Crawled (200) <GET http://scrapinghub.com/> (referer: None)
2015-01-30 16:23:15-0400 [myspider] DEBUG: Returning item in a few seconds...
2015-01-30 16:23:15-0400 [myspider] DEBUG: Crawled (200) <GET http://scrapy.org/> (referer: None)
2015-01-30 16:23:15-0400 [myspider] DEBUG: Returning item in a few seconds...
2015-01-30 16:23:30-0400 [myspider] DEBUG: Scraped from <200 http://scrapinghub.com/>
{'url': 'http://scrapinghub.com/'}
2015-01-30 16:23:30-0400 [myspider] DEBUG: Scraped from <200 http://scrapy.org/>
{'url': 'http://scrapy.org/'}
2015-01-30 16:23:30-0400 [myspider] INFO: Closing spider (finished)
2015-01-30 16:23:30-0400 [myspider] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 846,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 9733,
'downloader/response_count': 4,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/302': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 1, 30, 20, 23, 30, 916865),
'item_scraped_count': 2,
'log_count/DEBUG': 10,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 4,
'scheduler/dequeued/memory': 4,
'scheduler/enqueued': 4,
'scheduler/enqueued/memory': 4,
'start_time': datetime.datetime(2015, 1, 30, 20, 23, 15, 218412)}
2015-01-30 16:23:30-0400 [myspider] INFO: Spider closed (finished)
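
The same effect can be achieved without building the Deferred by hand. Below is a minimal variant sketch (not part of the original gist) using Twisted's twisted.internet.task.deferLater, which schedules the call and returns the resulting Deferred in one step; the errback attached here is also an illustrative addition so that a failure in the delayed call shows up in the spider log.

from scrapy import Spider, Item, Field
from twisted.internet import reactor, task


class MyItem(Item):
    url = Field()


class MySpider(Spider):
    name = 'myspider'
    start_urls = [
        'http://scrapinghub.com',
        'http://scrapy.org',
    ]

    def parse(self, response):
        # Variant sketch: deferLater creates the Deferred and schedules the
        # callable after 15 seconds, replacing the manual Deferred + callLater pair.
        d = task.deferLater(reactor, 15, MyItem, url=response.url)
        # Log any failure from the delayed call instead of letting it pass silently.
        d.addErrback(lambda failure: self.log("Delayed item failed: %s" % failure))
        self.log("Returning item in a few seconds...")
        return d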