
@dangra
Created May 24, 2012 14:04
Scrapy - delay requests in spider callbacks
from scrapy.spider import BaseSpider
from twisted.internet import reactor, defer
from scrapy.http import Request

DELAY = 5  # seconds


class MySpider(BaseSpider):
    name = 'wikipedia'
    max_concurrent_requests = 1
    start_urls = ['http://www.wikipedia.org']

    def parse(self, response):
        nextreq = Request('http://en.wikipedia.org')
        # Fire the Deferred with the next request after DELAY seconds;
        # returning it lets the engine wait without blocking the reactor.
        dfd = defer.Deferred()
        reactor.callLater(DELAY, dfd.callback, nextreq)
        return dfd
$ scrapy runspider delayspider.py
2012-05-24 11:01:54-0300 [scrapy] INFO: Scrapy 0.15.1 started (bot: scrapybot)
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Enabled item pipelines:
2012-05-24 11:01:54-0300 [wikipedia] INFO: Spider opened
2012-05-24 11:01:54-0300 [wikipedia] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-05-24 11:01:54-0300 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-05-24 11:01:56-0300 [wikipedia] DEBUG: Crawled (200) <GET http://www.wikipedia.org> (referer: None)
2012-05-24 11:02:04-0300 [wikipedia] DEBUG: Redirecting (301) to <GET http://en.wikipedia.org/wiki/Main_Page> from <GET http://en.wikipedia.org>
2012-05-24 11:02:06-0300 [wikipedia] DEBUG: Crawled (200) <GET http://en.wikipedia.org/wiki/Main_Page> (referer: http://www.wikipedia.org)
2012-05-24 11:02:11-0300 [wikipedia] INFO: Closing spider (finished)
2012-05-24 11:02:11-0300 [wikipedia] INFO: Dumping spider stats:
{'downloader/request_bytes': 745,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 29304,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/301': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2012, 5, 24, 14, 2, 11, 447498),
 'request_depth_max': 2,
 'scheduler/memory_enqueued': 3,
 'start_time': datetime.datetime(2012, 5, 24, 14, 1, 54, 408882)}
2012-05-24 11:02:11-0300 [wikipedia] INFO: Spider closed (finished)
2012-05-24 11:02:11-0300 [scrapy] INFO: Dumping global stats:
{}
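The trick relies on Scrapy running on top of Twisted: a spider callback may return a Deferred, and the engine waits for it to fire, treating the value it fires with (here, the next Request) as the callback's result. reactor.callLater schedules dfd.callback(nextreq) for DELAY seconds later without blocking the reactor, which is why the log shows a pause between crawling www.wikipedia.org and requesting en.wikipedia.org. The same idea can be factored into a small helper; this is a minimal sketch, and sleep is a hypothetical name, not part of the Scrapy API:

    from twisted.internet import reactor, defer

    def sleep(delay, result=None):
        # Return a Deferred that fires with `result` after `delay` seconds.
        dfd = defer.Deferred()
        reactor.callLater(delay, dfd.callback, result)
        return dfd

With it, the callback above reduces to `return sleep(DELAY, Request('http://en.wikipedia.org'))`.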

osya commented Jan 2, 2015

I tried to use this with Scrapy 0.24.4 and the following error occurs: "Spider must return Request, BaseItem or None, got 'instance' in <GET http://...>". Please advise.

@tarunlalwani

@osya, did you yield the deferred? You need to return it for this to work.
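A minimal sketch of the distinction, reusing the spider above: returning hands the Deferred to the engine directly, while yielding turns parse into a generator whose yielded items must each be a Request or an item, so the Deferred is rejected as an unexpected 'instance':

    def parse(self, response):
        dfd = defer.Deferred()
        reactor.callLater(DELAY, dfd.callback, Request('http://en.wikipedia.org'))
        return dfd  # works: the engine waits for the Deferred to fire
        # yield dfd  # fails: a yielded Deferred is not a Request or an item,
        #            # producing the "got 'instance'" error reported above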


shirk3y commented Apr 19, 2017

I know it's outdated, but why not use the download_delay attribute?

class MySpider(BaseSpider):
    name = 'wikipedia'
    download_delay = 5

Or you can just set DOWNLOAD_DELAY = 5 in settings.py
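Worth noting: DOWNLOAD_DELAY throttles every request to the same site, while the Deferred trick above delays only the specific request you choose. A sketch of the settings-based approach (DOWNLOAD_DELAY and RANDOMIZE_DOWNLOAD_DELAY are standard Scrapy settings):

    # settings.py
    DOWNLOAD_DELAY = 5               # seconds between consecutive requests
    RANDOMIZE_DOWNLOAD_DELAY = True  # actual wait is 0.5x to 1.5x DOWNLOAD_DELAY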
