@nramirezuy
Last active August 29, 2015 14:21
Boro, where is my traceback?
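What follows are two runs of the same minimal Scrapy 0.25.1 spider (its source is at the bottom of this gist). The spider raises a bare Exception from its parse callback; in both runs Scrapy reports "Spider error processing <GET http://example.com>" but never prints the traceback of that exception. The first run also shows an unrelated boto error: with the boto feature enabled, Scrapy tries to fetch credentials from the EC2 metadata server and times out, and that traceback is printed in full.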
2015-05-13 14:42:49+0000 [scrapy] INFO: Scrapy 0.25.1 started (bot: )
2015-05-13 14:42:49+0000 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-05-13 14:42:49+0000 [scrapy] INFO: Overridden settings: {}
2015-05-13 14:42:49+0000 [scrapy] INFO: Enabled extensions: CloseSpider, AnnouncerExtension, TelnetConsole, CoreStats, LogStats, SpiderState
2015-05-13 14:42:49+0000 [boto] DEBUG: Retrieving credentials from metadata server.
2015-05-13 14:42:50+0000 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
File "/home/scrapinghub/Devel/boto/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/usr/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
URLError: <urlopen error timed out>
2015-05-13 14:42:50+0000 [boto] ERROR: Unable to read instance data, giving up
2015-05-13 14:42:50+0000 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-13 14:42:50+0000 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-13 14:42:50+0000 [scrapy] INFO: Enabled item pipelines:
2015-05-13 14:42:50+0000 [scrapy] INFO: Spider opened
2015-05-13 14:42:50+0000 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-13 14:42:50+0000 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-05-13 14:42:51+0000 [scrapy] DEBUG: Crawled (200) <GET http://example.com> (referer: None)
2015-05-13 14:42:51+0000 [scrapy] ERROR: Spider error processing <GET http://example.com> (referer: None)
2015-05-13 14:42:51+0000 [scrapy] INFO: Closing spider (finished)
2015-05-13 14:42:51+0000 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 252,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 1569,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 5, 13, 17, 42, 51, 159783),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 3,
 'log_count/INFO': 9,
 'log_count/WARNING': 4,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/Exception': 1,
 'start_time': datetime.datetime(2015, 5, 13, 17, 42, 50, 699092)}
2015-05-13 14:42:51+0000 [scrapy] INFO: Spider closed (finished)
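Second run, same spider, this time without the boto feature (note "Optional features available: ssl, http11" below): the metadata-server errors are gone, but the spider error is again logged without its traceback.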
2015-05-14 11:30:13+0000 [scrapy] INFO: Scrapy 0.25.1 started (bot: )
2015-05-14 11:30:13+0000 [scrapy] INFO: Optional features available: ssl, http11
2015-05-14 11:30:13+0000 [scrapy] INFO: Overridden settings: {}
2015-05-14 11:30:13+0000 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, CoreStats, LogStats, SpiderState
2015-05-14 11:30:13+0000 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-14 11:30:13+0000 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-14 11:30:13+0000 [scrapy] INFO: Enabled item pipelines:
2015-05-14 11:30:13+0000 [scrapy] INFO: Spider opened
2015-05-14 11:30:13+0000 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-14 11:30:13+0000 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-05-14 11:30:13+0000 [scrapy] DEBUG: Crawled (200) <GET http://example.com> (referer: None)
2015-05-14 11:30:13+0000 [scrapy] ERROR: Spider error processing <GET http://example.com> (referer: None)
2015-05-14 11:30:13+0000 [scrapy] INFO: Closing spider (finished)
2015-05-14 11:30:13+0000 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 210,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 1569,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 5, 14, 14, 30, 13, 596960),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'log_count/WARNING': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/Exception': 1,
 'start_time': datetime.datetime(2015, 5, 14, 14, 30, 13, 245717)}
2015-05-14 11:30:13+0000 [scrapy] INFO: Spider closed (finished)
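The spider used for both runs: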
from scrapy.spider import Spider


class TestSpider(Spider):
    name = 'test'
    start_urls = ['http://example.com']

    def parse(self, response):
        # Every response reaching this callback raises, triggering the
        # "Spider error processing ..." ERROR entry seen in the logs above.
        raise Exception
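If the traceback is needed, one possible workaround (a sketch, not part of the original gist) is to catch the exception inside the callback, log the traceback explicitly while sys.exc_info() is still set, and re-raise so Scrapy's error accounting is unchanged:

import traceback

from scrapy import log
from scrapy.spider import Spider


class TestSpider(Spider):
    name = 'test'
    start_urls = ['http://example.com']

    def parse(self, response):
        try:
            raise Exception
        except Exception:
            # Inside the except block sys.exc_info() is set, so
            # traceback.format_exc() returns the full traceback.
            log.msg(traceback.format_exc(), level=log.ERROR, spider=self)
            # Re-raise so the spider_exceptions/* stat is still counted.
            raise

This only wraps a single callback. Hooking process_spider_exception in a custom spider middleware would cover every callback, but by the time that hook runs the exception arrives as a plain object rather than an active exception, so producing a readable traceback from there takes more care.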