
@marceloandriolli
Created June 15, 2018 00:05
2018-06-14 21:04:37 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: celesc)
2018-06-14 21:04:37 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 18.4.0, Python 2.7.12 (default, Jun 13 2018, 21:52:00) - [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h 27 Mar 2018), cryptography 2.2.2, Platform Darwin-17.5.0-x86_64-i386-64bit
2018-06-14 21:04:37 [py.warnings] WARNING: /Users/marcelorsa/.pyenv/versions/pague_verde_bots/lib/python2.7/site-packages/scrapy/utils/deprecate.py:156: ScrapyDeprecationWarning: `scrapy.telnet.TelnetConsole` class is deprecated, use `scrapy.extensions.telnet.TelnetConsole` instead
ScrapyDeprecationWarning)
2018-06-14 21:04:37 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2018-06-14 21:04:37 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-06-14 21:04:37 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-06-14 21:04:37 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-06-14 21:04:37 [scrapy.core.engine] INFO: Spider opened
2018-06-14 21:04:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-06-14 21:04:37 [py.warnings] WARNING: /Users/marcelorsa/.pyenv/versions/pague_verde_bots/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py:59: URLWarning: allowed_domains accepts only domains, not URLs. Ignoring URL entry https://agenciaweb.celesc.com.br/AgenciaWeb/autenticar/loginCliente.do in allowed_domains.
warnings.warn("allowed_domains accepts only domains, not URLs. Ignoring URL entry %s in allowed_domains." % domain, URLWarning)
2018-06-14 21:04:37 [py.warnings] WARNING: /Users/marcelorsa/.pyenv/versions/pague_verde_bots/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py:59: URLWarning: allowed_domains accepts only domains, not URLs. Ignoring URL entry https://agenciaweb.celesc.com.br/AgenciaWeb/autenticar/autenticar.do in allowed_domains.
warnings.warn("allowed_domains accepts only domains, not URLs. Ignoring URL entry %s in allowed_domains." % domain, URLWarning)
2018-06-14 21:04:37 [py.warnings] WARNING: /Users/marcelorsa/.pyenv/versions/pague_verde_bots/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py:59: URLWarning: allowed_domains accepts only domains, not URLs. Ignoring URL entry https://agenciaweb.celesc.com.br/AgenciaWeb/consultarHistoricoConsumo/histCons.do in allowed_domains.
warnings.warn("allowed_domains accepts only domains, not URLs. Ignoring URL entry %s in allowed_domains." % domain, URLWarning)
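The three URLWarnings above all point at the same spider mistake: `allowed_domains` was populated with full URLs, but Scrapy expects bare domain names and silently ignores URL entries (so the OffsiteMiddleware ends up with no domains to allow). A minimal sketch of the fix, deriving the correct entry from the offending URLs (the snippet uses Python 3's `urllib.parse`; on the Python 2.7 interpreter shown in this log the same function lives in the `urlparse` module):

```python
from urllib.parse import urlparse  # Python 2.7: `from urlparse import urlparse`

# The URLs that were wrongly listed in allowed_domains, per the warnings above.
urls = [
    "https://agenciaweb.celesc.com.br/AgenciaWeb/autenticar/loginCliente.do",
    "https://agenciaweb.celesc.com.br/AgenciaWeb/autenticar/autenticar.do",
    "https://agenciaweb.celesc.com.br/AgenciaWeb/consultarHistoricoConsumo/histCons.do",
]

# Scrapy wants only the hostname; strip scheme and path.
allowed_domains = sorted({urlparse(u).hostname for u in urls})
print(allowed_domains)  # ['agenciaweb.celesc.com.br']
```

In the spider class this reduces to `allowed_domains = ['agenciaweb.celesc.com.br']`, which covers all three URLs at once.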
2018-06-14 21:04:37 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://agenciaweb.celesc.com.br:8080/AgenciaWeb/autenticar/loginCliente.do> (failed 1 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_get_record', 'wrong version number')]>]
2018-06-14 21:04:37 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://agenciaweb.celesc.com.br:8080/AgenciaWeb/autenticar/loginCliente.do> (failed 2 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_get_record', 'wrong version number')]>]
2018-06-14 21:04:37 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://agenciaweb.celesc.com.br:8080/AgenciaWeb/autenticar/loginCliente.do> (failed 3 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_get_record', 'wrong version number')]>]
2018-06-14 21:04:37 [scrapy.core.scraper] ERROR: Error downloading <GET https://agenciaweb.celesc.com.br:8080/AgenciaWeb/autenticar/loginCliente.do>: [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl3_get_record', 'wrong version number')]>]
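The fatal error above is an OpenSSL `wrong version number` failure during `ssl3_get_record`: the response from `agenciaweb.celesc.com.br:8080` was not a TLS record at all. A common cause is requesting `https://` against a service that speaks plain HTTP on a non-standard port like 8080. Whether that is the case here is an assumption worth verifying (e.g. with `curl -v` against both schemes); if it holds, a sketch of the URL rewrite would be:

```python
from urllib.parse import urlsplit, urlunsplit  # Python 2.7: `from urlparse import ...`

def to_plain_http(url):
    """Rewrite an https:// URL to http://, keeping host, port, and path.

    Hypothetical fix: assumes the server on port 8080 serves plain HTTP,
    which the 'wrong version number' TLS error suggests but does not prove.
    """
    parts = urlsplit(url)
    return urlunsplit(("http",) + tuple(parts)[1:])

url = "https://agenciaweb.celesc.com.br:8080/AgenciaWeb/autenticar/loginCliente.do"
print(to_plain_http(url))
# http://agenciaweb.celesc.com.br:8080/AgenciaWeb/autenticar/loginCliente.do
```

Alternatively, if the site serves HTTPS on the default port, dropping `:8080` from the start URL entirely may be the intended fix.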
2018-06-14 21:04:37 [scrapy.core.engine] INFO: Closing spider (finished)
2018-06-14 21:04:37 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 3,
'downloader/request_bytes': 828,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 6, 15, 0, 4, 37, 803250),
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'log_count/WARNING': 4,
'memusage/max': 44888064,
'memusage/startup': 44888064,
'retry/count': 2,
'retry/max_reached': 1,
'retry/reason_count/twisted.web._newclient.ResponseNeverReceived': 2,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2018, 6, 15, 0, 4, 37, 314362)}
2018-06-14 21:04:37 [scrapy.core.engine] INFO: Spider closed (finished)