Created June 12, 2018 10:23
Gist: lag-linaro/a8b90ae9ad81b42a5ac717a378505395
<Note: putting this part at the top to save it getting lost; in reality I did this after `make`.>
<Seems to launch fine?>
$ docker run -it --rm 4986080400a1 | |
[2018-06-08T09:30:20,897][INFO ][o.e.n.Node ] [] initializing ... | |
[2018-06-08T09:30:21,044][INFO ][o.e.e.NodeEnvironment ] [ao2mHEQ] using [1] data paths, mounts [[/ (overlay)]], net usable_space [13.9gb], net total_space [41.5gb], types [overlay] | |
[2018-06-08T09:30:21,044][INFO ][o.e.e.NodeEnvironment ] [ao2mHEQ] heap size [989.8mb], compressed ordinary object pointers [true] | |
[2018-06-08T09:30:21,046][INFO ][o.e.n.Node ] node name [ao2mHEQ] derived from node ID [ao2mHEQpRX-CC1Veutm4yw]; set [node.name] to override | |
[2018-06-08T09:30:21,046][INFO ][o.e.n.Node ] version[6.2.4], pid[1], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/4.15.0-22-generic/aarch64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_171/25.171-b10] | |
[2018-06-08T09:30:21,046][INFO ][o.e.n.Node ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.w9K4jODT, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config] | |
[2018-06-08T09:30:24,536][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [aggs-matrix-stats] | |
[2018-06-08T09:30:24,536][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [analysis-common] | |
[2018-06-08T09:30:24,536][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [ingest-common] | |
[2018-06-08T09:30:24,536][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [lang-expression] | |
[2018-06-08T09:30:24,536][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [lang-mustache] | |
[2018-06-08T09:30:24,536][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [lang-painless] | |
[2018-06-08T09:30:24,536][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [mapper-extras] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [parent-join] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [percolator] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [rank-eval] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [reindex] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [repository-url] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [transport-netty4] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded module [tribe] | |
[2018-06-08T09:30:24,537][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [ingest-geoip] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [ingest-user-agent] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-core] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-deprecation] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-graph] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-logstash] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-ml] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-monitoring] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-security] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-upgrade] | |
[2018-06-08T09:30:24,538][INFO ][o.e.p.PluginsService ] [ao2mHEQ] loaded plugin [x-pack-watcher] | |
[2018-06-08T09:30:30,747][INFO ][o.e.d.DiscoveryModule ] [ao2mHEQ] using discovery type [zen] | |
[2018-06-08T09:30:31,622][INFO ][o.e.n.Node ] initialized | |
[2018-06-08T09:30:31,622][INFO ][o.e.n.Node ] [ao2mHEQ] starting ... | |
[2018-06-08T09:30:31,894][INFO ][o.e.t.TransportService ] [ao2mHEQ] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300} | |
[2018-06-08T09:30:31,914][INFO ][o.e.b.BootstrapChecks ] [ao2mHEQ] bound or publishing to a non-loopback address, enforcing bootstrap checks | |
[2018-06-08T09:30:34,987][INFO ][o.e.c.s.MasterService ] [ao2mHEQ] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {ao2mHEQ}{ao2mHEQpRX-CC1Veutm4yw}{Pd02UrK3TQaQ8qy8pDfqOw}{172.17.0.2}{172.17.0.2:9300} | |
[2018-06-08T09:30:34,995][INFO ][o.e.c.s.ClusterApplierService] [ao2mHEQ] new_master {ao2mHEQ}{ao2mHEQpRX-CC1Veutm4yw}{Pd02UrK3TQaQ8qy8pDfqOw}{172.17.0.2}{172.17.0.2:9300}, reason: apply cluster state (from master [master {ao2mHEQ}{ao2mHEQpRX-CC1Veutm4yw}{Pd02UrK3TQaQ8qy8pDfqOw}{172.17.0.2}{172.17.0.2:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]]) | |
[2018-06-08T09:30:35,059][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [ao2mHEQ] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200} | |
[2018-06-08T09:30:35,059][INFO ][o.e.n.Node ] [ao2mHEQ] started | |
[2018-06-08T09:30:35,228][INFO ][o.e.g.GatewayService ] [ao2mHEQ] recovered [0] indices into cluster_state | |
[2018-06-08T09:30:36,184][INFO ][o.e.l.LicenseService ] [ao2mHEQ] license [cf3437ad-0bf9-4893-9bae-f0faeed2589c] mode [trial] - valid | |
[2018-06-08T09:30:41,833][INFO ][o.e.c.m.MetaDataCreateIndexService] [ao2mHEQ] [.monitoring-es-6-2018.06.08] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0], mappings [doc] | |
[2018-06-08T09:30:42,191][INFO ][o.e.c.m.MetaDataCreateIndexService] [ao2mHEQ] [.watches] creating index, cause [auto(bulk api)], templates [.watches], shards [1]/[0], mappings [doc] | |
[2018-06-08T09:30:42,648][INFO ][o.e.c.r.a.AllocationService] [ao2mHEQ] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-6-2018.06.08][0], [.watches][0]] ...]). | |
[2018-06-08T09:30:42,762][INFO ][o.e.x.w.WatcherService ] [ao2mHEQ] paused watch execution, reason [new local watcher shard allocation ids], cancelled [0] queued tasks | |
[2018-06-08T09:30:42,829][INFO ][o.e.c.m.MetaDataMappingService] [ao2mHEQ] [.watches/EqOPHIOMTKmh7LdqE5bnfQ] update_mapping [doc] | |
[2018-06-08T09:30:42,865][INFO ][o.e.c.m.MetaDataMappingService] [ao2mHEQ] [.watches/EqOPHIOMTKmh7LdqE5bnfQ] update_mapping [doc] | |
[2018-06-08T09:31:43,335][INFO ][o.e.c.m.MetaDataCreateIndexService] [ao2mHEQ] [.triggered_watches] creating index, cause [auto(bulk api)], templates [.triggered_watches], shards [1]/[1], mappings [doc] | |
[2018-06-08T09:31:43,511][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [ao2mHEQ] updating number_of_replicas to [0] for indices [.triggered_watches] | |
[2018-06-08T09:31:43,522][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [ao2mHEQ] [.triggered_watches/bfbd_d7KRN6cxHO6A5N6GQ] auto expanded replicas to [0] | |
[2018-06-08T09:31:43,700][INFO ][o.e.c.r.a.AllocationService] [ao2mHEQ] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.triggered_watches][0]] ...]). | |
[2018-06-08T09:31:44,161][INFO ][o.e.c.m.MetaDataCreateIndexService] [ao2mHEQ] [.watcher-history-7-2018.06.08] creating index, cause [auto(bulk api)], templates [.watch-history-7], shards [1]/[0], mappings [doc] | |
[2018-06-08T09:31:44,467][INFO ][o.e.c.r.a.AllocationService] [ao2mHEQ] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.watcher-history-7-2018.06.08][0]] ...]). | |
[2018-06-08T09:31:44,575][INFO ][o.e.c.m.MetaDataMappingService] [ao2mHEQ] [.watcher-history-7-2018.06.08/eaV8wjkUSy6wiVBZCtuL9A] update_mapping [doc] | |
[2018-06-08T09:31:44,728][INFO ][o.e.c.m.MetaDataMappingService] [ao2mHEQ] [.watcher-history-7-2018.06.08/eaV8wjkUSy6wiVBZCtuL9A] update_mapping [doc] | |
linaro@ubuntu:~/projects/docker-hub/elasticsearch-docker-new [6.2]$ git checkout -b 6.2 | |
fatal: A branch named '6.2' already exists. | |
linaro@ubuntu:~/projects/docker-hub/elasticsearch-docker-new [6.2]$ sudo sysctl -w vm.max_map_count=262144 | |
[sudo] password for linaro: | |
vm.max_map_count = 262144 | |
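Note that `sysctl -w` only changes the running kernel; the setting is lost on reboot. The usual way to make Elasticsearch's mmap requirement permanent is a sysctl drop-in (the file name below is my choice, not from this session):

```shell
# Persist vm.max_map_count=262144 across reboots (file name is illustrative)
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system   # re-read all sysctl configuration without rebooting
```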
linaro@ubuntu:~/projects/docker-hub/elasticsearch-docker-new [6.2]$ git diff | cat | |
diff --git a/templates/Dockerfile.j2 b/templates/Dockerfile.j2 | |
index 5cf3857..df8b55d 100644 | |
--- a/templates/Dockerfile.j2 | |
+++ b/templates/Dockerfile.j2 | |
@@ -86,6 +86,8 @@ RUN echo 'xpack.license.self_generated.type: trial' >>config/elasticsearch.yml | |
RUN echo 'xpack.license.self_generated.type: basic' >>config/elasticsearch.yml | |
{% endif -%} | |
+RUN echo 'xpack.ml.enabled: false' >>config/elasticsearch.yml | |
+ | |
USER 0 | |
# Set gid to 0 for elasticsearch and make group permission similar to that of user | |
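The patch appends a single `xpack.ml.enabled: false` line to the generated `elasticsearch.yml` during the image build, presumably because the 6.x X-Pack ML native binaries are x86_64-only and the node won't start on aarch64 with ML enabled. A minimal local sketch of what that `RUN echo ... >>` step produces (the /tmp paths are illustrative, not from the session):

```shell
# Simulate the appended setting and confirm it lands in the config file
mkdir -p /tmp/es-demo/config
printf 'cluster.name: docker-cluster\n' > /tmp/es-demo/config/elasticsearch.yml
echo 'xpack.ml.enabled: false' >> /tmp/es-demo/config/elasticsearch.yml
grep -c 'xpack.ml.enabled: false' /tmp/es-demo/config/elasticsearch.yml   # prints 1
```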
linaro@ubuntu:~/projects/docker-hub/elasticsearch-docker-new [6.2]$ ELASTIC_VERSION=6.2.4 make | |
if [[ -f "docker-compose-oss.yml" ]]; then docker-compose -f docker-compose-oss.yml down && docker-compose -f docker-compose-oss.yml rm -f -v; fi; rm -f docker-compose-oss.yml; rm -f tests/docker-compose-oss.yml; rm -f build/elasticsearch/Dockerfile-oss; if [[ -f "docker-compose-basic.yml" ]]; then docker-compose -f docker-compose-basic.yml down && docker-compose -f docker-compose-basic.yml rm -f -v; fi; rm -f docker-compose-basic.yml; rm -f tests/docker-compose-basic.yml; rm -f build/elasticsearch/Dockerfile-basic; if [[ -f "docker-compose-platinum.yml" ]]; then docker-compose -f docker-compose-platinum.yml down && docker-compose -f docker-compose-platinum.yml rm -f -v; fi; rm -f docker-compose-platinum.yml; rm -f tests/docker-compose-platinum.yml; rm -f build/elasticsearch/Dockerfile-platinum; | |
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string. | |
Removing network elasticsearchdockernew_esnet | |
WARNING: Network elasticsearchdockernew_esnet not found. | |
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string. | |
No stopped containers | |
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string. | |
Removing network elasticsearchdockernew_esnet | |
WARNING: Network elasticsearchdockernew_esnet not found. | |
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string. | |
No stopped containers | |
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string. | |
Removing network elasticsearchdockernew_esnet | |
WARNING: Network elasticsearchdockernew_esnet not found. | |
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string. | |
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string. | |
No stopped containers | |
jinja2 -D elastic_version='6.2.4' -D staging_build_num='' -D artifacts_dir='' -D image_flavor='oss' templates/Dockerfile.j2 > build/elasticsearch/Dockerfile-oss; jinja2 -D elastic_version='6.2.4' -D staging_build_num='' -D artifacts_dir='' -D image_flavor='basic' templates/Dockerfile.j2 > build/elasticsearch/Dockerfile-basic; jinja2 -D elastic_version='6.2.4' -D staging_build_num='' -D artifacts_dir='' -D image_flavor='platinum' templates/Dockerfile.j2 > build/elasticsearch/Dockerfile-platinum; | |
pyfiglet -f puffy -w 160 "Building: oss"; docker build -t docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4 -f build/elasticsearch/Dockerfile-oss build/elasticsearch; if [[ oss == basic ]]; then docker tag docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4 docker.elastic.co/elasticsearch/elasticsearch:6.2.4; fi; pyfiglet -f puffy -w 160 "Building: basic"; docker build -t docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.4 -f build/elasticsearch/Dockerfile-basic build/elasticsearch; if [[ basic == basic ]]; then docker tag docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.4 docker.elastic.co/elasticsearch/elasticsearch:6.2.4; fi; pyfiglet -f puffy -w 160 "Building: platinum"; docker build -t docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4 -f build/elasticsearch/Dockerfile-platinum build/elasticsearch; if [[ platinum == basic ]]; then docker tag docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4 docker.elastic.co/elasticsearch/elasticsearch:6.2.4; fi; | |
___ _ _ | |
( _`\ _ (_ ) ( ) _ | |
| (_) ) _ _ (_) | | _| |(_) ___ __ _ _ ___ ___ | |
| _ <'( ) ( )| | | | /'_` || |/' _ `\ /'_ `\(_) /'_`\ /',__)/',__) | |
| (_) )| (_) || | | | ( (_| || || ( ) |( (_) | _ ( (_) )\__, \\__, \ | |
(____/'`\___/'(_)(___)`\__,_)(_)(_) (_)`\__ |(_) `\___/'(____/(____/ | |
( )_) | | |
\___/' | |
Sending build context to Docker daemon 27.65kB | |
Step 1/28 : FROM centos:7 AS prep_es_files | |
---> 5f65840122d0 | |
Step 2/28 : ENV PATH /usr/share/elasticsearch/bin:$PATH | |
---> Using cache | |
---> 5654db35b6ae | |
Step 3/28 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk | |
---> Using cache | |
---> 362271201b29 | |
Step 4/28 : RUN yum install -y java-1.8.0-openjdk-headless unzip which | |
---> Using cache | |
---> 8d0012f3d9a0 | |
Step 5/28 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -d /usr/share/elasticsearch elasticsearch | |
---> Using cache | |
---> 3294f301a6f4 | |
Step 6/28 : WORKDIR /usr/share/elasticsearch | |
---> Using cache | |
---> 15bf8f40105d | |
Step 7/28 : USER 1000 | |
---> Using cache | |
---> bab9e7d09e01 | |
Step 8/28 : RUN curl -fsSL https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz | tar zx --strip-components=1 | |
---> Using cache | |
---> 9b957496565b | |
Step 9/28 : RUN set -ex && for esdirs in config data logs; do mkdir -p "$esdirs"; done | |
---> Using cache | |
---> 7552ffa6d834 | |
Step 10/28 : RUN for PLUGIN in ingest-user-agent ingest-geoip; do \elasticsearch-plugin install --batch "$PLUGIN"; done | |
---> Using cache | |
---> ae2b7b9a7988 | |
Step 11/28 : COPY --chown=1000:0 elasticsearch.yml log4j2.properties config/ | |
---> Using cache | |
---> a9f24c717cd1 | |
Step 12/28 : RUN echo 'xpack.ml.enabled: false' >>config/elasticsearch.yml | |
---> Running in 01213042b959 | |
Removing intermediate container 01213042b959 | |
---> 78c035c9e1e7 | |
Step 13/28 : USER 0 | |
---> Running in b3764de708df | |
Removing intermediate container b3764de708df | |
---> 96603176da19 | |
Step 14/28 : RUN chown -R elasticsearch:0 . && chmod -R g=u /usr/share/elasticsearch | |
---> Running in 2aa207443b2d | |
Removing intermediate container 2aa207443b2d | |
---> 1e7c41b74f78 | |
Step 15/28 : FROM centos:7 | |
---> 5f65840122d0 | |
Step 16/28 : LABEL maintainer "Elastic Docker Team <docker@elastic.co>" | |
---> Using cache | |
---> de3f8cd76e43 | |
Step 17/28 : ENV ELASTIC_CONTAINER true | |
---> Using cache | |
---> 80b9d3d06e52 | |
Step 18/28 : ENV PATH /usr/share/elasticsearch/bin:$PATH | |
---> Using cache | |
---> 448ee7a3cf01 | |
Step 19/28 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk | |
---> Using cache | |
---> 63b6fba973ab | |
Step 20/28 : RUN yum update -y && yum install -y nc java-1.8.0-openjdk-headless unzip wget which && yum clean all | |
---> Using cache | |
---> f82b703d0457 | |
Step 21/28 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -G 0 -d /usr/share/elasticsearch elasticsearch && chmod 0775 /usr/share/elasticsearch && chgrp 0 /usr/share/elasticsearch | |
---> Using cache | |
---> 231b1d56e5cf | |
Step 22/28 : WORKDIR /usr/share/elasticsearch | |
---> Using cache | |
---> 9512b2334011 | |
Step 23/28 : COPY --from=prep_es_files --chown=1000:0 /usr/share/elasticsearch /usr/share/elasticsearch | |
---> d847d5d490e7 | |
Step 24/28 : COPY --chown=1000:0 bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh | |
---> 463d9b21d1a1 | |
Step 25/28 : RUN chgrp 0 /usr/local/bin/docker-entrypoint.sh && chmod g=u /etc/passwd && chmod 0775 /usr/local/bin/docker-entrypoint.sh | |
---> Running in ed1fffd1eda8 | |
Removing intermediate container ed1fffd1eda8 | |
---> 3d680db63698 | |
Step 26/28 : EXPOSE 9200 9300 | |
---> Running in 3bdf217d2731 | |
Removing intermediate container 3bdf217d2731 | |
---> 15b82a0253ef | |
Step 27/28 : ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"] | |
---> Running in 2f01fb189461 | |
Removing intermediate container 2f01fb189461 | |
---> b02aa0219ce0 | |
Step 28/28 : CMD ["eswrapper"] | |
---> Running in eadefd23f9a9 | |
Removing intermediate container eadefd23f9a9 | |
---> 79e07c06dead | |
Successfully built 79e07c06dead | |
Successfully tagged docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4 | |
___ _ _ _ | |
( _`\ _ (_ ) ( ) _ ( ) _ | |
| (_) ) _ _ (_) | | _| |(_) ___ __ _ | |_ _ _ ___ (_) ___ | |
| _ <'( ) ( )| | | | /'_` || |/' _ `\ /'_ `\(_) | '_`\ /'_` )/',__)| | /'___) | |
| (_) )| (_) || | | | ( (_| || || ( ) |( (_) | _ | |_) )( (_| |\__, \| |( (___ | |
(____/'`\___/'(_)(___)`\__,_)(_)(_) (_)`\__ |(_) (_,__/'`\__,_)(____/(_)`\____) | |
( )_) | | |
\___/' | |
Sending build context to Docker daemon 27.65kB | |
Step 1/29 : FROM centos:7 AS prep_es_files | |
---> 5f65840122d0 | |
Step 2/29 : ENV PATH /usr/share/elasticsearch/bin:$PATH | |
---> Using cache | |
---> 5654db35b6ae | |
Step 3/29 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk | |
---> Using cache | |
---> 362271201b29 | |
Step 4/29 : RUN yum install -y java-1.8.0-openjdk-headless unzip which | |
---> Using cache | |
---> 8d0012f3d9a0 | |
Step 5/29 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -d /usr/share/elasticsearch elasticsearch | |
---> Using cache | |
---> 3294f301a6f4 | |
Step 6/29 : WORKDIR /usr/share/elasticsearch | |
---> Using cache | |
---> 15bf8f40105d | |
Step 7/29 : USER 1000 | |
---> Using cache | |
---> bab9e7d09e01 | |
Step 8/29 : RUN curl -fsSL https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz | tar zx --strip-components=1 | |
---> Using cache | |
---> 9b957496565b | |
Step 9/29 : RUN set -ex && for esdirs in config data logs; do mkdir -p "$esdirs"; done | |
---> Using cache | |
---> 7552ffa6d834 | |
Step 10/29 : RUN for PLUGIN in x-pack ingest-user-agent ingest-geoip; do elasticsearch-plugin install --batch "$PLUGIN"; done | |
---> Using cache | |
---> 3fc05fe80eac | |
Step 11/29 : COPY --chown=1000:0 elasticsearch.yml log4j2.properties config/ | |
---> Using cache | |
---> 086b7ae37230 | |
Step 12/29 : RUN echo 'xpack.license.self_generated.type: basic' >>config/elasticsearch.yml | |
---> Using cache | |
---> 1adc4c16937e | |
Step 13/29 : RUN echo 'xpack.ml.enabled: false' >>config/elasticsearch.yml | |
---> Running in 111d70637f84 | |
Removing intermediate container 111d70637f84 | |
---> 25d5c4f2cf93 | |
Step 14/29 : USER 0 | |
---> Running in c80a62b9f521 | |
Removing intermediate container c80a62b9f521 | |
---> e299533a6cdc | |
Step 15/29 : RUN chown -R elasticsearch:0 . && chmod -R g=u /usr/share/elasticsearch | |
---> Running in 43aa6639afea | |
Removing intermediate container 43aa6639afea | |
---> 027a02af2213 | |
Step 16/29 : FROM centos:7 | |
---> 5f65840122d0 | |
Step 17/29 : LABEL maintainer "Elastic Docker Team <docker@elastic.co>" | |
---> Using cache | |
---> de3f8cd76e43 | |
Step 18/29 : ENV ELASTIC_CONTAINER true | |
---> Using cache | |
---> 80b9d3d06e52 | |
Step 19/29 : ENV PATH /usr/share/elasticsearch/bin:$PATH | |
---> Using cache | |
---> 448ee7a3cf01 | |
Step 20/29 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk | |
---> Using cache | |
---> 63b6fba973ab | |
Step 21/29 : RUN yum update -y && yum install -y nc java-1.8.0-openjdk-headless unzip wget which && yum clean all | |
---> Using cache | |
---> f82b703d0457 | |
Step 22/29 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -G 0 -d /usr/share/elasticsearch elasticsearch && chmod 0775 /usr/share/elasticsearch && chgrp 0 /usr/share/elasticsearch | |
---> Using cache | |
---> 231b1d56e5cf | |
Step 23/29 : WORKDIR /usr/share/elasticsearch | |
---> Using cache | |
---> 9512b2334011 | |
Step 24/29 : COPY --from=prep_es_files --chown=1000:0 /usr/share/elasticsearch /usr/share/elasticsearch | |
---> 00949dd392a3 | |
Step 25/29 : COPY --chown=1000:0 bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh | |
---> e4f47ce8f390 | |
Step 26/29 : RUN chgrp 0 /usr/local/bin/docker-entrypoint.sh && chmod g=u /etc/passwd && chmod 0775 /usr/local/bin/docker-entrypoint.sh | |
---> Running in 5002d4d79277 | |
Removing intermediate container 5002d4d79277 | |
---> ddd656b8962c | |
Step 27/29 : EXPOSE 9200 9300 | |
---> Running in a81821c98674 | |
Removing intermediate container a81821c98674 | |
---> 8e0df2e5b739 | |
Step 28/29 : ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"] | |
---> Running in 24985bfcbfa4 | |
Removing intermediate container 24985bfcbfa4 | |
---> 38fa6133e947 | |
Step 29/29 : CMD ["eswrapper"] | |
---> Running in 07225f17d16b | |
Removing intermediate container 07225f17d16b | |
---> 2d6452594eec | |
Successfully built 2d6452594eec | |
Successfully tagged docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.4 | |
___ _ _ _ _ | |
( _`\ _ (_ ) ( ) _ (_ ) ( )_ _ | |
| (_) ) _ _ (_) | | _| |(_) ___ __ _ _ _ | | _ _ | ,_)(_) ___ _ _ ___ ___ | |
| _ <'( ) ( )| | | | /'_` || |/' _ `\ /'_ `\(_) ( '_`\ | | /'_` )| | | |/' _ `\( ) ( )/' _ ` _ `\ | |
| (_) )| (_) || | | | ( (_| || || ( ) |( (_) | _ | (_) ) | | ( (_| || |_ | || ( ) || (_) || ( ) ( ) | | |
(____/'`\___/'(_)(___)`\__,_)(_)(_) (_)`\__ |(_) | ,__/'(___)`\__,_)`\__)(_)(_) (_)`\___/'(_) (_) (_) | |
( )_) | | | | |
\___/' (_) | |
Sending build context to Docker daemon 27.65kB | |
Step 1/30 : FROM centos:7 AS prep_es_files | |
---> 5f65840122d0 | |
Step 2/30 : ENV PATH /usr/share/elasticsearch/bin:$PATH | |
---> Using cache | |
---> 5654db35b6ae | |
Step 3/30 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk | |
---> Using cache | |
---> 362271201b29 | |
Step 4/30 : RUN yum install -y java-1.8.0-openjdk-headless unzip which | |
---> Using cache | |
---> 8d0012f3d9a0 | |
Step 5/30 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -d /usr/share/elasticsearch elasticsearch | |
---> Using cache | |
---> 3294f301a6f4 | |
Step 6/30 : WORKDIR /usr/share/elasticsearch | |
---> Using cache | |
---> 15bf8f40105d | |
Step 7/30 : USER 1000 | |
---> Using cache | |
---> bab9e7d09e01 | |
Step 8/30 : RUN curl -fsSL https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz | tar zx --strip-components=1 | |
---> Using cache | |
---> 9b957496565b | |
Step 9/30 : RUN set -ex && for esdirs in config data logs; do mkdir -p "$esdirs"; done | |
---> Using cache | |
---> 7552ffa6d834 | |
Step 10/30 : RUN for PLUGIN in x-pack ingest-user-agent ingest-geoip; do elasticsearch-plugin install --batch "$PLUGIN"; done | |
---> Using cache | |
---> 3fc05fe80eac | |
Step 11/30 : COPY --chown=1000:0 elasticsearch.yml log4j2.properties config/ | |
---> Using cache | |
---> 086b7ae37230 | |
Step 12/30 : COPY --chown=1000:0 x-pack/log4j2.properties config/x-pack/ | |
---> Using cache | |
---> c829aa89671d | |
Step 13/30 : RUN echo 'xpack.license.self_generated.type: trial' >>config/elasticsearch.yml | |
---> Using cache | |
---> 05287eea4c86 | |
Step 14/30 : RUN echo 'xpack.ml.enabled: false' >>config/elasticsearch.yml | |
---> Running in fe508eb73a1d | |
Removing intermediate container fe508eb73a1d | |
---> fab47213cbbf | |
Step 15/30 : USER 0 | |
---> Running in e0f959d5ba37 | |
Removing intermediate container e0f959d5ba37 | |
---> dcbddc6b7e0d | |
Step 16/30 : RUN chown -R elasticsearch:0 . && chmod -R g=u /usr/share/elasticsearch | |
---> Running in b64832b0ec73 | |
Removing intermediate container b64832b0ec73 | |
---> 016cb47d0773 | |
Step 17/30 : FROM centos:7 | |
---> 5f65840122d0 | |
Step 18/30 : LABEL maintainer "Elastic Docker Team <docker@elastic.co>" | |
---> Using cache | |
---> de3f8cd76e43 | |
Step 19/30 : ENV ELASTIC_CONTAINER true | |
---> Using cache | |
---> 80b9d3d06e52 | |
Step 20/30 : ENV PATH /usr/share/elasticsearch/bin:$PATH | |
---> Using cache | |
---> 448ee7a3cf01 | |
Step 21/30 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk | |
---> Using cache | |
---> 63b6fba973ab | |
Step 22/30 : RUN yum update -y && yum install -y nc java-1.8.0-openjdk-headless unzip wget which && yum clean all | |
---> Using cache | |
---> f82b703d0457 | |
Step 23/30 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -G 0 -d /usr/share/elasticsearch elasticsearch && chmod 0775 /usr/share/elasticsearch && chgrp 0 /usr/share/elasticsearch | |
---> Using cache | |
---> 231b1d56e5cf | |
Step 24/30 : WORKDIR /usr/share/elasticsearch | |
---> Using cache | |
---> 9512b2334011 | |
Step 25/30 : COPY --from=prep_es_files --chown=1000:0 /usr/share/elasticsearch /usr/share/elasticsearch | |
---> ba0a48c656e8 | |
Step 26/30 : COPY --chown=1000:0 bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh | |
---> 82985222210a | |
Step 27/30 : RUN chgrp 0 /usr/local/bin/docker-entrypoint.sh && chmod g=u /etc/passwd && chmod 0775 /usr/local/bin/docker-entrypoint.sh | |
---> Running in d4a7b44699cc | |
Removing intermediate container d4a7b44699cc | |
---> 60c382ec2688 | |
Step 28/30 : EXPOSE 9200 9300 | |
---> Running in 359a238d9937 | |
Removing intermediate container 359a238d9937 | |
---> 93e760d637af | |
Step 29/30 : ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"] | |
---> Running in 0064e961320c | |
Removing intermediate container 0064e961320c | |
---> a81592b7c58d | |
Step 30/30 : CMD ["eswrapper"] | |
---> Running in ca184c8e230b | |
Removing intermediate container ca184c8e230b | |
---> 4986080400a1 | |
Successfully built 4986080400a1 | |
Successfully tagged docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4 | |
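The three `docker build` invocations above are the Makefile's flavor loop after expansion; the `if [[ flavor == basic ]]` fragments decide which image also receives the plain `elasticsearch` tag. A docker-free sketch of the tag set it produces, assuming the same registry and version:

```shell
# Reconstruct the image tags the Makefile's flavor loop emits (no docker needed)
for flavor in oss basic platinum; do
  echo "docker.elastic.co/elasticsearch/elasticsearch-${flavor}:6.2.4"
  if [ "$flavor" = "basic" ]; then
    # only the 'basic' flavor is additionally tagged as the default image
    echo "docker.elastic.co/elasticsearch/elasticsearch:6.2.4"
  fi
done
```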
flake8 tests | |
jinja2 -D elastic_registry='docker.elastic.co' -D version_tag='6.2.4' -D image_flavor='oss' templates/docker-compose.yml.j2 > docker-compose-oss.yml; jinja2 -D image_flavor='oss' templates/docker-compose-fragment.yml.j2 > tests/docker-compose-oss.yml; jinja2 -D elastic_registry='docker.elastic.co' -D version_tag='6.2.4' -D image_flavor='basic' templates/docker-compose.yml.j2 > docker-compose-basic.yml; jinja2 -D image_flavor='basic' templates/docker-compose-fragment.yml.j2 > tests/docker-compose-basic.yml; jinja2 -D elastic_registry='docker.elastic.co' -D version_tag='6.2.4' -D image_flavor='platinum' templates/docker-compose.yml.j2 > docker-compose-platinum.yml; jinja2 -D image_flavor='platinum' templates/docker-compose-fragment.yml.j2 > tests/docker-compose-platinum.yml; | |
docker run --rm -v "/home/linaro/projects/docker-hub/elasticsearch-docker-new:/mnt" bash rm -rf /mnt/tests/datadir1 /mnt/tests/datadir2 | |
pyfiglet -w 160 -f puffy "test: oss single"; ./bin/pytest --image-flavor=oss --single-node tests; pyfiglet -w 160 -f puffy "test: oss multi"; ./bin/pytest --image-flavor=oss tests; pyfiglet -w 160 -f puffy "test: basic single"; ./bin/pytest --image-flavor=basic --single-node tests; pyfiglet -w 160 -f puffy "test: basic multi"; ./bin/pytest --image-flavor=basic tests; pyfiglet -w 160 -f puffy "test: platinum single"; ./bin/pytest --image-flavor=platinum --single-node tests; pyfiglet -w 160 -f puffy "test: platinum multi"; ./bin/pytest --image-flavor=platinum tests; | |
_ _ _ | |
( )_ ( )_ _ (_ ) | |
| ,_) __ ___ | ,_) _ _ ___ ___ ___ (_) ___ __ | | __ | |
| | /'__`\/',__)| | (_) /'_`\ /',__)/',__) /',__)| |/' _ `\ /'_ `\ | | /'__`\ | |
| |_ ( ___/\__, \| |_ _ ( (_) )\__, \\__, \ \__, \| || ( ) |( (_) | | | ( ___/ | |
`\__)`\____)(____/`\__)(_) `\___/'(____/(____/ (____/(_)(_) (_)`\__ |(___)`\____) | |
( )_) | | |
\___/' | |
Creating network "elasticsearchdockernew_esnet" with driver "bridge" | |
Creating volume "elasticsearchdockernew_esdata1" with local driver | |
Creating volume "elasticsearchdockernew_esdata2" with local driver | |
Creating elasticsearch1 | |
========================================= test session starts ========================================== | |
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0 -- /home/linaro/projects/docker-hub/elasticsearch-docker-new/venv/bin/python3.6 | |
cachedir: .pytest_cache | |
rootdir: /home/linaro/projects/docker-hub/elasticsearch-docker-new, inifile: | |
plugins: testinfra-1.6.0 | |
collected 32 items | |
tests/test_base_os.py::test_base_os[docker://elasticsearch1] PASSED [ 3%] | |
tests/test_base_os.py::test_java_home_env_var[docker://elasticsearch1] PASSED [ 6%] | |
tests/test_base_os.py::test_no_core_files_exist_in_root[docker://elasticsearch1] PASSED [ 9%] | |
tests/test_base_os.py::test_all_elasticsearch_files_are_gid_0[docker://elasticsearch1] PASSED [ 12%] | |
tests/test_datadirs.py::test_es_can_write_to_bind_mounted_datadir[docker://elasticsearch1] ERROR [ 15%] | |
tests/test_datadirs.py::test_es_can_write_to_bind_mounted_datadir_with_different_uid[docker://elasticsearch1] ERROR [ 18%] | |
tests/test_datadirs.py::test_es_can_run_with_random_uid_and_write_to_bind_mounted_datadir[docker://elasticsearch1] ERROR [ 21%] | |
tests/test_es_plugins.py::test_uninstall_xpack_plugin[docker://elasticsearch1] SKIPPED [ 25%] | |
tests/test_es_plugins.py::test_IngestUserAgentPlugin_is_installed[docker://elasticsearch1] ERROR [ 28%] | |
tests/test_es_plugins.py::test_IngestGeoIpPlugin_is_installed[docker://elasticsearch1] ERROR [ 31%] | |
tests/test_logging.py::test_elasticsearch_logs_are_in_docker_logs[docker://elasticsearch1] ERROR [ 34%] | |
tests/test_logging.py::test_security_audit_logs_are_in_docker_logs[docker://elasticsearch1] SKIPPED [ 37%] | |
tests/test_logging.py::test_info_level_logs_are_in_docker_logs[docker://elasticsearch1] ERROR [ 40%] | |
tests/test_process.py::test_process_is_pid_1[docker://elasticsearch1] ERROR [ 43%] | |
tests/test_process.py::test_process_is_running_as_the_correct_user[docker://elasticsearch1] ERROR [ 46%] | |
tests/test_process.py::test_process_is_running_the_correct_version[docker://elasticsearch1] ERROR [ 50%] | |
tests/test_settings.py::test_setting_node_name_with_an_environment_variable[docker://elasticsearch1] ERROR [ 53%] | |
tests/test_settings.py::test_setting_cluster_name_with_an_environment_variable[docker://elasticsearch1] ^C ERROR [ 56%] | |
tests/test_settings.py::test_setting_heapsize_with_an_environment_variable[docker://elasticsearch1] ERROR [ 59%] | |
tests/test_settings.py::test_parameter_containing_underscore_with_an_environment_variable[docker://elasticsearch1] ERROR [ 62%] | |
tests/test_settings.py::test_envar_not_including_a_dot_is_not_presented_to_elasticsearch[docker://elasticsearch1] ERROR [ 65%] | |
tests/test_settings.py::test_capitalized_envvar_is_not_presented_to_elasticsearch[docker://elasticsearch1] ERROR [ 68%] | |
tests/test_settings.py::test_setting_boostrap_memory_lock_with_an_environment_variable[docker://elasticsearch1] ERROR [ 71%] | |
tests/test_user.py::test_group_properties[docker://elasticsearch1] ERROR [ 75%] | |
tests/test_user.py::test_user_properties[docker://elasticsearch1] ERROR [ 78%] | |
tests/test_xpack_basic_index_crud.py::test_bootstrap_password_change[docker://elasticsearch1] SKIPPED [ 81%] | |
tests/test_xpack_basic_index_crud.py::test_create_index[docker://elasticsearch1] ERROR [ 84%] | |
tests/test_xpack_basic_index_crud.py::test_search[docker://elasticsearch1] ERROR [ 87%] | |
tests/test_xpack_basic_index_crud.py::test_delete_index[docker://elasticsearch1] ERROR [ 90%] | |
tests/test_xpack_basic_index_crud.py::test_search_on_nonexistent_index_fails[docker://elasticsearch1] ERROR [ 93%] | |
tests/test_xpack_basic_index_crud.py::test_cluster_is_healthy_after_indexing_data[docker://elasticsearch1] ERROR [ 96%] | |
tests/test_xpack_basic_index_crud.py::test_cgroup_os_stats_are_available[docker://elasticsearch1] ERROR [100%] | |
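Every setup ERROR in the run above comes from the same place: the harness issues `GET /_cluster/health` against `localhost:9200` and the TCP connect fails with `[Errno 111] Connection refused` (see the tracebacks below), meaning nothing was listening on that port — most likely the container's 9200 was never published to the host, or Elasticsearch exited before the tests ran. A minimal stdlib-only sketch of that probe, useful for checking the endpoint outside pytest (the `localhost:9200` default is an assumption matching the traceback, not something the test suite exports):

```python
import socket

def probe(host="localhost", port=9200, timeout=2.0):
    """Attempt the same TCP connect that urllib3's create_connection makes.

    Returns "open" on success, "refused" on ECONNREFUSED (errno 111,
    as seen in the traceback below), and "unreachable" for timeouts or
    any other socket error.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"
    except OSError:
        return "unreachable"

if __name__ == "__main__":
    # "refused" means nothing is listening on 9200 -- the container port
    # mapping or the Elasticsearch process itself is the thing to debug,
    # not the test code.
    print(probe())
```

If this prints `refused` while the container is supposedly running, check `docker ps` for the port mapping and the container logs for an early Elasticsearch exit before re-running the suite.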
================================================ ERRORS ================================================ | |
_________ ERROR at setup of test_es_can_write_to_bind_mounted_datadir[docker://elasticsearch1] _________ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab9e9668> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9812b0>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9e9780> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab9e9668> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9e9780>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9e9898> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab981128>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9812b0> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab9e9668> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9812b0>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9e9780> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be | |
# replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab9e9668> | |
_stacktrace = <traceback object at 0xffffab99c788> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
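The frame above shows why `Retry(total=0, connect=None, read=False, redirect=None)` fails on the very first error: `increment()` decrements `total` from 0 to -1, `is_exhausted()` then returns True, and the underlying `NewConnectionError` gets wrapped in `MaxRetryError`. A simplified sketch of that exhaustion logic (an illustrative model, not the real urllib3 class):

```python
# Simplified model of urllib3's Retry.increment() / is_exhausted() behaviour.
# This is a sketch for illustration only, not the real urllib3 implementation.
class MaxRetryError(Exception):
    pass

class Retry:
    def __init__(self, total):
        self.total = total

    def is_exhausted(self):
        return self.total < 0

    def increment(self, error=None):
        new_retry = Retry(self.total - 1)  # total=0 becomes -1 on the first failure
        if new_retry.is_exhausted():
            # This is what surfaces as "Max retries exceeded with url: ..."
            raise MaxRetryError(error)
        return new_retry

retry = Retry(total=0)
try:
    retry.increment(error=ConnectionRefusedError(111, "Connection refused"))
except MaxRetryError:
    print("exhausted on first error")
```

So with `total=0` there is no second attempt at all; the refused connection is rethrown immediately as `MaxRetryError`.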
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/universal_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# Yaml variables in docker-compose (`user:`) need to be strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab981128>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab9812b0> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab9e9390>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
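The root cause of this whole traceback is that nothing was listening on localhost:9200 when the fixture's `assert_healthy()` fired ([Errno 111]), even though the container launches fine when run by hand. A common mitigation, sketched here under the assumption that the container does eventually bind the port, is to poll the port before the fixture starts issuing HTTP requests (`wait_for_port` is a hypothetical helper, not part of the test suite above):

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=0.5):
    """Poll until a TCP connect to (host, port) succeeds, or raise TimeoutError.

    Hypothetical helper: retries the raw TCP handshake so HTTP-level code
    never sees [Errno 111] Connection refused during container startup.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection performs the full TCP handshake.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    raise TimeoutError("%s:%d not accepting connections after %ss"
                       % (host, port, timeout))
```

Calling something like `wait_for_port('localhost', 9200)` at the top of the `elasticsearch` fixture (placement is an assumption) would let `assert_healthy()` run only once the container is actually reachable, instead of burning the retry budget on connection-refused errors.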
ERROR at setup of test_es_can_write_to_bind_mounted_datadir_with_different_uid[docker://elasticsearch1] | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
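[Errno 111] here means the TCP handshake itself was rejected: the kernel found no listener on localhost:9200, so the Elasticsearch container is either not running yet or not publishing the port. The failure reproduces with a bare socket, independent of requests/urllib3 (the port below is an arbitrary ephemeral one, chosen only because nothing listens on it in this sketch):

```python
import errno
import socket

# Reserve an ephemeral port, then close the socket so nothing listens there.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

caught = None
try:
    socket.create_connection(("127.0.0.1", closed_port), timeout=1)
except ConnectionRefusedError as exc:
    # The same ECONNREFUSED ([Errno 111] on Linux) seen in the frame above.
    caught = exc

print("connect refused:", caught is not None)
```

Seeing this error from the test run but not from `docker run` by hand usually points at timing (the fixture polls before the container binds the port) or at the port not being published to the host in the compose file.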
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab84ef28> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab833198>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab84eb38> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent, otherwise will raise HostChangedError. When ``False``, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab84ef28> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab84eb38>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab84e668> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError

During handling of the above exception, another exception occurred:

self = <requests.adapters.HTTPAdapter object at 0xffffab8330b8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab833198>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab84ef28>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab833198>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab84eb38>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# be replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab84ef28>
_stacktrace = <traceback object at 0xffffab69cb48>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and a the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError

During handling of the above exception, another exception occurred:
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/univeral_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# Yaml variables in docker-compose (`user:`) need to be a strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffab8330b8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab833198>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab84e748>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
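<EDITOR'S NOTE: every chained traceback above reduces to one root cause: nothing was listening on localhost:9200 when the fixture's assert_healthy() first ran, so urllib3's NewConnectionError ([Errno 111] Connection refused) was wrapped into MaxRetryError and finally requests.exceptions.ConnectionError. A stdlib-only readiness probe, run before the test session, separates "cluster not up yet" from a real test failure. This is a hypothetical sketch, not part of the test suite; the URL is taken from the log and the function name and timeouts are my own.>

```python
import time
import urllib.request
from urllib.error import URLError


def wait_for_http(url, timeout=30.0, interval=1.0):
    """Poll `url` until it answers or `timeout` seconds elapse.

    Returns True on the first successful HTTP response, False if the
    deadline passes with the connection still being refused.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Any response at all means the socket accepted the connection.
            with urllib.request.urlopen(url, timeout=interval):
                return True
        except (URLError, OSError):
            # Connection refused / not yet bound: back off and retry.
            time.sleep(interval)
    return False


# Usage (endpoint from the traceback above):
# wait_for_http('http://localhost:9200/_cluster/health', timeout=60)
```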
ERROR at setup of test_es_can_run_with_random_uid_and_write_to_bind_mounted_datadir[docker://elasticsearch1]
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
>               (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
An host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
>           raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
An host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
>           sock.connect(sa)
E           ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab607780>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6076a0>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab607908>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
>                                                     chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab607780>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab607908>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6077b8>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
>               conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
>           self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
>           self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
>           self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
>       self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
>               self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>
def connect(self):
>       conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
>               self, "Failed to establish a new connection: %s" % e)
E           requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffab860be0>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6076a0>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
>                   timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab607780>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6076a0>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab607908>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# be replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
>                                       _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab607780> | |
_stacktrace = <traceback object at 0xffffab60c1c8> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/univeral_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# Yaml variables in docker-compose (`user:`) need to be strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab860be0>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6076a0> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab607048>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
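<EDITOR'S NOTE: both reports above bottom out in the same ECONNREFUSED (errno 111) against localhost:9200, i.e. nothing was listening on that port when the fixture's retries ran out — typically because Elasticsearch inside the container was still starting, had already exited, or was not published to the host's port 9200. A quick way to check reachability outside of pytest is a plain socket probe; the host and port below are simply copied from the log, not part of the test suite:>

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # ConnectionRefusedError (errno 111), timeouts and DNS failures all land here.
        return False

if __name__ == "__main__":
    # localhost:9200 as seen in the traceback above
    print(port_open("localhost", 9200))
```

<If this prints False while the container is up, the usual suspects are the port mapping (e.g. `docker run -p 9200:9200 ...` was omitted) or Elasticsearch still initializing when the probe ran.>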
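<EDITOR'S NOTE: the "Max retries exceeded" wording can be confusing next to `Retry(total=0, connect=None, read=False, redirect=None)` in the log: with `total=0` the very first connection error already exhausts the retry budget, so urllib3 re-raises immediately as MaxRetryError. A minimal sketch of that behaviour, calling urllib3's `Retry.increment` directly (the method and URL are placeholders echoing the log):>

```python
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

retries = Retry(total=0, connect=None, read=False, redirect=None)
try:
    # increment() is what the connection pool calls after a failed attempt;
    # with total=0 the decremented counter goes negative, i.e. exhausted.
    retries.increment(method="GET", url="/_cluster/health",
                      error=ConnectionRefusedError(111, "Connection refused"))
except MaxRetryError as exc:
    print("exhausted:", exc.reason)
```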
__________ ERROR at setup of test_IngestUserAgentPlugin_is_installed[docker://elasticsearch1] __________ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab6ced30> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6ce5c0>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab680908> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab6ced30> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab680908>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab680048> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab6cea58>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6ce5c0> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab6ced30> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6ce5c0>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab680908> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be | |
# replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab6ced30> | |
_stacktrace = <traceback object at 0xffffab67ac88> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/univeral_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# YAML variables in docker-compose (`user:`) need to be strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab6cea58>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6ce5c0> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab680da0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
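<NOT PART OF THE OUTPUT ABOVE> The failure boils down to `[Errno 111] Connection refused` on `localhost:9200`: the fixture's very first HTTP request races the container start-up, and the `retrying` decorator gives up before Elasticsearch is listening. A minimal sketch of a pre-flight TCP wait that could run before `assert_healthy()` — `wait_for_port` is my own name, not something from the test suite:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP connection to (host, port) succeeds.

    Raises TimeoutError if the port is still refusing connections after
    `timeout` seconds, so the caller fails fast with a clear message
    instead of a MaxRetryError buried in a long traceback.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            # create_connection raises ConnectionRefusedError (an OSError)
            # while nothing is listening yet.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(
                    '%s:%s not reachable after %ss' % (host, port, timeout))
            time.sleep(interval)
```

Calling `wait_for_port('localhost', 9200)` at the top of the fixture's `__init__` would separate "the container never came up" from genuine health-check failures.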
____________ ERROR at setup of test_IngestGeoIpPlugin_is_installed[docker://elasticsearch1] ____________ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab56c278> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c668>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c358> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab56c278> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c358>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c400> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab56c0b8>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c668> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab56c278> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c668>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c358> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be | |
# be replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab56c278> | |
_stacktrace = <traceback object at 0xffffab56df08> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
        :return: A new ``Retry`` object.
        """
        if self.total is False and error:
            # Disabled, indicate to re-raise the error.
            raise six.reraise(type(error), error, _stacktrace)

        total = self.total
        if total is not None:
            total -= 1

        connect = self.connect
        read = self.read
        redirect = self.redirect
        cause = 'unknown'
        status = None
        redirect_location = None

        if error and self._is_connection_error(error):
            # Connect retry?
            if connect is False:
                raise six.reraise(type(error), error, _stacktrace)
            elif connect is not None:
                connect -= 1

        elif error and self._is_read_error(error):
            # Read retry?
            if read is False or not self._is_method_retryable(method):
                raise six.reraise(type(error), error, _stacktrace)
            elif read is not None:
                read -= 1

        elif response and response.get_redirect_location():
            # Redirect retry?
            if redirect is not None:
                redirect -= 1
            cause = 'too many redirects'
            redirect_location = response.get_redirect_location()
            status = response.status

        else:
            # Incrementing because of a server error like a 500 in
            # status_forcelist and a the given method is in the whitelist
            cause = ResponseError.GENERIC_ERROR
            if response and response.status:
                cause = ResponseError.SPECIFIC_ERROR.format(
                    status_code=response.status)
                status = response.status

        history = self.history + (RequestHistory(method, url, error, status, redirect_location),)

        new_retry = self.new(
            total=total,
            connect=connect, read=read, redirect=redirect,
            history=history)

        if new_retry.is_exhausted():
>           raise MaxRetryError(_pool, url, error or ResponseError(cause))
E           requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518>: Failed to establish a new connection: [Errno 111] Connection refused',))

venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError

During handling of the above exception, another exception occurred:

host = <testinfra.host.Host object at 0xffffaba58898>
    @fixture()
    def elasticsearch(host):
        class Elasticsearch():
            bootstrap_pwd = "pleasechangeme"

            def __init__(self):
                self.url = 'http://localhost:9200'
                if config.getoption('--image-flavor') == 'platinum':
                    self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
                else:
                    self.auth = ''
                self.assert_healthy()
                self.process = host.process.get(comm='java')
                # Start each test with a clean slate.
                assert self.load_index_template().status_code == codes.ok
                assert self.delete().status_code == codes.ok

            def reset(self):
                """Reset Elasticsearch by destroying and recreating the containers."""
                pytest_unconfigure(config)
                pytest_configure(config)

            @retry(**retry_settings)
            def get(self, location='/', **kwargs):
                return requests.get(self.url + location, auth=self.auth, **kwargs)

            @retry(**retry_settings)
            def put(self, location='/', **kwargs):
                return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)

            @retry(**retry_settings)
            def post(self, location='/%s/1' % default_index, **kwargs):
                return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)

            @retry(**retry_settings)
            def delete(self, location='/_all', **kwargs):
                return requests.delete(self.url + location, auth=self.auth, **kwargs)

            def get_root_page(self):
                return self.get('/').json()

            def get_cluster_health(self):
                return self.get('/_cluster/health').json()

            def get_node_count(self):
                return self.get_cluster_health()['number_of_nodes']

            def get_cluster_status(self):
                return self.get_cluster_health()['status']

            def get_node_os_stats(self):
                """Return an array of node OS statistics."""
                return self.get('/_nodes/stats/os').json()['nodes'].values()

            def get_node_plugins(self):
                """Return an array of node plugins."""
                nodes = self.get('/_nodes/plugins').json()['nodes'].values()
                return [node['plugins'] for node in nodes]

            def get_node_thread_pool_bulk_queue_size(self):
                """Return an array of thread_pool bulk queue size settings for nodes."""
                nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
                return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]

            def get_node_jvm_stats(self):
                """Return an array of node JVM statistics."""
                nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
                return [node['jvm'] for node in nodes]

            def get_node_mlockall_state(self):
                """Return an array of the mlockall values."""
                nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
                return [node['process']['mlockall'] for node in nodes]

            @retry(**retry_settings)
            def set_password(self, username, password):
                return self.put('/_xpack/security/user/%s/_password' % username,
                                json={"password": password})

            def query_all(self, index=default_index):
                return self.get('/%s/_search' % index)

            def create_index(self, index=default_index):
                return self.put('/' + index)

            def delete_index(self, index=default_index):
                return self.delete('/' + index)

            def load_index_template(self):
                template = {
                    'template': '*',
                    'settings': {
                        'number_of_shards': 2,
                        'number_of_replicas': 0,
                    }
                }
                return self.put('/_template/univeral_template', json=template)

            def load_test_data(self):
                self.create_index()
                return self.post(
                    data=open('tests/testdata.json').read(),
                    params={"refresh": "wait_for"}
                )

            @retry(**retry_settings)
            def assert_healthy(self):
                if config.getoption('--single-node'):
                    assert self.get_node_count() == 1
                    assert self.get_cluster_status() in ['yellow', 'green']
                else:
                    assert self.get_node_count() == 2
                    assert self.get_cluster_status() == 'green'

            def uninstall_plugin(self, plugin_name):
                # This will run on only one host, but this is OK for the moment.
                # TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
                uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
                                                      "-s",
                                                      "remove",
                                                      "{}".format(plugin_name)]))
                # Reset elasticsearch to its original state.
                self.reset()
                return uninstall_output

            def assert_bind_mount_data_dir_is_writable(self,
                                                       datadir1="tests/datadir1",
                                                       datadir2="tests/datadir2",
                                                       process_uid='',
                                                       datadir_uid=1000,
                                                       datadir_gid=0):
                cwd = os.getcwd()
                (datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
                                                        os.path.join(cwd, datadir2))
                config.option.mount_datavolume1 = datavolume1_path
                config.option.mount_datavolume2 = datavolume2_path
                # YAML variables in docker-compose (`user:`) need to be strings.
                config.option.process_uid = "{!s}".format(process_uid)
                # Ensure the defined data dirs are empty before the tests.
                proc1 = delete_dir(datavolume1_path)
                proc2 = delete_dir(datavolume2_path)
                assert proc1.returncode == 0
                assert proc2.returncode == 0
                create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
                create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
                # Force Elasticsearch to re-run with the new parameters.
                self.reset()
                self.assert_healthy()
                # Revert Elasticsearch back to its datadir defaults for the next tests.
                config.option.mount_datavolume1 = None
                config.option.mount_datavolume2 = None
                config.option.process_uid = ''
                self.reset()
                # Finally, clean up the temp dirs used for the bind mounts.
                delete_dir(datavolume1_path)
                delete_dir(datavolume2_path)

            def es_cmdline(self):
                return host.file("/proc/1/cmdline").content_string

            def run_command_on_host(self, command):
                return host.run(command)

            def get_hostname(self):
                return host.run('hostname').stdout.strip()

            def get_docker_log(self):
                proc = run(['docker-compose',
                            '-f',
                            'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
                            'logs',
                            self.get_hostname()],
                           stdout=PIPE)
                return proc.stdout.decode()

            def assert_in_docker_log(self, string):
                log = self.get_docker_log()
                try:
                    assert string in log
                except AssertionError:
                    print(log)
                    raise

            def assert_not_in_docker_log(self, string):
                log = self.get_docker_log()
                try:
                    assert string not in log
                except AssertionError:
                    print(log)
                    raise

>       return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
    self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
    raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
    six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
    raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
    assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
    return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
    return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
    raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
    six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
    raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
    return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
    return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
    return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
    resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
    r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.adapters.HTTPAdapter object at 0xffffab56c0b8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab56c668>
verify = True, cert = None, proxies = OrderedDict()

    def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) <timeouts>` tuple.
        :type timeout: float or tuple
        :param verify: (optional) Whether to verify SSL certificates.
        :param cert: (optional) Any user-provided SSL certificate to be trusted.
        :param proxies: (optional) The proxies dictionary to apply to the request.
        :rtype: requests.Response
        """
        conn = self.get_connection(request.url, proxies)

        self.cert_verify(conn, request.url, verify, cert)
        url = self.request_url(request, proxies)
        self.add_headers(request)

        chunked = not (request.body is None or 'Content-Length' in request.headers)

        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
                timeout = TimeoutSauce(connect=connect, read=read)
            except ValueError as e:
                # this may raise a string formatting error.
                err = ("Invalid timeout {0}. Pass a (connect, read) "
                       "timeout tuple, or a single float to set "
                       "both timeouts to the same value".format(timeout))
                raise ValueError(err)
        else:
            timeout = TimeoutSauce(connect=timeout, read=timeout)

        try:
            if not chunked:
                resp = conn.urlopen(
                    method=request.method,
                    url=url,
                    body=request.body,
                    headers=request.headers,
                    redirect=False,
                    assert_same_host=False,
                    preload_content=False,
                    decode_content=False,
                    retries=self.max_retries,
                    timeout=timeout
                )

            # Send the request.
            else:
                if hasattr(conn, 'proxy_pool'):
                    conn = conn.proxy_pool

                low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)

                try:
                    low_conn.putrequest(request.method,
                                        url,
                                        skip_accept_encoding=True)

                    for header, value in request.headers.items():
                        low_conn.putheader(header, value)

                    low_conn.endheaders()

                    for i in request.body:
                        low_conn.send(hex(len(i))[2:].encode('utf-8'))
                        low_conn.send(b'\r\n')
                        low_conn.send(i)
                        low_conn.send(b'\r\n')
                    low_conn.send(b'0\r\n\r\n')

                    # Receive the response from the server
                    try:
                        # For Python 2.7+ versions, use buffering of HTTP
                        # responses
                        r = low_conn.getresponse(buffering=True)
                    except TypeError:
                        # For compatibility with Python 2.6 versions and back
                        r = low_conn.getresponse()

                    resp = HTTPResponse.from_httplib(
                        r,
                        pool=conn,
                        connection=low_conn,
                        preload_content=False,
                        decode_content=False
                    )
                except:
                    # If we hit any problems here, clean up the connection.
                    # Then, reraise so that we can handle the actual exception.
                    low_conn.close()
                    raise

        except (ProtocolError, socket.error) as err:
            raise ConnectionError(err, request=request)

        except MaxRetryError as e:
            if isinstance(e.reason, ConnectTimeoutError):
                # TODO: Remove this in 3.0.0: see #2811
                if not isinstance(e.reason, NewConnectionError):
                    raise ConnectTimeout(e, request=request)

            if isinstance(e.reason, ResponseError):
                raise RetryError(e, request=request)

            if isinstance(e.reason, _ProxyError):
                raise ProxyError(e, request=request)

>           raise ConnectionError(e, request=request)
E           requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab56c518>: Failed to establish a new connection: [Errno 111] Connection refused',))

venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
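Every retry frame above bottoms out in the same `[Errno 111] Connection refused` against `localhost:9200`, i.e. nothing was listening when the fixture's `assert_healthy()` started polling. A minimal stdlib probe can separate "port never opened" from "cluster is slow to come up" before the `@retry` budget is spent; `port_open` here is a hypothetical helper sketch, not part of tests/fixtures.py:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        # create_connection() raises OSError (ConnectionRefusedError,
        # timeout, DNS failure, ...) when nothing accepts the connection.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not port_open("localhost", 9200):
    print("nothing is listening on localhost:9200 -- "
          "check `docker ps` and the compose port mapping before blaming the fixture")
```

If the probe fails immediately, the container either never started or did not publish port 9200; if it succeeds but `/_cluster/health` still errors, the problem is inside Elasticsearch itself.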
________ ERROR at setup of test_elasticsearch_logs_are_in_docker_logs[docker://elasticsearch1] _________

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address

        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options

        try:
            conn = connection.create_connection(
>               (self.host, self.port), self.timeout, **extra_kw)

venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                          source_address=None, socket_options=None):
        """Connect to *address* and return the socket object.

        Convenience function.  Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object.  Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.  If no *timeout* is supplied, the
        global default timeout setting returned by :func:`getdefaulttimeout`
        is used.  If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith('['):
            host = host.strip('[]')
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)

                if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
                sock.connect(sa)
                return sock

            except socket.error as e:
                err = e
                if sock is not None:
                    sock.close()
                    sock = None

        if err is not None:
>           raise err

venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]

    def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                          source_address=None, socket_options=None):
        """Connect to *address* and return the socket object.

        Convenience function.  Connect to *address* (a 2-tuple ``(host,
        port)``) and return the socket object.  Passing the optional
        *timeout* parameter will set the timeout on the socket instance
        before attempting to connect.  If no *timeout* is supplied, the
        global default timeout setting returned by :func:`getdefaulttimeout`
        is used.  If *source_address* is set it must be a tuple of (host, port)
        for the socket to bind as a source address before making the connection.
        An host of '' or port 0 tells the OS to use the default.
        """

        host, port = address
        if host.startswith('['):
            host = host.strip('[]')
        err = None

        # Using the value from allowed_gai_family() in the context of getaddrinfo lets
        # us select whether to work with IPv4 DNS records, IPv6 records, or both.
        # The original create_connection function always returns all records.
        family = allowed_gai_family()

        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                _set_socket_options(sock, socket_options)

                if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
>               sock.connect(sa)
E               ConnectionRefusedError: [Errno 111] Connection refused

venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError

During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab7b8978>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7ac5c0>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7b8cc0>
is_new_proxy_conn = False

    def urlopen(self, method, url, body=None, headers=None, retries=None,
                redirect=True, assert_same_host=True, timeout=_Default,
                pool_timeout=None, release_conn=None, chunked=False,
                body_pos=None, **response_kw):
        """
        Get a connection from the pool and perform an HTTP request. This is the
        lowest level call for making a request, so you'll need to specify all
        the raw details.

        .. note::

           More commonly, it's appropriate to use a convenience method provided
           by :class:`.RequestMethods`, such as :meth:`request`.

        .. note::

           `release_conn` will only behave as expected if
           `preload_content=False` because we want to make
           `preload_content=False` the default behaviour someday soon without
           breaking backwards compatibility.

        :param method:
            HTTP request method (such as GET, POST, PUT, etc.)

        :param body:
            Data to send in the request body (useful for creating
            POST requests, see HTTPConnectionPool.post_url for
            more convenience).

        :param headers:
            Dictionary of custom headers to send, such as User-Agent,
            If-None-Match, etc. If None, pool headers are used. If provided,
            these headers completely replace any pool-specific headers.

        :param retries:
            Configure the number of retries to allow before raising a
            :class:`~urllib3.exceptions.MaxRetryError` exception.

            Pass ``None`` to retry until you receive a response. Pass a
            :class:`~urllib3.util.retry.Retry` object for fine-grained control
            over different types of retries.
            Pass an integer number to retry connection errors that many times,
            but no other types of errors. Pass zero to never retry.

            If ``False``, then retries are disabled and any exception is raised
            immediately. Also, instead of raising a MaxRetryError on redirects,
            the redirect response will be returned.

        :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.

        :param redirect:
            If True, automatically handle redirects (status codes 301, 302,
            303, 307, 308). Each redirect counts as a retry. Disabling retries
            will disable redirect, too.

        :param assert_same_host:
            If ``True``, will make sure that the host of the pool requests is
            consistent else will raise HostChangedError. When False, you can
            use the pool on an HTTP proxy and request foreign hosts.

        :param timeout:
            If specified, overrides the default timeout for this one
            request. It may be a float (in seconds) or an instance of
            :class:`urllib3.util.Timeout`.

        :param pool_timeout:
            If set and the pool is set to block=True, then this method will
            block for ``pool_timeout`` seconds and raise EmptyPoolError if no
            connection is available within the time period.

        :param release_conn:
            If False, then the urlopen call will not release the connection
            back into the pool once a response is received (but will release if
            you read the entire contents of the response such as when
            `preload_content=True`). This is useful if you're not preloading
            the response's content immediately. You will need to call
            ``r.release_conn()`` on the response ``r`` to return the connection
            back into the pool. If None, it takes the value of
            ``response_kw.get('preload_content', True)``.

        :param chunked:
            If True, urllib3 will send the body using chunked transfer
            encoding. Otherwise, urllib3 will send the body using the standard
            content-length form. Defaults to False.

        :param int body_pos:
            Position to seek to in file-like body in the event of a retry or
            redirect. Typically this won't need to be set because urllib3 will
            auto-populate the value when needed.

        :param \\**response_kw:
            Additional parameters are passed to
            :meth:`urllib3.response.HTTPResponse.from_httplib`
        """
        if headers is None:
            headers = self.headers

        if not isinstance(retries, Retry):
            retries = Retry.from_int(retries, redirect=redirect, default=self.retries)

        if release_conn is None:
            release_conn = response_kw.get('preload_content', True)

        # Check host
        if assert_same_host and not self.is_same_host(url):
            raise HostChangedError(self, url, retries)

        conn = None

        # Track whether `conn` needs to be released before
        # returning/raising/recursing. Update this variable if necessary, and
        # leave `release_conn` constant throughout the function. That way, if
        # the function recurses, the original value of `release_conn` will be
        # passed down into the recursive call, and its value will be respected.
        #
        # See issue #651 [1] for details.
        #
        # [1] <https://github.com/shazow/urllib3/issues/651>
        release_this_conn = release_conn

        # Merge the proxy headers. Only do this in HTTP. We have to copy the
        # headers dict so we can safely change it without those changes being
        # reflected in anyone else's copy.
        if self.scheme == 'http':
            headers = headers.copy()
            headers.update(self.proxy_headers)

        # Must keep the exception bound to a separate variable or else Python 3
        # complains about UnboundLocalError.
        err = None

        # Keep track of whether we cleanly exited the except block. This
        # ensures we do proper cleanup in finally.
        clean_exit = False

        # Rewind body position, if needed. Record current position
        # for future rewinds in the event of a redirect/retry.
        body_pos = set_file_position(body, body_pos)

        try:
            # Request a connection from the queue.
            timeout_obj = self._get_timeout(timeout)
            conn = self._get_conn(timeout=pool_timeout)

            conn.timeout = timeout_obj.connect_timeout

            is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
            if is_new_proxy_conn:
                self._prepare_proxy(conn)

            # Make the request on the httplib connection object.
            httplib_response = self._make_request(conn, method, url,
                                                  timeout=timeout_obj,
                                                  body=body, headers=headers,
>                                                 chunked=chunked)

venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab7b8978>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7b8cc0>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7b8a58>

    def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
                      **httplib_request_kw):
        """
        Perform a request on a given urllib connection object taken from our
        pool.

        :param conn:
            a connection from one of our connection pools

        :param timeout:
            Socket timeout in seconds for the request. This can be a
            float or integer, which will set the same timeout value for
            the socket connect and the socket read, or an instance of
            :class:`urllib3.util.Timeout`, which gives you more fine-grained
            control over your timeouts.
        """
        self.num_requests += 1

        timeout_obj = self._get_timeout(timeout)
        timeout_obj.start_connect()
        conn.timeout = timeout_obj.connect_timeout

        # Trigger any extra validation we need to do.
        try:
            self._validate_conn(conn)
        except (SocketTimeout, BaseSSLError) as e:
            # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
            self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
            raise

        # conn.request() calls httplib.*.request, not the method in
        # urllib3.request. It also calls makefile (recv) on the socket.
        if chunked:
            conn.request_chunked(method, url, **httplib_request_kw)
        else:
>           conn.request(method, url, **httplib_request_kw)

venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}

    def request(self, method, url, body=None, headers={}, *,
                encode_chunked=False):
        """Send a complete request to the server."""
>       self._send_request(method, url, body, headers, encode_chunked)

/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False

    def _send_request(self, method, url, body, headers, encode_chunked):
        # Honor explicitly requested Host: and Accept-Encoding: headers.
        header_names = frozenset(k.lower() for k in headers)
        skips = {}
        if 'host' in header_names:
            skips['skip_host'] = 1
        if 'accept-encoding' in header_names:
            skips['skip_accept_encoding'] = 1

        self.putrequest(method, url, **skips)

        # chunked encoding will happen if HTTP/1.1 is used and either
        # the caller passes encode_chunked=True or the following
        # conditions hold:
        # 1. content-length has not been explicitly set
        # 2. the body is a file or iterable, but not a str or bytes-like
        # 3. Transfer-Encoding has NOT been explicitly set by the caller

        if 'content-length' not in header_names:
            # only chunk body if not explicitly set for backwards
            # compatibility, assuming the client code is already handling the
            # chunking
            if 'transfer-encoding' not in header_names:
                # if content-length cannot be automatically determined, fall
                # back to chunked encoding
                encode_chunked = False
                content_length = self._get_content_length(body, method)
                if content_length is None:
                    if body is not None:
                        if self.debuglevel > 0:
                            print('Unable to determine size of %r' % body)
                        encode_chunked = True
                        self.putheader('Transfer-Encoding', 'chunked')
                else:
                    self.putheader('Content-Length', str(content_length))
        else:
            encode_chunked = False

        for hdr, value in headers.items():
            self.putheader(hdr, value)
        if isinstance(body, str):
            # RFC 2616 Section 3.7.1 says that text default has a
            # default charset of iso-8859-1.
            body = _encode(body, 'body')
>       self.endheaders(body, encode_chunked=encode_chunked)

/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>
message_body = None

    def endheaders(self, message_body=None, *, encode_chunked=False):
        """Indicate that the last header line has been sent to the server.

        This method sends the request to the server.  The optional message_body
        argument can be used to pass a message body associated with the
        request.
        """
        if self.__state == _CS_REQ_STARTED:
            self.__state = _CS_REQ_SENT
        else:
            raise CannotSendHeader()
>       self._send_output(message_body, encode_chunked=encode_chunked)

/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>
message_body = None, encode_chunked = False

    def _send_output(self, message_body=None, encode_chunked=False):
        """Send the currently buffered request and clear the buffer.

        Appends an extra \\r\\n to the buffer.
        A message_body may be specified, to be appended to the request.
        """
        self._buffer.extend((b"", b""))
        msg = b"\r\n".join(self._buffer)
        del self._buffer[:]

>       self.send(msg)

/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'

    def send(self, data):
        """Send `data' to the server.

        ``data`` can be a string object, a bytes object, an array object, a
        file-like object that supports a .read() method, or an iterable object.
        """
        if self.sock is None:
            if self.auto_open:
>               self.connect()

/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>

    def connect(self):
>       conn = self._new_conn()

venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>

    def _new_conn(self):
        """ Establish a socket connection and set nodelay settings on it.

        :return: New socket connection.
        """
        extra_kw = {}
        if self.source_address:
            extra_kw['source_address'] = self.source_address

        if self.socket_options:
            extra_kw['socket_options'] = self.socket_options

        try:
            conn = connection.create_connection(
                (self.host, self.port), self.timeout, **extra_kw)

        except SocketTimeout as e:
            raise ConnectTimeoutError(
                self, "Connection to %s timed out. (connect timeout=%s)" %
                (self.host, self.timeout))

        except SocketError as e:
            raise NewConnectionError(
>               self, "Failed to establish a new connection: %s" % e)
E           requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>: Failed to establish a new connection: [Errno 111] Connection refused

venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError

During handling of the above exception, another exception occurred:

self = <requests.adapters.HTTPAdapter object at 0xffffab7ac630>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7ac5c0>
verify = True, cert = None, proxies = OrderedDict()

    def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
        """Sends PreparedRequest object. Returns Response object.

        :param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
        :param stream: (optional) Whether to stream the request content.
        :param timeout: (optional) How long to wait for the server to send
            data before giving up, as a float, or a :ref:`(connect timeout,
            read timeout) <timeouts>` tuple.
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab7b8978> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7ac5c0>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7b8cc0> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will
# be replaced during the next _get_conn() call.
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab7b8978> | |
_stacktrace = <traceback object at 0xffffab6dc908> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/universal_template', json=template)
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# Yaml variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab7ac630>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab7ac5c0> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab7b8a90>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
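Both tracebacks above reduce to the same root cause: nothing is listening on localhost:9200 when the fixture first calls `assert_healthy()`, so every retry ends in `[Errno 111] Connection refused`. One way to separate "port not bound yet" from "cluster still forming" is to gate the fixture behind a TCP readiness poll. This is a minimal sketch (the `wait_for_port` helper is hypothetical, not part of the test suite shown here), using only the standard library:

```python
import socket
import time


def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Poll until a TCP connection to (host, port) succeeds or timeout expires.

    Returns True as soon as the port accepts a connection, False on timeout.
    Each connection attempt is itself bounded by `interval` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # connect() succeeding means something has bound the port;
            # the cluster may still need time to reach yellow/green after this.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused or timed out: container not ready yet.
            time.sleep(interval)
    return False
```

In this setup that would mean calling something like `wait_for_port('localhost', 9200)` before the first `GET /_cluster/health`, so the `@retry(**retry_settings)` decorator only has to absorb slow cluster formation rather than a port that is not bound at all.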
__________ ERROR at setup of test_info_level_logs_are_in_docker_logs[docker://elasticsearch1] __________ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default.
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default.
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4c7320> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3ec3c8>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab4c72b0> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4c7320> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab4c72b0>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab4c7eb8> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that a text body has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3ec240>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3ec3c8> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4c7320> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3ec3c8>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab4c72b0> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will | |
# be replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4c7320> | |
_stacktrace = <traceback object at 0xffffab49fe88> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
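Every frame in the chain above collapses to the same root cause: nothing is accepting TCP connections on localhost:9200 at the moment the fixture first queries `/_cluster/health` (the `Retry(total=0, ...)` object means the failure surfaces immediately instead of being retried). A plain TCP probe run before pytest separates "port never published" from "Elasticsearch still booting". The helper below is a hypothetical diagnostic sketch, not part of this test suite:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP connection to (host, port) succeeds.

    Returns True as soon as something accepts the connection,
    False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection performs the full connect() handshake,
            # so success means a listener is actually accepting.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused or timed out; wait and try again.
            time.sleep(interval)
    return False
```

If `wait_for_port('localhost', 9200)` never succeeds even though the container log shows the node initialising, the likely culprit is the `docker run` invocation itself: the command captured at the top (`docker run -it --rm 4986080400a1`) publishes no ports, so adding `-p 9200:9200` would be the first thing to try.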
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
            return self.put('/_template/universal_template', json=template)
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
            # YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3ec240>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3ec3c8> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4c7dd8>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
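<THE ERROR ABOVE BOILS DOWN TO: NOTHING IS ACCEPTING CONNECTIONS ON localhost:9200 WHEN THE FIXTURE'S FIRST `get('/_cluster/health')` FIRES, SO EVERY RETRY ENDS IN `[Errno 111] Connection refused`. A minimal sketch of a bounded wait-for-healthy loop follows — helper names here are hypothetical; the real fixtures use the `retrying` package's `@retry` decorator, as the traceback shows.>

```python
import time

def wait_until_healthy(check, attempts=30, delay=2.0):
    """Poll `check` until it returns True; give up after `attempts` tries.

    `check` is expected to swallow its own connection errors and
    return False while the service is still coming up.
    """
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# A check against the cluster health endpoint might look like this
# (hypothetical sketch, assuming the same http://localhost:9200 URL
# the fixture uses):
#
#   def es_up():
#       try:
#           r = requests.get('http://localhost:9200/_cluster/health', timeout=1)
#           return r.json().get('status') in ('yellow', 'green')
#       except requests.exceptions.ConnectionError:
#           return False
```

<RUNNING SOMETHING LIKE THIS BEFORE `assert_healthy()` WOULD SEPARATE "CONTAINER NEVER STARTED LISTENING" FROM "CLUSTER IS UP BUT UNHEALTHY", WHICH THE MaxRetryError ABOVE DOES NOT DISTINGUISH.>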
___________________ ERROR at setup of test_process_is_pid_1[docker://elasticsearch1] ___________________ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab3eb0b8> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb748>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb080> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab3eb0b8> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb080>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb208> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3eb4a8>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb748> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab3eb0b8> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb748>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb080> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
            # Discard the connection for these exceptions. It will be | |
            # replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab3eb0b8> | |
_stacktrace = <traceback object at 0xffffab61eb48> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
            # status_forcelist and the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/univeral_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
            # YAML variables in docker-compose (`user:`) need to be strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3eb4a8>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3eb748> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3eb2b0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
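The failure chain above bottoms out in `[Errno 111] Connection refused`: nothing is listening on `localhost:9200` when the fixture's `assert_healthy()` first polls `/_cluster/health`, and the `retrying`-wrapped calls exhaust their retries before the container is up. A minimal standalone sketch of the same wait-for-healthy pattern, using only the standard library (the name `wait_for_cluster` and its defaults are illustrative, not part of the test suite, which uses the `retrying` decorator instead):

```python
import json
import time
import urllib.error
import urllib.request


def wait_for_cluster(url="http://localhost:9200", timeout=60, interval=2):
    """Poll /_cluster/health until Elasticsearch accepts connections and
    reports yellow or green, or raise TimeoutError after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url + "/_cluster/health", timeout=5) as resp:
                health = json.load(resp)
            if health.get("status") in ("yellow", "green"):
                return health
        except (urllib.error.URLError, ConnectionError, OSError):
            pass  # node not up yet; [Errno 111] Connection refused lands here
        time.sleep(interval)
    raise TimeoutError("Elasticsearch not healthy within %ss" % timeout)
```

Calling something like this before constructing the fixture (or raising the retry budget) would distinguish "container still starting" from a genuinely broken node; with the ports unreachable, as in this log, it fails fast with a single clear TimeoutError instead of the nested traceback above.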
________ ERROR at setup of test_process_is_running_as_the_correct_user[docker://elasticsearch1] ________ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
    A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
    A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4347b8> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab372278>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab434908> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4347b8> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab434908>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab434cf8> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text bodies have a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3720f0>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab372278> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4347b8> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab372278>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab434908> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be | |
# replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab4347b8> | |
_stacktrace = <traceback object at 0xffffab6f5ac8> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
        return self.put('/_template/universal_template', json=template)                                                                                                                                                                                                             | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
        # YAML variables in docker-compose (`user:`) need to be strings                                                                                                                                                                                                             | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
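The fixture above wraps the Elasticsearch REST API with plain `requests` calls, and the cluster-health check in `assert_healthy` is where the run below fails. That check can be distilled into a small standalone helper; the name `is_healthy` is illustrative and not part of the fixture:

```python
# Minimal sketch of the assert_healthy logic as a pure function.
# `health` is the JSON body returned by GET /_cluster/health;
# `single_node` mirrors the --single-node pytest option.
def is_healthy(health, single_node=False):
    if single_node:
        # A one-node cluster cannot allocate replicas, so yellow is acceptable.
        return (health['number_of_nodes'] == 1
                and health['status'] in ('yellow', 'green'))
    # The compose file in this suite brings up a two-node cluster.
    return health['number_of_nodes'] == 2 and health['status'] == 'green'
```

In the traceback that follows, the fixture never even reaches this comparison: the underlying `GET /_cluster/health` request cannot connect at all.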
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3720f0>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab372278> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab4349b0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
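The `ConnectionError` above means nothing was listening on localhost:9200 when the fixture polled it, i.e. Elasticsearch inside the container had not finished starting (or had already exited) within the retry window. A common mitigation is to poll until the port answers before asserting health; a hedged sketch follows, with the `probe` callable injected so the wait loop itself is testable (this helper is not part of the gist's fixtures):

```python
import time

def wait_for(probe, attempts=30, delay=1.0):
    """Call `probe` until it returns truthy, retrying up to `attempts`
    times with `delay` seconds between tries. Connection errors from the
    probe are swallowed and treated as "not ready yet". Returns True on
    success, False if every attempt failed."""
    for _ in range(attempts):
        try:
            if probe():
                return True
        except OSError:  # covers ConnectionRefusedError ([Errno 111])
            pass
        time.sleep(delay)
    return False
```

In this fixture it might be called as `wait_for(lambda: requests.get('http://localhost:9200').ok)` before `assert_healthy()`, though the existing `@retry(**retry_settings)` decorator is already intended to serve the same purpose; the real question in this log is why the container's Elasticsearch never bound port 9200 before the retries ran out.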
________ ERROR at setup of test_process_is_running_the_correct_version[docker://elasticsearch1] ________ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab37d978> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37dba8>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37dd68> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab37d978> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37dd68>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37db70> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab37dc18>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37dba8> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab37d978> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37dba8>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37dd68> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab37d978> | |
_stacktrace = <traceback object at 0xffffab447308> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/univeral_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# Yaml variables in docker-compose (`user:`) need to be a strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab37dc18>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab37dba8> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab37de48>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
____ ERROR at setup of test_setting_node_name_with_an_environment_variable[docker://elasticsearch1] ____ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default.
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default.
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab42fcf8> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab337208>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab42fb70> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab42fcf8> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab42fb70>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab42fcc0> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E       requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
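The first exception chain bottoms out in a plain TCP refusal: nothing was listening on localhost:9200 when the fixture polled `/_cluster/health`. A minimal stdlib sketch (the `probe` helper is mine, not part of the test suite) reproduces the same `[Errno 111]` distinction between a closed and an open port:

```python
import socket

def probe(host, port):
    """Return True if something accepts TCP connections on (host, port).

    A refused connection (ECONNREFUSED, Errno 111 on Linux) is exactly
    what urllib3 wraps into NewConnectionError in the frame above.
    """
    with socket.socket() as s:
        s.settimeout(2)
        try:
            s.connect((host, port))
            return True
        except ConnectionRefusedError:
            return False
```

Running something like `probe('localhost', 9200)` before the fixture starts would separate "Elasticsearch still booting" from "container networking broken".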
self = <requests.adapters.HTTPAdapter object at 0xffffab39ca58>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab337208> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab42fcf8> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab337208>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab42fb70> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be | |
# be replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab42fcf8> | |
_stacktrace = <traceback object at 0xffffab39b608> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and a the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E       requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
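The MaxRetryError falls straight out of the budget arithmetic in `Retry.increment` above: the adapter passed `Retry(total=0, ...)`, so the first connection error decrements `total` to -1, the retry object is exhausted, and the original NewConnectionError gets wrapped. A stripped-down sketch of that bookkeeping (RuntimeError stands in for MaxRetryError; this is not the urllib3 implementation):

```python
def increment(total, error):
    """Sketch of the retry budget: one failure costs one unit of
    `total`; a budget that dips below zero re-raises, chained to the
    original error."""
    total -= 1
    if total < 0:
        raise RuntimeError("max retries exceeded") from error
    return total
```

With `total=0` the very first refused connection is fatal, which is why the fixture layers its own `@retry(**retry_settings)` decorator on top of requests.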
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/univeral_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# Yaml variables in docker-compose (`user:`) need to be a strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab39ca58>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab337208> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab42f160>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
__ ERROR at setup of test_setting_cluster_name_with_an_environment_variable[docker://elasticsearch1] ___ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab655908> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655d68>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655048> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab655908> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655048>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655898> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab6559e8>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655d68> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab655908> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655d68>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655048> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be | |
# replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab655908> | |
_stacktrace = <traceback object at 0xffffab37ab08> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/universal_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# YAML variables in docker-compose (`user:`) need to be strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab6559e8>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab655d68> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab6558d0>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
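Both tracebacks above bottom out in the same root cause: `[Errno 111] Connection refused`, i.e. nothing is listening on `localhost:9200` when the fixture first polls `/_cluster/health`, so `Retry(total=0)` exhausts immediately and `MaxRetryError` surfaces as `ConnectionError`. A minimal readiness poll that could gate the fixture is sketched below; `wait_for_elasticsearch` is a hypothetical helper (not part of this test suite), assuming the default `http://localhost:9200` endpoint.

```python
import time
import requests

def wait_for_elasticsearch(url="http://localhost:9200/_cluster/health",
                           timeout=120, interval=2):
    """Poll the cluster health endpoint until it answers, or give up.

    Connection-refused errors are expected while the container is still
    starting, so they are swallowed and the poll simply retries.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code == 200:
                return resp.json()  # cluster is reachable and responding
        except requests.ConnectionError:
            pass  # nothing listening yet (Errno 111); retry after a pause
        time.sleep(interval)
    raise TimeoutError("Elasticsearch did not become reachable at %s" % url)
```

Calling this once before constructing the `Elasticsearch()` fixture would turn the hard `ConnectionError` at setup into either a clean start or an explicit timeout with a clear message.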
____ ERROR at setup of test_setting_heapsize_with_an_environment_variable[docker://elasticsearch1] _____ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab27d978> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab2fd518>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab27dcf8> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab27d978> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab27dcf8>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab27dbe0> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text types have a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
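The root cause above is `[Errno 111] Connection refused`: nothing is accepting TCP connections on localhost:9200 from where the tests run, so `connection.create_connection()` fails before any HTTP is spoken. A minimal stdlib sketch of the same failure mode (the port used here is discovered dynamically and is purely illustrative, not the Elasticsearch port):

```python
import socket

def probe(host, port, timeout=1.0):
    """Return True if a TCP connect succeeds, False on ECONNREFUSED."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except ConnectionRefusedError:  # surfaces as [Errno 111] on Linux
        return False

# Find a port that is certainly closed: bind an ephemeral port to learn
# its number, then release it before probing.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()

print(probe("127.0.0.1", closed_port))  # False: the OS answers with RST
```

If the same probe against localhost:9200 also fails on the Docker host, the container's port mapping or the cluster's startup timing is the thing to investigate, not the test code.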
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab2fd208>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab2fd518> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab27d978> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab2fd518>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab27dcf8> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool's requests is | |
consistent, raising HostChangedError otherwise. When ``False``, you | |
can use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be | |
# replaced during the next _get_conn() call. | |
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab27d978> | |
_stacktrace = <traceback object at 0xffffab2dea48> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error, e.g. a 500 in | |
# status_forcelist while the given method is in the whitelist | |
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
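The locals above show `retries = Retry(total=0, ...)`: requests' HTTPAdapter default of `max_retries=0` means the very first connection error already exhausts the budget, so `increment()` wraps the `NewConnectionError` in a `MaxRetryError`. A toy, stdlib-only model of that bookkeeping (deliberately simplified; not urllib3's real class):

```python
class MaxRetryError(Exception):
    pass

class Retry:
    """Toy retry budget: total=0 means a single failure is one too many."""
    def __init__(self, total):
        self.total = total

    def increment(self, error):
        total = self.total - 1
        if total < 0:  # budget exhausted: re-wrap the original error
            raise MaxRetryError("Max retries exceeded (caused by %r)" % error)
        return Retry(total)  # hand back a decremented budget

budget = Retry(total=0)
try:
    budget.increment(ConnectionRefusedError(111, "Connection refused"))
except MaxRetryError as exc:
    print(exc)
```

This is why the log shows exactly one connect attempt per `requests.get()` call; the repeated attempts visible further down come from the fixture's outer `@retry(**retry_settings)` decorator, not from urllib3.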
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/universal_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# YAML variables in docker-compose (`user:`) need to be strings | |
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab2fd208>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab2fd518> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab27d400>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
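The first failure ends here: every retry of `GET /_cluster/health` against `localhost:9200` was refused with `[Errno 111]`, meaning nothing was listening on the port, so the container most likely never finished starting. A minimal standalone probe can confirm the same condition outside pytest. This is a sketch, not part of the test suite; `tcp_open` is a hypothetical helper name:

```python
import socket

def tcp_open(host="localhost", port=9200, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    "[Errno 111] Connection refused" (as in the trace above) maps to
    False here: the host is reachable but nothing is listening.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False while the container is supposedly up, inspect the `docker-compose logs` output (as the fixture's `get_docker_log` does) for Elasticsearch bootstrap errors before blaming the test harness.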
ERROR at setup of test_parameter_containing_underscore_with_an_environment_variable[docker://elasticsearch1] | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab645208> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6457b8>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab645128> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab645208> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab645128>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab2fb9e8> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab645f98>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6457b8> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab645208> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6457b8>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab645128> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
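The `urlopen` docstring above enumerates the accepted forms of the `retries` argument (a `Retry` object, an int, `False`, or `None`), which the body then normalizes via `Retry.from_int`. A minimal sketch of that normalization, under stated assumptions: `SimpleRetryConfig` is an illustrative stand-in, not urllib3's real `Retry` class.

```python
# Sketch of the `retries` normalization described in the docstring above.
# SimpleRetryConfig is a hypothetical stand-in for urllib3's Retry class.

class SimpleRetryConfig:
    def __init__(self, total):
        # `total` may be an int budget, or False to disable retries.
        self.total = total

    @classmethod
    def from_int(cls, retries, default=None):
        if retries is None:
            # None means "use the caller's default" (retry-forever semantics
            # in the real library; a fixed budget here for simplicity).
            return default if default is not None else cls(total=3)
        if isinstance(retries, cls):
            # Already a config object: pass it through unchanged.
            return retries
        # An int (or False) becomes a plain retry budget; False disables
        # retries entirely, as the docstring above describes.
        return cls(total=retries)
```

With this shape, `from_int(5)` yields a bounded budget, `from_int(False)` a disabled one, and an existing config object is returned as-is.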
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab645208> | |
_stacktrace = <traceback object at 0xffffab620788> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
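The `Retry.increment()` frame above decrements the remaining counters on each failure and raises `MaxRetryError` once `is_exhausted()` trips, which is why `Retry(total=0, ...)` fails on the very first connection error. A minimal stdlib sketch of that decrement-until-exhausted pattern; the names (`SimpleRetry`, `MaxRetriesExceeded`) are illustrative only, not urllib3's API.

```python
# Sketch of the decrement-until-exhausted pattern seen in Retry.increment()
# above. SimpleRetry / MaxRetriesExceeded are hypothetical names.

class MaxRetriesExceeded(Exception):
    pass

class SimpleRetry:
    def __init__(self, total, history=()):
        self.total = total        # None means "retry forever", False disables
        self.history = history    # tuple of recorded failures

    def is_exhausted(self):
        return self.total is not None and self.total is not False and self.total < 0

    def increment(self, error):
        if self.total is False:
            # Retries disabled: re-raise the original error immediately.
            raise error
        total = self.total if self.total is None else self.total - 1
        new_retry = SimpleRetry(total, self.history + (error,))
        if new_retry.is_exhausted():
            raise MaxRetriesExceeded(error)
        return new_retry
```

Starting from `SimpleRetry(total=0)`, a single failure exhausts the budget, mirroring the `Retry(total=0, connect=None, read=False, redirect=None)` object shown in the log.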
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/universal_template', json=template)
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab645f98>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab6457b8> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab645278>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
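The low-level send loop in the adapter frame above hand-rolls chunked transfer encoding: each chunk is framed as its length in hex, CRLF, the chunk bytes, CRLF, with a zero-length chunk terminating the body. A small self-contained encoder showing that framing (an illustration of the wire format, not requests' internal API):

```python
# Sketch of the chunked transfer-encoding framing used by the low_conn.send()
# loop in the adapter code above.

def encode_chunked(chunks):
    out = bytearray()
    for chunk in chunks:
        out += hex(len(chunk))[2:].encode('utf-8')  # length in hex, no '0x'
        out += b'\r\n'
        out += chunk
        out += b'\r\n'
    out += b'0\r\n\r\n'  # zero-length chunk terminates the body
    return bytes(out)
```

For example, the two chunks `b'hello'` and `b'world!'` are framed with lengths `5` and `6` followed by the terminator.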
ERROR at setup of test_envar_not_including_a_dot_is_not_presented_to_elasticsearch[docker://elasticsearch1] | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default.
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
A host of '' or port 0 tells the OS to use the default.
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
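`create_connection()` above tries every address that `getaddrinfo` resolves, remembers the last socket error, and re-raises it only after all candidates fail; that is how the single `ConnectionRefusedError` surfaces here. A sketch of that try-each-address pattern, with the connect step injected so it can be shown without real sockets (`connect_first_available` and `fake` connect functions are illustrative, not library code):

```python
# Sketch of the try-each-address pattern from create_connection() above:
# attempt every resolved address, remember the last error, and re-raise it
# only if no address succeeds. `connect` is injected for illustration.

def connect_first_available(addresses, connect):
    err = None
    for addr in addresses:
        try:
            return connect(addr)
        except OSError as e:  # socket.error is an alias of OSError in Python 3
            err = e
    if err is not None:
        raise err
    raise OSError("name resolution returned no addresses")
```

In the log's scenario every resolved address for `('localhost', 9200)` refuses the connection, so the remembered `ConnectionRefusedError` is the one re-raised.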
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab324278> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab324c18>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3243c8> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab324278> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3243c8>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab324400> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3240f0>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab324c18> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab324278> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab324c18>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3243c8> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
chunked=chunked) | |
# If we're going to release the connection in ``finally:``, then | |
# the response doesn't need to know about the connection. Otherwise | |
# it will also try to release it and we'll have a double-release | |
# mess. | |
response_conn = conn if not release_conn else None | |
# Pass method to Response for length checking | |
response_kw['request_method'] = method | |
# Import httplib's response into our own wrapper object | |
response = self.ResponseCls.from_httplib(httplib_response, | |
pool=self, | |
connection=response_conn, | |
retries=retries, | |
**response_kw) | |
# Everything went great! | |
clean_exit = True | |
except queue.Empty: | |
# Timed out by queue. | |
raise EmptyPoolError(self, "No pool connections are available.") | |
except (BaseSSLError, CertificateError) as e: | |
# Close the connection. If a connection is reused on which there | |
# was a Certificate error, the next request will certainly raise | |
# another Certificate error. | |
clean_exit = False | |
raise SSLError(e) | |
except SSLError: | |
# Treat SSLError separately from BaseSSLError to preserve | |
# traceback. | |
clean_exit = False | |
raise | |
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e: | |
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False | |
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy: | |
e = ProxyError('Cannot connect to proxy.', e) | |
elif isinstance(e, (SocketError, HTTPException)): | |
e = ProtocolError('Connection aborted.', e) | |
retries = retries.increment(method, url, error=e, _pool=self, | |
> _stacktrace=sys.exc_info()[2]) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health' | |
response = None | |
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8>: Failed to establish a new connection: [Errno 111] Connection refused',) | |
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab324278> | |
_stacktrace = <traceback object at 0xffffab33abc8> | |
def increment(self, method=None, url=None, response=None, error=None, | |
_pool=None, _stacktrace=None): | |
""" Return a new Retry object with incremented retry counters. | |
:param response: A response object, or None, if the server did not | |
return a response. | |
:type response: :class:`~urllib3.response.HTTPResponse` | |
:param Exception error: An error encountered during the request, or | |
None if the response was received successfully. | |
:return: A new ``Retry`` object. | |
""" | |
if self.total is False and error: | |
# Disabled, indicate to re-raise the error. | |
raise six.reraise(type(error), error, _stacktrace) | |
total = self.total | |
if total is not None: | |
total -= 1 | |
connect = self.connect | |
read = self.read | |
redirect = self.redirect | |
cause = 'unknown' | |
status = None | |
redirect_location = None | |
if error and self._is_connection_error(error): | |
# Connect retry? | |
if connect is False: | |
raise six.reraise(type(error), error, _stacktrace) | |
elif connect is not None: | |
connect -= 1 | |
elif error and self._is_read_error(error): | |
# Read retry? | |
if read is False or not self._is_method_retryable(method): | |
raise six.reraise(type(error), error, _stacktrace) | |
elif read is not None: | |
read -= 1 | |
elif response and response.get_redirect_location(): | |
# Redirect retry? | |
if redirect is not None: | |
redirect -= 1 | |
cause = 'too many redirects' | |
redirect_location = response.get_redirect_location() | |
status = response.status | |
else: | |
# Incrementing because of a server error like a 500 in | |
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR | |
if response and response.status: | |
cause = ResponseError.SPECIFIC_ERROR.format( | |
status_code=response.status) | |
status = response.status | |
history = self.history + (RequestHistory(method, url, error, status, redirect_location),) | |
new_retry = self.new( | |
total=total, | |
connect=connect, read=read, redirect=redirect, | |
history=history) | |
if new_retry.is_exhausted(): | |
> raise MaxRetryError(_pool, url, error or ResponseError(cause)) | |
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError | |
During handling of the above exception, another exception occurred: | |
host = <testinfra.host.Host object at 0xffffaba58898> | |
@fixture() | |
def elasticsearch(host): | |
class Elasticsearch(): | |
bootstrap_pwd = "pleasechangeme" | |
def __init__(self): | |
self.url = 'http://localhost:9200' | |
if config.getoption('--image-flavor') == 'platinum': | |
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd) | |
else: | |
self.auth = '' | |
self.assert_healthy() | |
self.process = host.process.get(comm='java') | |
# Start each test with a clean slate. | |
assert self.load_index_template().status_code == codes.ok | |
assert self.delete().status_code == codes.ok | |
def reset(self): | |
"""Reset Elasticsearch by destroying and recreating the containers.""" | |
pytest_unconfigure(config) | |
pytest_configure(config) | |
@retry(**retry_settings) | |
def get(self, location='/', **kwargs): | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def put(self, location='/', **kwargs): | |
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def post(self, location='/%s/1' % default_index, **kwargs): | |
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs) | |
@retry(**retry_settings) | |
def delete(self, location='/_all', **kwargs): | |
return requests.delete(self.url + location, auth=self.auth, **kwargs) | |
def get_root_page(self): | |
return self.get('/').json() | |
def get_cluster_health(self): | |
return self.get('/_cluster/health').json() | |
def get_node_count(self): | |
return self.get_cluster_health()['number_of_nodes'] | |
def get_cluster_status(self): | |
return self.get_cluster_health()['status'] | |
def get_node_os_stats(self): | |
"""Return an array of node OS statistics""" | |
return self.get('/_nodes/stats/os').json()['nodes'].values() | |
def get_node_plugins(self): | |
"""Return an array of node plugins""" | |
nodes = self.get('/_nodes/plugins').json()['nodes'].values() | |
return [node['plugins'] for node in nodes] | |
def get_node_thread_pool_bulk_queue_size(self): | |
"""Return an array of thread_pool bulk queue size settings for nodes""" | |
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values() | |
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes] | |
def get_node_jvm_stats(self): | |
"""Return an array of node JVM statistics""" | |
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values() | |
return [node['jvm'] for node in nodes] | |
def get_node_mlockall_state(self): | |
"""Return an array of the mlockall value""" | |
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values() | |
return [node['process']['mlockall'] for node in nodes] | |
@retry(**retry_settings) | |
def set_password(self, username, password): | |
return self.put('/_xpack/security/user/%s/_password' % username, | |
json={"password": password}) | |
def query_all(self, index=default_index): | |
return self.get('/%s/_search' % index) | |
def create_index(self, index=default_index): | |
return self.put('/' + index) | |
def delete_index(self, index=default_index): | |
return self.delete('/' + index) | |
def load_index_template(self): | |
template = { | |
'template': '*', | |
'settings': { | |
'number_of_shards': 2, | |
'number_of_replicas': 0, | |
} | |
} | |
return self.put('/_template/univeral_template', json=template) | |
def load_test_data(self): | |
self.create_index() | |
return self.post( | |
data=open('tests/testdata.json').read(), | |
params={"refresh": "wait_for"} | |
) | |
@retry(**retry_settings) | |
def assert_healthy(self): | |
if config.getoption('--single-node'): | |
assert self.get_node_count() == 1 | |
assert self.get_cluster_status() in ['yellow', 'green'] | |
else: | |
assert self.get_node_count() == 2 | |
assert self.get_cluster_status() == 'green' | |
def uninstall_plugin(self, plugin_name): | |
# This will run on only one host, but this is ok for the moment | |
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images | |
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin", | |
"-s", | |
"remove", | |
"{}".format(plugin_name)])) | |
# Reset elasticsearch to its original state | |
self.reset() | |
return uninstall_output | |
def assert_bind_mount_data_dir_is_writable(self, | |
datadir1="tests/datadir1", | |
datadir2="tests/datadir2", | |
process_uid='', | |
datadir_uid=1000, | |
datadir_gid=0): | |
cwd = os.getcwd() | |
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1), | |
os.path.join(cwd, datadir2)) | |
config.option.mount_datavolume1 = datavolume1_path | |
config.option.mount_datavolume2 = datavolume2_path | |
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid) | |
# Ensure defined data dirs are empty before tests | |
proc1 = delete_dir(datavolume1_path) | |
proc2 = delete_dir(datavolume2_path) | |
assert proc1.returncode == 0 | |
assert proc2.returncode == 0 | |
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid) | |
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid) | |
# Force Elasticsearch to re-run with new parameters | |
self.reset() | |
self.assert_healthy() | |
# Revert Elasticsearch back to its datadir defaults for the next tests | |
config.option.mount_datavolume1 = None | |
config.option.mount_datavolume2 = None | |
config.option.process_uid = '' | |
self.reset() | |
# Finally clean up the temp dirs used for bind-mounts | |
delete_dir(datavolume1_path) | |
delete_dir(datavolume2_path) | |
def es_cmdline(self): | |
return host.file("/proc/1/cmdline").content_string | |
def run_command_on_host(self, command): | |
return host.run(command) | |
def get_hostname(self): | |
return host.run('hostname').stdout.strip() | |
def get_docker_log(self): | |
proc = run(['docker-compose', | |
'-f', | |
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')), | |
'logs', | |
self.get_hostname()], | |
stdout=PIPE) | |
return proc.stdout.decode() | |
def assert_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string in log | |
except AssertionError: | |
print(log) | |
raise | |
def assert_not_in_docker_log(self, string): | |
log = self.get_docker_log() | |
try: | |
assert string not in log | |
except AssertionError: | |
print(log) | |
raise | |
> return Elasticsearch() | |
tests/fixtures.py:222: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
tests/fixtures.py:33: in __init__ | |
self.assert_healthy() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:132: in assert_healthy | |
assert self.get_node_count() == 1 | |
tests/fixtures.py:69: in get_node_count | |
return self.get_cluster_health()['number_of_nodes'] | |
tests/fixtures.py:66: in get_cluster_health | |
return self.get('/_cluster/health').json() | |
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f | |
return Retrying(*dargs, **dkw).call(f, *args, **kw) | |
venv/lib/python3.6/site-packages/retrying.py:212: in call | |
raise attempt.get() | |
venv/lib/python3.6/site-packages/retrying.py:247: in get | |
six.reraise(self.value[0], self.value[1], self.value[2]) | |
venv/lib/python3.6/site-packages/six.py:693: in reraise | |
raise value | |
venv/lib/python3.6/site-packages/retrying.py:200: in call | |
attempt = Attempt(fn(*args, **kwargs), attempt_number, False) | |
tests/fixtures.py:48: in get | |
return requests.get(self.url + location, auth=self.auth, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:70: in get | |
return request('get', url, params=params, **kwargs) | |
venv/lib/python3.6/site-packages/requests/api.py:56: in request | |
return session.request(method=method, url=url, **kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request | |
resp = self.send(prep, **send_kwargs) | |
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send | |
r = adapter.send(request, **kwargs) | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.adapters.HTTPAdapter object at 0xffffab3240f0>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab324c18> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
timeout=timeout | |
) | |
# Send the request. | |
else: | |
if hasattr(conn, 'proxy_pool'): | |
conn = conn.proxy_pool | |
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT) | |
try: | |
low_conn.putrequest(request.method, | |
url, | |
skip_accept_encoding=True) | |
for header, value in request.headers.items(): | |
low_conn.putheader(header, value) | |
low_conn.endheaders() | |
for i in request.body: | |
low_conn.send(hex(len(i))[2:].encode('utf-8')) | |
low_conn.send(b'\r\n') | |
low_conn.send(i) | |
low_conn.send(b'\r\n') | |
low_conn.send(b'0\r\n\r\n') | |
# Receive the response from the server | |
try: | |
# For Python 2.7+ versions, use buffering of HTTP | |
# responses | |
r = low_conn.getresponse(buffering=True) | |
except TypeError: | |
# For compatibility with Python 2.6 versions and back | |
r = low_conn.getresponse() | |
resp = HTTPResponse.from_httplib( | |
r, | |
pool=conn, | |
connection=low_conn, | |
preload_content=False, | |
decode_content=False | |
) | |
except: | |
# If we hit any problems here, clean up the connection. | |
# Then, reraise so that we can handle the actual exception. | |
low_conn.close() | |
raise | |
except (ProtocolError, socket.error) as err: | |
raise ConnectionError(err, request=request) | |
except MaxRetryError as e: | |
if isinstance(e.reason, ConnectTimeoutError): | |
# TODO: Remove this in 3.0.0: see #2811 | |
if not isinstance(e.reason, NewConnectionError): | |
raise ConnectTimeout(e, request=request) | |
if isinstance(e.reason, ResponseError): | |
raise RetryError(e, request=request) | |
if isinstance(e.reason, _ProxyError): | |
raise ProxyError(e, request=request) | |
> raise ConnectionError(e, request=request) | |
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3242e8>: Failed to establish a new connection: [Errno 111] Connection refused',)) | |
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError | |
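<WHAT THE TRACEBACK ABOVE BOILS DOWN TO: every GET against `http://localhost:9200/_cluster/health` got `[Errno 111] Connection refused`, i.e. the retrying-wrapped fixture exhausted its attempts before the Elasticsearch node was listening. A minimal, self-contained sketch of that kind of readiness poll is below — `wait_for` and the usage are illustrative, not the project's actual fixture API.>

```python
import time

def wait_for(check, attempts=30, delay=1.0):
    """Poll check() until it returns truthy or attempts run out.

    Transient errors (e.g. ConnectionRefusedError while the node is
    still booting) are swallowed and retried; the last one is re-raised
    only if no attempt ever succeeded.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            if check():
                return True
        except Exception as exc:
            last_exc = exc
        time.sleep(delay)
    if last_exc is not None:
        raise last_exc
    return False

# Hypothetical usage against a booting node (assumes `requests` is installed):
#   import requests
#   wait_for(lambda: requests.get('http://localhost:9200/_cluster/health',
#                                 timeout=2).status_code == 200)
```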
_ ERROR at setup of test_capitalized_envvar_is_not_presented_to_elasticsearch[docker://elasticsearch1] _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
> (self.host, self.port), self.timeout, **extra_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
sock.connect(sa) | |
return sock | |
except socket.error as e: | |
err = e | |
if sock is not None: | |
sock.close() | |
sock = None | |
if err is not None: | |
> raise err | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)] | |
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, | |
source_address=None, socket_options=None): | |
"""Connect to *address* and return the socket object. | |
Convenience function. Connect to *address* (a 2-tuple ``(host, | |
port)``) and return the socket object. Passing the optional | |
*timeout* parameter will set the timeout on the socket instance | |
before attempting to connect. If no *timeout* is supplied, the | |
global default timeout setting returned by :func:`getdefaulttimeout` | |
is used. If *source_address* is set it must be a tuple of (host, port) | |
for the socket to bind as a source address before making the connection. | |
An host of '' or port 0 tells the OS to use the default. | |
""" | |
host, port = address | |
if host.startswith('['): | |
host = host.strip('[]') | |
err = None | |
# Using the value from allowed_gai_family() in the context of getaddrinfo lets | |
# us select whether to work with IPv4 DNS records, IPv6 records, or both. | |
# The original create_connection function always returns all records. | |
family = allowed_gai_family() | |
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): | |
af, socktype, proto, canonname, sa = res | |
sock = None | |
try: | |
sock = socket.socket(af, socktype, proto) | |
# If provided, set socket level options before connecting. | |
_set_socket_options(sock, socket_options) | |
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT: | |
sock.settimeout(timeout) | |
if source_address: | |
sock.bind(source_address) | |
> sock.connect(sa) | |
E ConnectionRefusedError: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError | |
During handling of the above exception, another exception occurred: | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab346fd0> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab346828>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3466a0> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection | |
back into the pool once a response is received (but will release if | |
you read the entire contents of the response such as when | |
`preload_content=True`). This is useful if you're not preloading | |
the response's content immediately. You will need to call | |
``r.release_conn()`` on the response ``r`` to return the connection | |
back into the pool. If None, it takes the value of | |
``response_kw.get('preload_content', True)``. | |
:param chunked: | |
If True, urllib3 will send the body using chunked transfer | |
encoding. Otherwise, urllib3 will send the body using the standard | |
content-length form. Defaults to False. | |
:param int body_pos: | |
Position to seek to in file-like body in the event of a retry or | |
redirect. Typically this won't need to be set because urllib3 will | |
auto-populate the value when needed. | |
:param \\**response_kw: | |
Additional parameters are passed to | |
:meth:`urllib3.response.HTTPResponse.from_httplib` | |
""" | |
if headers is None: | |
headers = self.headers | |
if not isinstance(retries, Retry): | |
retries = Retry.from_int(retries, redirect=redirect, default=self.retries) | |
if release_conn is None: | |
release_conn = response_kw.get('preload_content', True) | |
# Check host | |
if assert_same_host and not self.is_same_host(url): | |
raise HostChangedError(self, url, retries) | |
conn = None | |
# Track whether `conn` needs to be released before | |
# returning/raising/recursing. Update this variable if necessary, and | |
# leave `release_conn` constant throughout the function. That way, if | |
# the function recurses, the original value of `release_conn` will be | |
# passed down into the recursive call, and its value will be respected. | |
# | |
# See issue #651 [1] for details. | |
# | |
# [1] <https://github.com/shazow/urllib3/issues/651> | |
release_this_conn = release_conn | |
# Merge the proxy headers. Only do this in HTTP. We have to copy the | |
# headers dict so we can safely change it without those changes being | |
# reflected in anyone else's copy. | |
if self.scheme == 'http': | |
headers = headers.copy() | |
headers.update(self.proxy_headers) | |
# Must keep the exception bound to a separate variable or else Python 3 | |
# complains about UnboundLocalError. | |
err = None | |
# Keep track of whether we cleanly exited the except block. This | |
# ensures we do proper cleanup in finally. | |
clean_exit = False | |
# Rewind body position, if needed. Record current position | |
# for future rewinds in the event of a redirect/retry. | |
body_pos = set_file_position(body, body_pos) | |
try: | |
# Request a connection from the queue. | |
timeout_obj = self._get_timeout(timeout) | |
conn = self._get_conn(timeout=pool_timeout) | |
conn.timeout = timeout_obj.connect_timeout | |
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None) | |
if is_new_proxy_conn: | |
self._prepare_proxy(conn) | |
# Make the request on the httplib connection object. | |
httplib_response = self._make_request(conn, method, url, | |
timeout=timeout_obj, | |
body=body, headers=headers, | |
> chunked=chunked) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab346fd0> | |
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8>, method = 'GET' | |
url = '/_cluster/health' | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3466a0>, chunked = False | |
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}} | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab346160> | |
def _make_request(self, conn, method, url, timeout=_Default, chunked=False, | |
**httplib_request_kw): | |
""" | |
Perform a request on a given urllib connection object taken from our | |
pool. | |
:param conn: | |
a connection from one of our connection pools | |
:param timeout: | |
Socket timeout in seconds for the request. This can be a | |
float or integer, which will set the same timeout value for | |
the socket connect and the socket read, or an instance of | |
:class:`urllib3.util.Timeout`, which gives you more fine-grained | |
control over your timeouts. | |
""" | |
self.num_requests += 1 | |
timeout_obj = self._get_timeout(timeout) | |
timeout_obj.start_connect() | |
conn.timeout = timeout_obj.connect_timeout | |
# Trigger any extra validation we need to do. | |
try: | |
self._validate_conn(conn) | |
except (SocketTimeout, BaseSSLError) as e: | |
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. | |
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) | |
raise | |
# conn.request() calls httplib.*.request, not the method in | |
# urllib3.request. It also calls makefile (recv) on the socket. | |
if chunked: | |
conn.request_chunked(method, url, **httplib_request_kw) | |
else: | |
> conn.request(method, url, **httplib_request_kw) | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
def request(self, method, url, body=None, headers={}, *, | |
encode_chunked=False): | |
"""Send a complete request to the server.""" | |
> self._send_request(method, url, body, headers, encode_chunked) | |
/usr/lib/python3.6/http/client.py:1239: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8>, method = 'GET' | |
url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
encode_chunked = False | |
def _send_request(self, method, url, body, headers, encode_chunked): | |
# Honor explicitly requested Host: and Accept-Encoding: headers. | |
header_names = frozenset(k.lower() for k in headers) | |
skips = {} | |
if 'host' in header_names: | |
skips['skip_host'] = 1 | |
if 'accept-encoding' in header_names: | |
skips['skip_accept_encoding'] = 1 | |
self.putrequest(method, url, **skips) | |
# chunked encoding will happen if HTTP/1.1 is used and either | |
# the caller passes encode_chunked=True or the following | |
# conditions hold: | |
# 1. content-length has not been explicitly set | |
# 2. the body is a file or iterable, but not a str or bytes-like | |
# 3. Transfer-Encoding has NOT been explicitly set by the caller | |
if 'content-length' not in header_names: | |
# only chunk body if not explicitly set for backwards | |
# compatibility, assuming the client code is already handling the | |
# chunking | |
if 'transfer-encoding' not in header_names: | |
# if content-length cannot be automatically determined, fall | |
# back to chunked encoding | |
encode_chunked = False | |
content_length = self._get_content_length(body, method) | |
if content_length is None: | |
if body is not None: | |
if self.debuglevel > 0: | |
print('Unable to determine size of %r' % body) | |
encode_chunked = True | |
self.putheader('Transfer-Encoding', 'chunked') | |
else: | |
self.putheader('Content-Length', str(content_length)) | |
else: | |
encode_chunked = False | |
for hdr, value in headers.items(): | |
self.putheader(hdr, value) | |
if isinstance(body, str): | |
# RFC 2616 Section 3.7.1 says that text default has a | |
# default charset of iso-8859-1. | |
body = _encode(body, 'body') | |
> self.endheaders(body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1285: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8> | |
message_body = None | |
def endheaders(self, message_body=None, *, encode_chunked=False): | |
"""Indicate that the last header line has been sent to the server. | |
This method sends the request to the server. The optional message_body | |
argument can be used to pass a message body associated with the | |
request. | |
""" | |
if self.__state == _CS_REQ_STARTED: | |
self.__state = _CS_REQ_SENT | |
else: | |
raise CannotSendHeader() | |
> self._send_output(message_body, encode_chunked=encode_chunked) | |
/usr/lib/python3.6/http/client.py:1234: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8> | |
message_body = None, encode_chunked = False | |
def _send_output(self, message_body=None, encode_chunked=False): | |
"""Send the currently buffered request and clear the buffer. | |
Appends an extra \\r\\n to the buffer. | |
A message_body may be specified, to be appended to the request. | |
""" | |
self._buffer.extend((b"", b"")) | |
msg = b"\r\n".join(self._buffer) | |
del self._buffer[:] | |
> self.send(msg) | |
/usr/lib/python3.6/http/client.py:1026: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8> | |
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' | |
def send(self, data): | |
"""Send `data' to the server. | |
``data`` can be a string object, a bytes object, an array object, a | |
file-like object that supports a .read() method, or an iterable object. | |
""" | |
if self.sock is None: | |
if self.auto_open: | |
> self.connect() | |
/usr/lib/python3.6/http/client.py:964: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8> | |
def connect(self): | |
> conn = self._new_conn() | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8> | |
def _new_conn(self): | |
""" Establish a socket connection and set nodelay settings on it. | |
:return: New socket connection. | |
""" | |
extra_kw = {} | |
if self.source_address: | |
extra_kw['source_address'] = self.source_address | |
if self.socket_options: | |
extra_kw['socket_options'] = self.socket_options | |
try: | |
conn = connection.create_connection( | |
(self.host, self.port), self.timeout, **extra_kw) | |
except SocketTimeout as e: | |
raise ConnectTimeoutError( | |
self, "Connection to %s timed out. (connect timeout=%s)" % | |
(self.host, self.timeout)) | |
except SocketError as e: | |
raise NewConnectionError( | |
> self, "Failed to establish a new connection: %s" % e) | |
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffab3464a8>: Failed to establish a new connection: [Errno 111] Connection refused | |
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError | |
During handling of the above exception, another exception occurred: | |
self = <requests.adapters.HTTPAdapter object at 0xffffab2c6828>, request = <PreparedRequest [GET]> | |
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab346828> | |
verify = True, cert = None, proxies = OrderedDict() | |
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | |
"""Sends PreparedRequest object. Returns Response object. | |
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | |
:param stream: (optional) Whether to stream the request content. | |
:param timeout: (optional) How long to wait for the server to send | |
data before giving up, as a float, or a :ref:`(connect timeout, | |
read timeout) <timeouts>` tuple. | |
:type timeout: float or tuple | |
:param verify: (optional) Whether to verify SSL certificates. | |
:param cert: (optional) Any user-provided SSL certificate to be trusted. | |
:param proxies: (optional) The proxies dictionary to apply to the request. | |
:rtype: requests.Response | |
""" | |
conn = self.get_connection(request.url, proxies) | |
self.cert_verify(conn, request.url, verify, cert) | |
url = self.request_url(request, proxies) | |
self.add_headers(request) | |
chunked = not (request.body is None or 'Content-Length' in request.headers) | |
if isinstance(timeout, tuple): | |
try: | |
connect, read = timeout | |
timeout = TimeoutSauce(connect=connect, read=read) | |
except ValueError as e: | |
# this may raise a string formatting error. | |
err = ("Invalid timeout {0}. Pass a (connect, read) " | |
"timeout tuple, or a single float to set " | |
"both timeouts to the same value".format(timeout)) | |
raise ValueError(err) | |
else: | |
timeout = TimeoutSauce(connect=timeout, read=timeout) | |
try: | |
if not chunked: | |
resp = conn.urlopen( | |
method=request.method, | |
url=url, | |
body=request.body, | |
headers=request.headers, | |
redirect=False, | |
assert_same_host=False, | |
preload_content=False, | |
decode_content=False, | |
retries=self.max_retries, | |
> timeout=timeout | |
) | |
venv/lib/python3.6/site-packages/requests/adapters.py:423: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffab346fd0> | |
method = 'GET', url = '/_cluster/health', body = None | |
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} | |
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False | |
assert_same_host = False | |
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab346828>, pool_timeout = None | |
release_conn = False, chunked = False, body_pos = None | |
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True | |
err = None, clean_exit = False | |
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffab3466a0> | |
is_new_proxy_conn = False | |
def urlopen(self, method, url, body=None, headers=None, retries=None, | |
redirect=True, assert_same_host=True, timeout=_Default, | |
pool_timeout=None, release_conn=None, chunked=False, | |
body_pos=None, **response_kw): | |
""" | |
Get a connection from the pool and perform an HTTP request. This is the | |
lowest level call for making a request, so you'll need to specify all | |
the raw details. | |
.. note:: | |
More commonly, it's appropriate to use a convenience method provided | |
by :class:`.RequestMethods`, such as :meth:`request`. | |
.. note:: | |
`release_conn` will only behave as expected if | |
`preload_content=False` because we want to make | |
`preload_content=False` the default behaviour someday soon without | |
breaking backwards compatibility. | |
:param method: | |
HTTP request method (such as GET, POST, PUT, etc.) | |
:param body: | |
Data to send in the request body (useful for creating | |
POST requests, see HTTPConnectionPool.post_url for | |
more convenience). | |
:param headers: | |
Dictionary of custom headers to send, such as User-Agent, | |
If-None-Match, etc. If None, pool headers are used. If provided, | |
these headers completely replace any pool-specific headers. | |
:param retries: | |
Configure the number of retries to allow before raising a | |
:class:`~urllib3.exceptions.MaxRetryError` exception. | |
Pass ``None`` to retry until you receive a response. Pass a | |
:class:`~urllib3.util.retry.Retry` object for fine-grained control | |
over different types of retries. | |
Pass an integer number to retry connection errors that many times, | |
but no other types of errors. Pass zero to never retry. | |
If ``False``, then retries are disabled and any exception is raised | |
immediately. Also, instead of raising a MaxRetryError on redirects, | |
the redirect response will be returned. | |
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. | |
:param redirect: | |
If True, automatically handle redirects (status codes 301, 302, | |
303, 307, 308). Each redirect counts as a retry. Disabling retries | |
will disable redirect, too. | |
:param assert_same_host: | |
If ``True``, will make sure that the host of the pool requests is | |
consistent else will raise HostChangedError. When False, you can | |
use the pool on an HTTP proxy and request foreign hosts. | |
:param timeout: | |
If specified, overrides the default timeout for this one | |
request. It may be a float (in seconds) or an instance of | |
:class:`urllib3.util.Timeout`. | |
:param pool_timeout: | |
If set and the pool is set to block=True, then this method will | |
block for ``pool_timeout`` seconds and raise EmptyPoolError if no | |
connection is available within the time period. | |
:param release_conn: | |
If False, then the urlopen call will not release the connection |