[Note: this file has been truncated.]
~/projects/docker-hub/elasticsearch-docker-new [6.2]$ ELASTIC_VERSION=6.2.4 make
if [[ -f "docker-compose-oss.yml" ]]; then docker-compose -f docker-compose-oss.yml down && docker-compose -f docker-compose-oss.yml rm -f -v; fi; rm -f docker-compose-oss.yml; rm -f tests/docker-compose-oss.yml; rm -f build/elasticsearch/Dockerfile-oss; if [[ -f "docker-compose-basic.yml" ]]; then docker-compose -f docker-compose-basic.yml down && docker-compose -f docker-compose-basic.yml rm -f -v; fi; rm -f docker-compose-basic.yml; rm -f tests/docker-compose-basic.yml; rm -f build/elasticsearch/Dockerfile-basic; if [[ -f "docker-compose-platinum.yml" ]]; then docker-compose -f docker-compose-platinum.yml down && docker-compose -f docker-compose-platinum.yml rm -f -v; fi; rm -f docker-compose-platinum.yml; rm -f tests/docker-compose-platinum.yml; rm -f build/elasticsearch/Dockerfile-platinum;
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string.
Removing network elasticsearchdockernew_esnet
WARNING: Network elasticsearchdockernew_esnet not found.
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string.
No stopped containers
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string.
Removing network elasticsearchdockernew_esnet
WARNING: Network elasticsearchdockernew_esnet not found.
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string.
No stopped containers
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string.
Removing network elasticsearchdockernew_esnet
WARNING: Network elasticsearchdockernew_esnet not found.
WARNING: The PROCESS_UID variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME1 variable is not set. Defaulting to a blank string.
WARNING: The DATA_VOLUME2 variable is not set. Defaulting to a blank string.
No stopped containers
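(The repeated WARNING lines above come from docker-compose: the compose files reference PROCESS_UID, DATA_VOLUME1 and DATA_VOLUME2, which are only exported when containers are started, so during this clean phase they fall back to empty strings. That is harmless for `down`/`rm`. A sketch of how one could silence them by exporting placeholders first; the default values shown are illustrative assumptions, not values taken from this repo:

    # Export placeholders before the clean step so docker-compose stops warning.
    export PROCESS_UID="${PROCESS_UID:-1000}"            # assumed UID
    export DATA_VOLUME1="${DATA_VOLUME1:-/tmp/esdata1}"  # assumed path
    export DATA_VOLUME2="${DATA_VOLUME2:-/tmp/esdata2}"  # assumed path
    docker-compose -f docker-compose-oss.yml down
)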
jinja2 -D elastic_version='6.2.4' -D staging_build_num='' -D artifacts_dir='' -D image_flavor='oss' templates/Dockerfile.j2 > build/elasticsearch/Dockerfile-oss; jinja2 -D elastic_version='6.2.4' -D staging_build_num='' -D artifacts_dir='' -D image_flavor='basic' templates/Dockerfile.j2 > build/elasticsearch/Dockerfile-basic; jinja2 -D elastic_version='6.2.4' -D staging_build_num='' -D artifacts_dir='' -D image_flavor='platinum' templates/Dockerfile.j2 > build/elasticsearch/Dockerfile-platinum;
pyfiglet -f puffy -w 160 "Building: oss"; docker build -t docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4 -f build/elasticsearch/Dockerfile-oss build/elasticsearch; if [[ oss == basic ]]; then docker tag docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4 docker.elastic.co/elasticsearch/elasticsearch:6.2.4; fi; pyfiglet -f puffy -w 160 "Building: basic"; docker build -t docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.4 -f build/elasticsearch/Dockerfile-basic build/elasticsearch; if [[ basic == basic ]]; then docker tag docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.4 docker.elastic.co/elasticsearch/elasticsearch:6.2.4; fi; pyfiglet -f puffy -w 160 "Building: platinum"; docker build -t docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4 -f build/elasticsearch/Dockerfile-platinum build/elasticsearch; if [[ platinum == basic ]]; then docker tag docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4 docker.elastic.co/elasticsearch/elasticsearch:6.2.4; fi;
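(Unrolled, the build recipe echoed above is equivalent to the loop below -- a readable sketch of what the Makefile one-liner expands to, not the Makefile source itself. Note that only the basic flavor is additionally tagged as the default elasticsearch image:

    for flavor in oss basic platinum; do
      pyfiglet -f puffy -w 160 "Building: $flavor"
      docker build -t "docker.elastic.co/elasticsearch/elasticsearch-$flavor:6.2.4" \
        -f "build/elasticsearch/Dockerfile-$flavor" build/elasticsearch
      # Only 'basic' also gets the unsuffixed tag:
      if [[ "$flavor" == basic ]]; then
        docker tag "docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.4" \
          "docker.elastic.co/elasticsearch/elasticsearch:6.2.4"
      fi
    done
)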
[pyfiglet banner: "Building: oss"]
free(): invalid pointer
SIGABRT: abort
PC=0xffff823cb4d8 m=0 sigcode=18446744073709551610
signal arrived during cgo execution
goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x49fb28, 0x442009dcd8, 0x29)
/usr/lib/go-1.8/src/runtime/cgocall.go:131 +0x9c fp=0x442009dca0 sp=0x442009dc60
github.com/docker/docker-credential-helpers/secretservice._Cfunc_free(0x3c631270)
github.com/docker/docker-credential-helpers/secretservice/_obj/_cgo_gotypes.go:111 +0x38 fp=0x442009dcd0 sp=0x442009dca0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List.func5(0x3c631270)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:96 +0x44 fp=0x442009dd00 sp=0x442009dcd0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List(0x0, 0x554f80, 0x44200163b0)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:97 +0x1c4 fp=0x442009dda0 sp=0x442009dd00
github.com/docker/docker-credential-helpers/secretservice.(*Secretservice).List(0x57b3b8, 0x40ed60, 0x442000c001, 0x40e80c)
<autogenerated>:4 +0x48 fp=0x442009dde0 sp=0x442009dda0
github.com/docker/docker-credential-helpers/credentials.List(0x555ac0, 0x57b3b8, 0x555000, 0x442000e018, 0x0, 0x0)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:145 +0x28 fp=0x442009de60 sp=0x442009dde0
github.com/docker/docker-credential-helpers/credentials.HandleCommand(0x555ac0, 0x57b3b8, 0xfffffa97bf51, 0x4, 0x554fc0, 0x442000e010, 0x555000, 0x442000e018, 0x4420016330, 0x49f99c)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:60 +0x12c fp=0x442009ded0 sp=0x442009de60
github.com/docker/docker-credential-helpers/credentials.Serve(0x555ac0, 0x57b3b8)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:41 +0x1a0 fp=0x442009df50 sp=0x442009ded0
main.main()
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/secretservice/cmd/main_linux.go:9 +0x40 fp=0x442009df80 sp=0x442009df50
runtime.main()
/usr/lib/go-1.8/src/runtime/proc.go:185 +0x1f4 fp=0x442009dfd0 sp=0x442009df80
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_arm64.s:981 +0x4 fp=0x442009dfd0 sp=0x442009dfd0
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_arm64.s:981 +0x4
r0 0x0
r1 0xfffffa97a6b8
r2 0x0
r3 0x8
r4 0x0
r5 0xfffffa97a6b8
r6 0xffffffffffffffff
r7 0xffffffffffffffff
r8 0x87
r9 0xffffffffffffffff
r10 0xffffffffffffffff
r11 0xffffffffffffffff
r12 0xffffffffffffffff
r13 0xffffffffffffffff
r14 0x8
r15 0x0
r16 0x54e168
r17 0xffff82410838
r18 0xffff824eca70
r19 0xffff824eb000
r20 0x6
r21 0xffff8270f000
r22 0xfffffa97a8f0
r23 0x2
r24 0xffff824eb000
r25 0x1
r26 0xffff824c2778
r27 0x2
r28 0xfffffa97a920
r29 0xfffffa97a690
lr 0xffff823cb464
sp 0xfffffa97a690
pc 0xffff823cb4d8
fault 0x0
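(The free(): invalid pointer abort and Go stack trace above are not from the build itself: they come from docker-credential-secretservice, the credential helper the docker CLI invokes before talking to a registry. The trace points at Secretservice.List in docker-credential-helpers 0.5.0, a known crash on aarch64; the build carries on because the helper is only consulted for stored registry credentials. One common workaround, assuming ~/.docker/config.json sets "credsStore": "secretservice", is to remove that key so the CLI falls back to file-based credentials. A minimal sketch, requiring jq:

    # Back up the docker client config, then drop the crashing credential helper.
    cp ~/.docker/config.json ~/.docker/config.json.bak
    jq 'del(.credsStore)' ~/.docker/config.json.bak > ~/.docker/config.json
)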
Sending build context to Docker daemon 27.65kB
Step 1/27 : FROM centos:7 AS prep_es_files
---> 5f65840122d0
Step 2/27 : ENV PATH /usr/share/elasticsearch/bin:$PATH
---> Using cache
---> 5654db35b6ae
Step 3/27 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk
---> Using cache
---> 362271201b29
Step 4/27 : RUN yum install -y java-1.8.0-openjdk-headless unzip which
---> Using cache
---> 8d0012f3d9a0
Step 5/27 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -d /usr/share/elasticsearch elasticsearch
---> Using cache
---> 3294f301a6f4
Step 6/27 : WORKDIR /usr/share/elasticsearch
---> Using cache
---> 15bf8f40105d
Step 7/27 : USER 1000
---> Using cache
---> bab9e7d09e01
Step 8/27 : RUN curl -fsSL https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz | tar zx --strip-components=1
---> Using cache
---> 9b957496565b
Step 9/27 : RUN set -ex && for esdirs in config data logs; do mkdir -p "$esdirs"; done
---> Using cache
---> 7552ffa6d834
Step 10/27 : RUN for PLUGIN in ingest-user-agent ingest-geoip; do \elasticsearch-plugin install --batch "$PLUGIN"; done
---> Using cache
---> ae2b7b9a7988
Step 11/27 : COPY --chown=1000:0 elasticsearch.yml log4j2.properties config/
---> Using cache
---> a9f24c717cd1
Step 12/27 : USER 0
---> Using cache
---> 0a43315006e7
Step 13/27 : RUN chown -R elasticsearch:0 . && chmod -R g=u /usr/share/elasticsearch
---> Using cache
---> 984916dbec72
Step 14/27 : FROM centos:7
---> 5f65840122d0
Step 15/27 : LABEL maintainer "Elastic Docker Team <docker@elastic.co>"
---> Using cache
---> de3f8cd76e43
Step 16/27 : ENV ELASTIC_CONTAINER true
---> Using cache
---> 80b9d3d06e52
Step 17/27 : ENV PATH /usr/share/elasticsearch/bin:$PATH
---> Using cache
---> 448ee7a3cf01
Step 18/27 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk
---> Using cache
---> 63b6fba973ab
Step 19/27 : RUN yum update -y && yum install -y nc java-1.8.0-openjdk-headless unzip wget which && yum clean all
---> Using cache
---> f82b703d0457
Step 20/27 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -G 0 -d /usr/share/elasticsearch elasticsearch && chmod 0775 /usr/share/elasticsearch && chgrp 0 /usr/share/elasticsearch
---> Using cache
---> 231b1d56e5cf
Step 21/27 : WORKDIR /usr/share/elasticsearch
---> Using cache
---> 9512b2334011
Step 22/27 : COPY --from=prep_es_files --chown=1000:0 /usr/share/elasticsearch /usr/share/elasticsearch
---> Using cache
---> d6771d518461
Step 23/27 : COPY --chown=1000:0 bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
---> Using cache
---> ef57455097b8
Step 24/27 : RUN chgrp 0 /usr/local/bin/docker-entrypoint.sh && chmod g=u /etc/passwd && chmod 0775 /usr/local/bin/docker-entrypoint.sh
---> Using cache
---> b466cfdeae9a
Step 25/27 : EXPOSE 9200 9300
---> Using cache
---> 4d1a05410fa5
Step 26/27 : ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
---> Using cache
---> f953e41a00e9
Step 27/27 : CMD ["eswrapper"]
---> Using cache
---> ef972f99b635
Successfully built ef972f99b635
Successfully tagged docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
[pyfiglet banner: "Building: basic"]
free(): invalid pointer
SIGABRT: abort
PC=0xffff9c2d44d8 m=0 sigcode=18446744073709551610
signal arrived during cgo execution
goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x49fb28, 0x4420095cd8, 0x29)
/usr/lib/go-1.8/src/runtime/cgocall.go:131 +0x9c fp=0x4420095ca0 sp=0x4420095c60
github.com/docker/docker-credential-helpers/secretservice._Cfunc_free(0xff4d270)
github.com/docker/docker-credential-helpers/secretservice/_obj/_cgo_gotypes.go:111 +0x38 fp=0x4420095cd0 sp=0x4420095ca0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List.func5(0xff4d270)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:96 +0x44 fp=0x4420095d00 sp=0x4420095cd0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List(0x0, 0x554f80, 0x44200163b0)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:97 +0x1c4 fp=0x4420095da0 sp=0x4420095d00
github.com/docker/docker-credential-helpers/secretservice.(*Secretservice).List(0x57b3b8, 0x40ed60, 0x442000c001, 0x40e80c)
<autogenerated>:4 +0x48 fp=0x4420095de0 sp=0x4420095da0
github.com/docker/docker-credential-helpers/credentials.List(0x555ac0, 0x57b3b8, 0x555000, 0x442000e018, 0x0, 0x0)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:145 +0x28 fp=0x4420095e60 sp=0x4420095de0
github.com/docker/docker-credential-helpers/credentials.HandleCommand(0x555ac0, 0x57b3b8, 0xffffe4149f51, 0x4, 0x554fc0, 0x442000e010, 0x555000, 0x442000e018, 0x4420016330, 0x49f99c)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:60 +0x12c fp=0x4420095ed0 sp=0x4420095e60
github.com/docker/docker-credential-helpers/credentials.Serve(0x555ac0, 0x57b3b8)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:41 +0x1a0 fp=0x4420095f50 sp=0x4420095ed0
main.main()
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/secretservice/cmd/main_linux.go:9 +0x40 fp=0x4420095f80 sp=0x4420095f50
runtime.main()
/usr/lib/go-1.8/src/runtime/proc.go:185 +0x1f4 fp=0x4420095fd0 sp=0x4420095f80
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_arm64.s:981 +0x4 fp=0x4420095fd0 sp=0x4420095fd0
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_arm64.s:981 +0x4
r0 0x0
r1 0xffffe4148db8
r2 0x0
r3 0x8
r4 0x0
r5 0xffffe4148db8
r6 0xffffffffffffffff
r7 0xffffffffffffffff
r8 0x87
r9 0xffffffffffffffff
r10 0xffffffffffffffff
r11 0xffffffffffffffff
r12 0xffffffffffffffff
r13 0xffffffffffffffff
r14 0x8
r15 0x0
r16 0x54e168
r17 0xffff9c319838
r18 0xffff9c3f5a70
r19 0xffff9c3f4000
r20 0x6
r21 0xffff9c618000
r22 0xffffe4148ff0
r23 0x2
r24 0xffff9c3f4000
r25 0x1
r26 0xffff9c3cb778
r27 0x2
r28 0xffffe4149020
r29 0xffffe4148d90
lr 0xffff9c2d4464
sp 0xffffe4148d90
pc 0xffff9c2d44d8
fault 0x0
Sending build context to Docker daemon 27.65kB
Step 1/28 : FROM centos:7 AS prep_es_files
---> 5f65840122d0
Step 2/28 : ENV PATH /usr/share/elasticsearch/bin:$PATH
---> Using cache
---> 5654db35b6ae
Step 3/28 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk
---> Using cache
---> 362271201b29
Step 4/28 : RUN yum install -y java-1.8.0-openjdk-headless unzip which
---> Using cache
---> 8d0012f3d9a0
Step 5/28 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -d /usr/share/elasticsearch elasticsearch
---> Using cache
---> 3294f301a6f4
Step 6/28 : WORKDIR /usr/share/elasticsearch
---> Using cache
---> 15bf8f40105d
Step 7/28 : USER 1000
---> Using cache
---> bab9e7d09e01
Step 8/28 : RUN curl -fsSL https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz | tar zx --strip-components=1
---> Using cache
---> 9b957496565b
Step 9/28 : RUN set -ex && for esdirs in config data logs; do mkdir -p "$esdirs"; done
---> Using cache
---> 7552ffa6d834
Step 10/28 : RUN for PLUGIN in x-pack ingest-user-agent ingest-geoip; do elasticsearch-plugin install --batch "$PLUGIN"; done
---> Using cache
---> 3fc05fe80eac
Step 11/28 : COPY --chown=1000:0 elasticsearch.yml log4j2.properties config/
---> Using cache
---> 086b7ae37230
Step 12/28 : RUN echo 'xpack.license.self_generated.type: basic' >>config/elasticsearch.yml
---> Using cache
---> 1adc4c16937e
Step 13/28 : USER 0
---> Using cache
---> 6b1cfa50e759
Step 14/28 : RUN chown -R elasticsearch:0 . && chmod -R g=u /usr/share/elasticsearch
---> Using cache
---> c02209edb197
Step 15/28 : FROM centos:7
---> 5f65840122d0
Step 16/28 : LABEL maintainer "Elastic Docker Team <docker@elastic.co>"
---> Using cache
---> de3f8cd76e43
Step 17/28 : ENV ELASTIC_CONTAINER true
---> Using cache
---> 80b9d3d06e52
Step 18/28 : ENV PATH /usr/share/elasticsearch/bin:$PATH
---> Using cache
---> 448ee7a3cf01
Step 19/28 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk
---> Using cache
---> 63b6fba973ab
Step 20/28 : RUN yum update -y && yum install -y nc java-1.8.0-openjdk-headless unzip wget which && yum clean all
---> Using cache
---> f82b703d0457
Step 21/28 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -G 0 -d /usr/share/elasticsearch elasticsearch && chmod 0775 /usr/share/elasticsearch && chgrp 0 /usr/share/elasticsearch
---> Using cache
---> 231b1d56e5cf
Step 22/28 : WORKDIR /usr/share/elasticsearch
---> Using cache
---> 9512b2334011
Step 23/28 : COPY --from=prep_es_files --chown=1000:0 /usr/share/elasticsearch /usr/share/elasticsearch
---> Using cache
---> bd52fae51c2c
Step 24/28 : COPY --chown=1000:0 bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
---> Using cache
---> da5fd625a36a
Step 25/28 : RUN chgrp 0 /usr/local/bin/docker-entrypoint.sh && chmod g=u /etc/passwd && chmod 0775 /usr/local/bin/docker-entrypoint.sh
---> Using cache
---> 00c2d89c37bf
Step 26/28 : EXPOSE 9200 9300
---> Using cache
---> de5f9bf6baca
Step 27/28 : ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
---> Using cache
---> c132fddb1d7a
Step 28/28 : CMD ["eswrapper"]
---> Using cache
---> b25b9a5b54e7
Successfully built b25b9a5b54e7
Successfully tagged docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.4
[pyfiglet banner: "Building: platinum"]
free(): invalid pointer
SIGABRT: abort
PC=0xffffbec674d8 m=0 sigcode=18446744073709551610
signal arrived during cgo execution
goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x49fb28, 0x4420095cd8, 0x29)
/usr/lib/go-1.8/src/runtime/cgocall.go:131 +0x9c fp=0x4420095ca0 sp=0x4420095c60
github.com/docker/docker-credential-helpers/secretservice._Cfunc_free(0x20984270)
github.com/docker/docker-credential-helpers/secretservice/_obj/_cgo_gotypes.go:111 +0x38 fp=0x4420095cd0 sp=0x4420095ca0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List.func5(0x20984270)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:96 +0x44 fp=0x4420095d00 sp=0x4420095cd0
github.com/docker/docker-credential-helpers/secretservice.Secretservice.List(0x0, 0x554f80, 0x44200163b0)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/secretservice/secretservice_linux.go:97 +0x1c4 fp=0x4420095da0 sp=0x4420095d00
github.com/docker/docker-credential-helpers/secretservice.(*Secretservice).List(0x57b3b8, 0x40ed60, 0x442000c001, 0x40e80c)
<autogenerated>:4 +0x48 fp=0x4420095de0 sp=0x4420095da0
github.com/docker/docker-credential-helpers/credentials.List(0x555ac0, 0x57b3b8, 0x555000, 0x442000e018, 0x0, 0x0)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:145 +0x28 fp=0x4420095e60 sp=0x4420095de0
github.com/docker/docker-credential-helpers/credentials.HandleCommand(0x555ac0, 0x57b3b8, 0xffffc62baf51, 0x4, 0x554fc0, 0x442000e010, 0x555000, 0x442000e018, 0x4420016330, 0x49f99c)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:60 +0x12c fp=0x4420095ed0 sp=0x4420095e60
github.com/docker/docker-credential-helpers/credentials.Serve(0x555ac0, 0x57b3b8)
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/obj-aarch64-linux-gnu/src/github.com/docker/docker-credential-helpers/credentials/credentials.go:41 +0x1a0 fp=0x4420095f50 sp=0x4420095ed0
main.main()
/build/golang-github-docker-docker-credential-helpers-iveBZG/golang-github-docker-docker-credential-helpers-0.5.0/secretservice/cmd/main_linux.go:9 +0x40 fp=0x4420095f80 sp=0x4420095f50
runtime.main()
/usr/lib/go-1.8/src/runtime/proc.go:185 +0x1f4 fp=0x4420095fd0 sp=0x4420095f80
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_arm64.s:981 +0x4 fp=0x4420095fd0 sp=0x4420095fd0
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/lib/go-1.8/src/runtime/asm_arm64.s:981 +0x4
r0 0x0
r1 0xffffc62b9c88
r2 0x0
r3 0x8
r4 0x0
r5 0xffffc62b9c88
r6 0xffffffffffffffff
r7 0xffffffffffffffff
r8 0x87
r9 0xffffffffffffffff
r10 0xffffffffffffffff
r11 0xffffffffffffffff
r12 0xffffffffffffffff
r13 0xffffffffffffffff
r14 0x8
r15 0x0
r16 0x54e168
r17 0xffffbecac838
r18 0xffffbed88a70
r19 0xffffbed87000
r20 0x6
r21 0xffffbefab000
r22 0xffffc62b9ec0
r23 0x2
r24 0xffffbed87000
r25 0x1
r26 0xffffbed5e778
r27 0x2
r28 0xffffc62b9ef0
r29 0xffffc62b9c60
lr 0xffffbec67464
sp 0xffffc62b9c60
pc 0xffffbec674d8
fault 0x0
Sending build context to Docker daemon 27.65kB
Step 1/29 : FROM centos:7 AS prep_es_files
---> 5f65840122d0
Step 2/29 : ENV PATH /usr/share/elasticsearch/bin:$PATH
---> Using cache
---> 5654db35b6ae
Step 3/29 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk
---> Using cache
---> 362271201b29
Step 4/29 : RUN yum install -y java-1.8.0-openjdk-headless unzip which
---> Using cache
---> 8d0012f3d9a0
Step 5/29 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -d /usr/share/elasticsearch elasticsearch
---> Using cache
---> 3294f301a6f4
Step 6/29 : WORKDIR /usr/share/elasticsearch
---> Using cache
---> 15bf8f40105d
Step 7/29 : USER 1000
---> Using cache
---> bab9e7d09e01
Step 8/29 : RUN curl -fsSL https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz | tar zx --strip-components=1
---> Using cache
---> 9b957496565b
Step 9/29 : RUN set -ex && for esdirs in config data logs; do mkdir -p "$esdirs"; done
---> Using cache
---> 7552ffa6d834
Step 10/29 : RUN for PLUGIN in x-pack ingest-user-agent ingest-geoip; do elasticsearch-plugin install --batch "$PLUGIN"; done
---> Using cache
---> 3fc05fe80eac
Step 11/29 : COPY --chown=1000:0 elasticsearch.yml log4j2.properties config/
---> Using cache
---> 086b7ae37230
Step 12/29 : COPY --chown=1000:0 x-pack/log4j2.properties config/x-pack/
---> Using cache
---> c829aa89671d
Step 13/29 : RUN echo 'xpack.license.self_generated.type: trial' >>config/elasticsearch.yml
---> Using cache
---> 05287eea4c86
Step 14/29 : USER 0
---> Using cache
---> 560f62c7bd8a
Step 15/29 : RUN chown -R elasticsearch:0 . && chmod -R g=u /usr/share/elasticsearch
---> Using cache
---> 3f8c36c8c34f
Step 16/29 : FROM centos:7
---> 5f65840122d0
Step 17/29 : LABEL maintainer "Elastic Docker Team <docker@elastic.co>"
---> Using cache
---> de3f8cd76e43
Step 18/29 : ENV ELASTIC_CONTAINER true
---> Using cache
---> 80b9d3d06e52
Step 19/29 : ENV PATH /usr/share/elasticsearch/bin:$PATH
---> Using cache
---> 448ee7a3cf01
Step 20/29 : ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk
---> Using cache
---> 63b6fba973ab
Step 21/29 : RUN yum update -y && yum install -y nc java-1.8.0-openjdk-headless unzip wget which && yum clean all
---> Using cache
---> f82b703d0457
Step 22/29 : RUN groupadd -g 1000 elasticsearch && adduser -u 1000 -g 1000 -G 0 -d /usr/share/elasticsearch elasticsearch && chmod 0775 /usr/share/elasticsearch && chgrp 0 /usr/share/elasticsearch
---> Using cache
---> 231b1d56e5cf
Step 23/29 : WORKDIR /usr/share/elasticsearch
---> Using cache
---> 9512b2334011
Step 24/29 : COPY --from=prep_es_files --chown=1000:0 /usr/share/elasticsearch /usr/share/elasticsearch
---> Using cache
---> ab9ba962097f
Step 25/29 : COPY --chown=1000:0 bin/docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
---> Using cache
---> e1269a609155
Step 26/29 : RUN chgrp 0 /usr/local/bin/docker-entrypoint.sh && chmod g=u /etc/passwd && chmod 0775 /usr/local/bin/docker-entrypoint.sh
---> Using cache
---> 151172ec2d58
Step 27/29 : EXPOSE 9200 9300
---> Using cache
---> ec74505a1a23
Step 28/29 : ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
---> Using cache
---> b30f54ce5edd
Step 29/29 : CMD ["eswrapper"]
---> Using cache
---> 3975789ab24c
Successfully built 3975789ab24c
Successfully tagged docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
flake8 tests
jinja2 -D elastic_registry='docker.elastic.co' -D version_tag='6.2.4' -D image_flavor='oss' templates/docker-compose.yml.j2 > docker-compose-oss.yml; jinja2 -D image_flavor='oss' templates/docker-compose-fragment.yml.j2 > tests/docker-compose-oss.yml; jinja2 -D elastic_registry='docker.elastic.co' -D version_tag='6.2.4' -D image_flavor='basic' templates/docker-compose.yml.j2 > docker-compose-basic.yml; jinja2 -D image_flavor='basic' templates/docker-compose-fragment.yml.j2 > tests/docker-compose-basic.yml; jinja2 -D elastic_registry='docker.elastic.co' -D version_tag='6.2.4' -D image_flavor='platinum' templates/docker-compose.yml.j2 > docker-compose-platinum.yml; jinja2 -D image_flavor='platinum' templates/docker-compose-fragment.yml.j2 > tests/docker-compose-platinum.yml;
docker run --rm -v "/home/linaro/projects/docker-hub/elasticsearch-docker-new:/mnt" bash rm -rf /mnt/tests/datadir1 /mnt/tests/datadir2
pyfiglet -w 160 -f puffy "test: oss single"; ./bin/pytest --image-flavor=oss --single-node tests; pyfiglet -w 160 -f puffy "test: oss multi"; ./bin/pytest --image-flavor=oss tests; pyfiglet -w 160 -f puffy "test: basic single"; ./bin/pytest --image-flavor=basic --single-node tests; pyfiglet -w 160 -f puffy "test: basic multi"; ./bin/pytest --image-flavor=basic tests; pyfiglet -w 160 -f puffy "test: platinum single"; ./bin/pytest --image-flavor=platinum --single-node tests; pyfiglet -w 160 -f puffy "test: platinum multi"; ./bin/pytest --image-flavor=platinum tests;
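(Unrolled, the test recipe above runs the pytest suite once per flavor in single-node mode and once in multi-node mode; a readable sketch of the one-liner's expansion:

    for flavor in oss basic platinum; do
      pyfiglet -w 160 -f puffy "test: $flavor single"
      ./bin/pytest --image-flavor="$flavor" --single-node tests
      pyfiglet -w 160 -f puffy "test: $flavor multi"
      ./bin/pytest --image-flavor="$flavor" tests
    done
)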
[pyfiglet banner: "test: oss single"]
Creating network "elasticsearchdockernew_esnet" with driver "bridge"
Creating volume "elasticsearchdockernew_esdata1" with local driver
Creating volume "elasticsearchdockernew_esdata2" with local driver
Creating elasticsearch1
========================================= test session starts ==========================================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0 -- /home/linaro/projects/docker-hub/elasticsearch-docker-new/venv/bin/python3.6
cachedir: .pytest_cache
rootdir: /home/linaro/projects/docker-hub/elasticsearch-docker-new, inifile:
plugins: testinfra-1.6.0
collected 32 items
tests/test_base_os.py::test_base_os[docker://elasticsearch1] PASSED [ 3%]
tests/test_base_os.py::test_java_home_env_var[docker://elasticsearch1] PASSED [ 6%]
tests/test_base_os.py::test_no_core_files_exist_in_root[docker://elasticsearch1] PASSED [ 9%]
tests/test_base_os.py::test_all_elasticsearch_files_are_gid_0[docker://elasticsearch1] PASSED [ 12%]
tests/test_datadirs.py::test_es_can_write_to_bind_mounted_datadir[docker://elasticsearch1] ERROR [ 15%]
tests/test_datadirs.py::test_es_can_write_to_bind_mounted_datadir_with_different_uid[docker://elasticsearch1] ERROR [ 18%]
tests/test_datadirs.py::test_es_can_run_with_random_uid_and_write_to_bind_mounted_datadir[docker://elasticsearch1] ERROR [ 21%]
tests/test_es_plugins.py::test_uninstall_xpack_plugin[docker://elasticsearch1] SKIPPED [ 25%]
tests/test_es_plugins.py::test_IngestUserAgentPlugin_is_installed[docker://elasticsearch1] ERROR [ 28%]
tests/test_es_plugins.py::test_IngestGeoIpPlugin_is_installed[docker://elasticsearch1] ERROR [ 31%]
tests/test_logging.py::test_elasticsearch_logs_are_in_docker_logs[docker://elasticsearch1] ERROR [ 34%]
tests/test_logging.py::test_security_audit_logs_are_in_docker_logs[docker://elasticsearch1] SKIPPED [ 37%]
tests/test_logging.py::test_info_level_logs_are_in_docker_logs[docker://elasticsearch1] ERROR [ 40%]
tests/test_process.py::test_process_is_pid_1[docker://elasticsearch1] ERROR [ 43%]
tests/test_process.py::test_process_is_running_as_the_correct_user[docker://elasticsearch1] ERROR [ 46%]
tests/test_process.py::test_process_is_running_the_correct_version[docker://elasticsearch1] ERROR [ 50%]
tests/test_settings.py::test_setting_node_name_with_an_environment_variable[docker://elasticsearch1] ERROR [ 53%]
tests/test_settings.py::test_setting_cluster_name_with_an_environment_variable[docker://elasticsearch1] ERROR [ 56%]
tests/test_settings.py::test_setting_heapsize_with_an_environment_variable[docker://elasticsearch1] ERROR [ 59%]
tests/test_settings.py::test_parameter_containing_underscore_with_an_environment_variable[docker://elasticsearch1] ERROR [ 62%]
tests/test_settings.py::test_envar_not_including_a_dot_is_not_presented_to_elasticsearch[docker://elasticsearch1] ERROR [ 65%]
tests/test_settings.py::test_capitalized_envvar_is_not_presented_to_elasticsearch[docker://elasticsearch1] ERROR [ 68%]
tests/test_settings.py::test_setting_boostrap_memory_lock_with_an_environment_variable[docker://elasticsearch1] ERROR [ 71%]
tests/test_user.py::test_group_properties[docker://elasticsearch1] ERROR [ 75%]
tests/test_user.py::test_user_properties[docker://elasticsearch1] ERROR [ 78%]
tests/test_xpack_basic_index_crud.py::test_bootstrap_password_change[docker://elasticsearch1] SKIPPED [ 81%]
tests/test_xpack_basic_index_crud.py::test_create_index[docker://elasticsearch1] ERROR [ 84%]
tests/test_xpack_basic_index_crud.py::test_search[docker://elasticsearch1] ERROR [ 87%]
tests/test_xpack_basic_index_crud.py::test_delete_index[docker://elasticsearch1] ERROR [ 90%]
tests/test_xpack_basic_index_crud.py::test_search_on_nonexistent_index_fails[docker://elasticsearch1] ERROR [ 93%]
tests/test_xpack_basic_index_crud.py::test_cluster_is_healthy_after_indexing_data[docker://elasticsearch1] ERROR [ 96%]
tests/test_xpack_basic_index_crud.py::test_cgroup_os_stats_are_available[docker://elasticsearch1] ERROR [100%]
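(All of the ERRORs below appear to share one root cause, visible in the first traceback: the test fixtures try to GET http://localhost:9200/_cluster/health and the connection is refused, i.e. the elasticsearch1 container never became reachable, so setup fails before each test body runs. A quick manual check of the same endpoint, using only standard curl/docker commands:

    # A healthy node returns a JSON cluster-health document; here it would
    # fail with "Connection refused", matching the traceback below.
    curl -s http://localhost:9200/_cluster/health
    docker logs elasticsearch1    # inspect why the node is not up
)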
================================================ ERRORS ================================================
_________ ERROR at setup of test_es_can_write_to_bind_mounted_datadir[docker://elasticsearch1] _________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
An host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
An host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb73307b8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb72c92b0>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb7330eb8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb73307b8>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb7330eb8>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb73308d0>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb72c9080>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb72c92b0>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb73307b8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb72c92b0>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb7330eb8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
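For context, the retries semantics described in urlopen's docstring above can be reproduced outside the test suite. A minimal sketch using the standalone urllib3 package (the vendored copy in this log lives under requests.packages.urllib3 but exposes the same names); host and port are taken from this log, and the server is assumed to be down:

    from urllib3 import HTTPConnectionPool
    from urllib3.util.retry import Retry

    # Retry(total=0) matches the configuration shown in this traceback: a
    # refused connection is not retried and surfaces as MaxRetryError.
    pool = HTTPConnectionPool('localhost', 9200)
    retry = Retry(total=0, connect=None, read=False, redirect=None)
    pool.urlopen('GET', '/_cluster/health', retries=retry)  # raises MaxRetryError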
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb73307b8>
_stacktrace = <traceback object at 0xffffb72e5808>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
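As the increment() source above shows, Retry never mutates in place: each call returns a new Retry with decremented counters and raises once the new object is exhausted. A small sketch, again assuming the standalone urllib3 package (None stands in for the pool argument):

    from urllib3.exceptions import MaxRetryError, NewConnectionError
    from urllib3.util.retry import Retry

    err = NewConnectionError(None, 'Failed to establish a new connection')
    retry = Retry(total=1)
    retry = retry.increment('GET', '/_cluster/health', error=err)  # total: 1 -> 0
    try:
        retry.increment('GET', '/_cluster/health', error=err)      # total: 0 -> -1
    except MaxRetryError as exc:
        print(exc.reason)  # the NewConnectionError, as in the log above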
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
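For orientation, the fixture above is what individual tests request by argument name. A hypothetical test consuming it (the helper names come from the class definition shown in the traceback):

    def test_cluster_is_healthy(elasticsearch):
        # get_cluster_status() wraps GET /_cluster/health, as defined above
        assert elasticsearch.get_cluster_status() in ['yellow', 'green']
        assert elasticsearch.get_node_count() >= 1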
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
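The retrying.py frames above come from the @retry decorator wrapped around the fixture's HTTP helpers. A sketch of that pattern, with hypothetical values standing in for the retry_settings dict (its real contents are not shown in this log):

    import requests
    from retrying import retry

    @retry(stop_max_attempt_number=3, wait_fixed=1000)  # hypothetical settings
    def get_health():
        return requests.get('http://localhost:9200/_cluster/health')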
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb72c9080>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb72c92b0>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb7330780>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
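HTTPAdapter.send's docstring above distinguishes a single float timeout from a (connect, read) tuple; both forms pass straight through requests. A minimal illustration against the URL this suite polls (assuming the service is actually up):

    import requests

    # Single float: same budget for connect and read.
    requests.get('http://localhost:9200/_cluster/health', timeout=5.0)
    # Tuple: 3.05s to connect, 27s to read, mirroring TimeoutSauce(connect=..., read=...).
    requests.get('http://localhost:9200/_cluster/health', timeout=(3.05, 27))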
ERROR at setup of test_es_can_write_to_bind_mounted_datadir_with_different_uid[docker://elasticsearch1]
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
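create_connection above is a thin wrapper over the stdlib getaddrinfo/connect loop; the standard-library equivalent fails with the same [Errno 111] when nothing is listening on the port:

    import socket

    try:
        sock = socket.create_connection(('localhost', 9200), timeout=2.0)
        sock.close()
    except ConnectionRefusedError as exc:
        print(exc.errno)  # 111 when no Elasticsearch is listening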
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb717c080>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb717c2e8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb718e8d0>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent, else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb717c080>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb718e8d0>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb718e518>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
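_make_request's docstring above mentions urllib3.util.Timeout for fine-grained control; constructing one directly looks like this (the values are illustrative):

    from urllib3.util.timeout import Timeout

    # Separate budgets for the TCP handshake and for reading the response,
    # e.g. pool.urlopen('GET', '/', timeout=Timeout(connect=2.0, read=7.0))
    timeout = Timeout(connect=2.0, read=7.0)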
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
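The http.client frames above (request -> _send_request -> endheaders -> _send_output -> send -> connect) are the stock stdlib path; driving it directly reproduces the same chain:

    import http.client

    conn = http.client.HTTPConnection('localhost', 9200, timeout=2.0)
    # connect() fires lazily inside send(); with nothing listening this line
    # raises ConnectionRefusedError, exactly as in the traceback above.
    conn.request('GET', '/_cluster/health')
    resp = conn.getresponse()
    print(resp.status, resp.read())
    conn.close()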
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
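_new_conn's error wrapping, in isolation: a socket timeout becomes ConnectTimeoutError and any other socket error becomes NewConnectionError. A sketch assuming the standalone urllib3 exceptions (None stands in for the connection object):

    import socket
    from urllib3.exceptions import ConnectTimeoutError, NewConnectionError

    try:
        socket.create_connection(('localhost', 9200), timeout=2.0)
    except socket.timeout:
        raise ConnectTimeoutError(None, 'Connection to localhost timed out.')
    except OSError as exc:
        raise NewConnectionError(None, 'Failed to establish a new connection: %s' % exc)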
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb717c128>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb717c2e8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb717c080>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb717c2e8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb718e8d0>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent, else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb717c080>
_stacktrace = <traceback object at 0xffffb6efb708>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb717c128>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb717c2e8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb718e898>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
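Every failure in this log bottoms out in the same root cause: "[Errno 111] Connection refused" against localhost:9200, meaning nothing was listening where the test fixture expects Elasticsearch to be. A minimal pre-flight sketch (a hypothetical helper, not part of tests/fixtures.py) that polls the port before the suite starts, so the fixture's retry budget is not spent against a container that never came up:

import socket
import time

def wait_for_port(host="localhost", port=9200, timeout=120.0, interval=2.0):
    """Return True once a TCP connection to (host, port) succeeds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Succeeds as soon as the container actually publishes the port.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # ConnectionRefusedError ([Errno 111]) is a subclass of OSError.
            time.sleep(interval)
    return False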
__________ ERROR at setup of test_es_can_run_with_random_uid_and_write_to_bind_mounted_datadir[docker://elasticsearch1] __________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
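An incidental detail in the frames above: socket_options = [(6, 1, 1)] is urllib3 setting (IPPROTO_TCP, TCP_NODELAY, 1) on each new socket, i.e. disabling Nagle's algorithm on Linux; it is unrelated to the refused connection. A one-line check of those constants:

import socket
# Protocol number 6 is TCP; TCP_NODELAY is socket option 1 on Linux.
assert (socket.IPPROTO_TCP, socket.TCP_NODELAY) == (6, 1)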
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f4b940>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f4b898>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6fa7c18>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f4b940>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6fa7c18>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6fa7e80>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text media types default
# to a charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6f4b780>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f4b898>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f4b940>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f4b898>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6fa7c18>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f4b940>
_stacktrace = <traceback object at 0xffffb6e28dc8>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
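Note that retries here is Retry(total=0, ...): the first connection error exhausts it, so urlopen raises MaxRetryError at once, and the retry loop visible in the traceback is the `retrying` decorator around the fixture's get(), not urllib3 itself. A small sketch of that exhaustion rule (shown with the standalone urllib3 import path; this venv reaches the same class via requests.packages.urllib3):

from urllib3.util.retry import Retry

r = Retry(total=0, connect=None, read=False, redirect=None)
assert not r.is_exhausted()    # zero retries left, but none consumed yet
r = r.new(total=r.total - 1)   # what increment() does to `total` on any error
assert r.is_exhausted()        # total < 0, so MaxRetryError is raised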
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/univeral_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6f4b780>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f4b898>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6fa7be0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
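Since the whole fixture class is echoed above, the health probe it performs can be reproduced outside pytest to separate "cluster is down" from "test bug". A standalone sketch (probe_cluster is a hypothetical name; url, auth, and the node-count expectations mirror assert_healthy() and the suite's --single-node option):

import requests

def probe_cluster(url="http://localhost:9200", auth=None, single_node=True):
    """Re-run the checks from assert_healthy() in tests/fixtures.py."""
    health = requests.get(url + "/_cluster/health", auth=auth, timeout=5).json()
    nodes, status = health["number_of_nodes"], health["status"]
    if single_node:
        assert nodes == 1 and status in ("yellow", "green"), (nodes, status)
    else:
        assert nodes == 2 and status == "green", (nodes, status)
    return health

If this also raises ConnectionError, the problem is the container itself; `docker-compose -f docker-compose-<flavor>.yml logs` (as get_docker_log() does) is the next place to look.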
__________ ERROR at setup of test_IngestUserAgentPlugin_is_installed[docker://elasticsearch1] __________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb71eb0b8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb710>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb278>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
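A minimal sketch of the retries behaviour documented in the urlopen docstring above, assuming urllib3 is importable directly and localhost:9200 is the target:
# Sketch only: fine-grained retry control per the urlopen docstring above.
import urllib3
from urllib3.util.retry import Retry

pool = urllib3.HTTPConnectionPool('localhost', 9200)
retries = Retry(total=3, connect=2, read=1, redirect=0)  # per-category budgets
try:
    resp = pool.urlopen('GET', '/_cluster/health', retries=retries)
except urllib3.exceptions.MaxRetryError as exc:
    print('gave up after retries:', exc.reason)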
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb71eb0b8>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb278>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb128>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
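As the _make_request docstring notes, a Timeout object splits the connect and read phases while a bare float applies to both; a small sketch:
# Sketch only: separate connect/read timeouts via urllib3.util.Timeout.
import urllib3
from urllib3.util.timeout import Timeout

timeout = Timeout(connect=2.0, read=5.0)  # seconds per phase
pool = urllib3.HTTPConnectionPool('localhost', 9200, timeout=timeout)
# A plain float would set both phases to the same value instead:
# pool = urllib3.HTTPConnectionPool('localhost', 9200, timeout=2.0)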
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text media types have a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
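Per the three conditions listed in _send_request above, an iterable body with no explicit Content-Length is framed as Transfer-Encoding: chunked; a sketch with a hypothetical POST (it will raise if nothing is listening on 9200):
# Sketch only: an iterable body makes http.client fall back to chunked framing.
import http.client

def body():                     # size cannot be determined up front
    yield b'{"query":'
    yield b'{"match_all":{}}}'

conn = http.client.HTTPConnection('localhost', 9200)  # endpoint is illustrative
conn.request('POST', '/_search', body=body())  # chunked header added automatically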
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
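Errno 111 here simply means nothing was listening on localhost:9200 when the fixture polled it; a quick standalone probe, as a sketch:
# Sketch only: confirm whether anything is listening on the test port.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(1.0)
rc = s.connect_ex(('localhost', 9200))  # 0 means the port accepted
s.close()
print('open' if rc == 0 else 'refused/unreachable, errno %d' % rc)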
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb71eb518>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb710>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
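The retries=self.max_retries passed to urlopen above is set per adapter; a sketch of raising it at the session level (the count 3 is illustrative):
# Sketch only: a larger adapter-level retry budget for requests sessions.
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=3))  # feeds urlopen(retries=...)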
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb71eb0b8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb710>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb278>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will
# be replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
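The queue.Empty branch above only fires for a blocking pool; a sketch of triggering EmptyPoolError deliberately, using the internal _get_conn purely for illustration:
# Sketch only: a blocking pool of size 1 raises EmptyPoolError when a second
# connection is requested and none is returned within the timeout.
import urllib3

pool = urllib3.HTTPConnectionPool('localhost', 9200, maxsize=1, block=True)
held = pool._get_conn()           # check out the pool's only connection
try:
    pool._get_conn(timeout=0.5)   # nothing comes back within pool_timeout
except urllib3.exceptions.EmptyPoolError as exc:
    print('pool exhausted:', exc)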
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb71eb0b8>
_stacktrace = <traceback object at 0xffffb6df7208>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
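Given the Retry(total=0, connect=None, read=False, redirect=None) shown above, a single failed attempt exhausts the budget; a sketch of increment() doing exactly that, offline:
# Sketch only: total=0 exhausts on the first increment, yielding MaxRetryError.
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

r = Retry(total=0)
try:
    r.increment(method='GET', url='/_cluster/health')
except MaxRetryError as exc:
    print('exhausted:', exc.reason)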
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
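The fixture above wraps its HTTP helpers in @retry(**retry_settings) from the retrying package; a sketch of that pattern with illustrative settings (the suite's actual retry_settings are not shown in this log):
# Sketch only: the retrying-decorator pattern used by the fixture's helpers.
# wait/stop values here are assumptions, not the suite's real settings.
import requests
from retrying import retry

retry_settings = {'wait_fixed': 1000,            # ms between attempts
                  'stop_max_attempt_number': 30}

@retry(**retry_settings)
def get_cluster_health(url='http://localhost:9200'):
    return requests.get(url + '/_cluster/health').json()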
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb71eb518>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71eb710>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71eb240>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
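All of the above reduces to one fact: the container never answered on port 9200 within the retry window. Outside the harness the same readiness wait can be written directly, as a sketch (URL and deadline are illustrative):
# Sketch only: poll /_cluster/health until Elasticsearch answers or a deadline passes.
import time
import requests

def wait_for_es(url='http://localhost:9200', deadline=120):
    end = time.time() + deadline
    while time.time() < end:
        try:
            if requests.get(url + '/_cluster/health', timeout=2).ok:
                return True
        except requests.ConnectionError:
            time.sleep(2)   # not up yet, try again
    return False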
____________ ERROR at setup of test_IngestGeoIpPlugin_is_installed[docker://elasticsearch1] ____________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
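The socket_options = [(6, 1, 1)] in the frame above is (IPPROTO_TCP, TCP_NODELAY, 1); a sketch of the same option spelled with named constants:
# Sketch only: [(6, 1, 1)] decoded; urllib3 applies it before connecting.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
s.close()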
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
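create_connection above tries every getaddrinfo() result in turn and re-raises the last socket error; a sketch of the candidate list it walks for localhost:
# Sketch only: the address candidates create_connection iterates for localhost.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
        'localhost', 9200, socket.AF_UNSPEC, socket.SOCK_STREAM):
    print(family, sockaddr)   # e.g. both ::1 and 127.0.0.1 may appear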
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb709ada0>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb709ad30>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70bc400>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb709ada0>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70bc400>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70bcfd0>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text media types have a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb709acf8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb709ad30>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb709ada0>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb709ad30>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70bc400>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will
# be replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
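The locals above show requests' default adapter policy, Retry(total=0, connect=None, read=False, redirect=None): no retries at the urllib3 layer, so the first refused connection is fatal for the request. A sketch of how a caller could opt into retries instead, using the vendored urllib3 path that this requests 2.13 install ships (newer requests imports urllib3 directly):

import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry  # vendored path, as in this log

session = requests.Session()
# Allow a few connection retries with exponential backoff before
# MaxRetryError is raised.
retries = Retry(total=3, connect=3, backoff_factor=0.5)
session.mount('http://', HTTPAdapter(max_retries=retries))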
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb709ada0>
_stacktrace = <traceback object at 0xffffb6eab4c8>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
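Note that increment() never mutates the Retry it is called on: it builds a decremented copy, and raises MaxRetryError once that copy reports exhaustion. A minimal sketch of that exhaustion path with the same vendored module (no pool involved, so _pool stays at its None default):

from requests.packages.urllib3.util.retry import Retry
from requests.packages.urllib3.exceptions import MaxRetryError

retries = Retry(total=0)
try:
    # total drops to -1 in the copy, is_exhausted() turns True, and
    # MaxRetryError carries the underlying cause, as in the frame above.
    retries.increment(method='GET', url='/_cluster/health')
except MaxRetryError as exc:
    print(exc.reason)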
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# Yaml variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
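The fixture above leans on the retrying library: every HTTP helper is wrapped in @retry(**retry_settings), which is why each failed GET is re-attempted several times before the fixture gives up. A sketch of the pattern; the retry_settings values here are assumptions, since the log does not show the real ones:

import requests
from retrying import retry

# Hypothetical settings: poll once a second, give up after a minute.
retry_settings = {'wait_fixed': 1000, 'stop_max_delay': 60000}

@retry(**retry_settings)
def get_cluster_health(url='http://localhost:9200'):
    # Re-raised exceptions trigger another attempt until the delay cap.
    return requests.get(url + '/_cluster/health').json()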
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb709acf8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb709ad30>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70bcda0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
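Every error in this run bottoms out in the same condition: [Errno 111] Connection refused on localhost:9200, i.e. the Elasticsearch container was not accepting HTTP connections when the fixture fired. One possible mitigation, assuming the same endpoint, is a pre-flight wait before the suite starts; a sketch:

import time
import requests

def wait_for_elasticsearch(url='http://localhost:9200', attempts=30, delay=2):
    """Poll the health endpoint until it answers, or give up."""
    for _ in range(attempts):
        try:
            if requests.get(url + '/_cluster/health', timeout=2).ok:
                return True
        except requests.ConnectionError:
            pass
        time.sleep(delay)
    return False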
________ ERROR at setup of test_elasticsearch_logs_are_in_docker_logs[docker://elasticsearch1] _________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
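The ConnectionRefusedError above is the raw socket-level symptom, independent of requests/urllib3. It reproduces with the stdlib alone whenever no process is bound to the port:

import socket

try:
    socket.create_connection(('localhost', 9200), timeout=2)
except ConnectionRefusedError as e:
    print(e)  # [Errno 111] Connection refused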
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb70ffb00>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70ff898>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70ffa90>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb70ffb00>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70ffa90>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f66400>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
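_make_request above accepts either a float or a urllib3 Timeout object; the Timeout instances in the locals are the normalized form. A small sketch of the fine-grained variant the docstring describes, again via the vendored path seen in this log:

from requests.packages.urllib3.util.timeout import Timeout  # vendored path

# Separate connect and read budgets, rather than one float covering both.
t = Timeout(connect=3.0, read=10.0)
print(t.connect_timeout, t.read_timeout)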
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
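This is where httplib's lazy connect bites: send() only opens the socket (via self.connect()) on first use, so the refused connection surfaces deep inside what looks like a write. The same behaviour in isolation:

import http.client

conn = http.client.HTTPConnection('localhost', 9200, timeout=2)
try:
    # request() buffers the headers, then triggers connect() inside send().
    conn.request('GET', '/_cluster/health')
except ConnectionRefusedError as e:
    print(e)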
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
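The socket_options = [(6, 1, 1)] seen in the locals is urllib3's default nodelay setting: on Linux, (IPPROTO_TCP, TCP_NODELAY, 1) disables Nagle's algorithm on each new connection. Expressed directly:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# On Linux, (6, 1, 1) == (IPPROTO_TCP, TCP_NODELAY, enabled).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.close()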
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb7106390>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70ff898>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb70ffb00>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70ff898>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70ffa90>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will
# be replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb70ffb00>
_stacktrace = <traceback object at 0xffffb70eb908>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# Yaml variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb7106390>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb70ff898>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb70ffcc0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
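The error above is one failure reported at three layers: the socket connect() to localhost:9200 is refused ([Errno 111]), urllib3 wraps that in NewConnectionError and then MaxRetryError once Retry(total=0) is exhausted, and requests finally re-raises it as requests.exceptions.ConnectionError. A minimal sketch that reproduces the same wrapping outside the test suite, assuming only the host and port shown in this log:

    import requests

    # Same endpoint the fixture polls; with no Elasticsearch container
    # listening, connect() fails with [Errno 111] Connection refused.
    URL = 'http://localhost:9200/_cluster/health'

    try:
        requests.get(URL, timeout=5)
    except requests.exceptions.ConnectionError as err:
        # args[0] is urllib3's MaxRetryError; its .reason is the
        # NewConnectionError wrapping the original socket error.
        print(type(err.args[0]).__name__)         # MaxRetryError
        print(type(err.args[0].reason).__name__)  # NewConnectionError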
__________ ERROR at setup of test_info_level_logs_are_in_docker_logs[docker://elasticsearch1] __________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f5dd68>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f5d2e8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d3f198>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f5dd68>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d3f198>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d3f208>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6f5d470>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f5d2e8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f5dd68>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f5d2e8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d3f198>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6f5dd68>
_stacktrace = <traceback object at 0xffffb6f5ea08>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6f5d470>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f5d2e8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d3f048>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
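Each of these setup errors first passes through the @retry(**retry_settings) decorator from the retrying library before the underlying exception is re-raised (see the retrying.py frames above). retry_settings itself is defined elsewhere in tests/fixtures.py and is not visible in this excerpt, so the values below are assumptions; a minimal sketch of the same pattern:

    import requests
    from retrying import retry

    # Hypothetical values -- the real retry_settings dict is not shown
    # in this log excerpt.
    retry_settings = {
        'wait_fixed': 1000,             # wait 1 s between attempts (assumed)
        'stop_max_attempt_number': 30,  # give up after 30 tries (assumed)
    }

    @retry(**retry_settings)
    def get_cluster_health():
        # Stand-in for the fixture's self.get('/_cluster/health').json();
        # keeps raising (and retrying) until Elasticsearch answers.
        return requests.get('http://localhost:9200/_cluster/health').json()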
___________________ ERROR at setup of test_process_is_pid_1[docker://elasticsearch1] ___________________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d79160>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d794a8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d790f0>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
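The timeout locals above are urllib3 Timeout objects rather than plain floats. A minimal sketch of constructing one (the vendored import path matches the traceback; the values are illustrative, not taken from the suite):

    from requests.packages.urllib3.util.timeout import Timeout

    # Separate connect and read budgets, as urlopen's timeout parameter allows.
    timeout = Timeout(connect=3.0, read=10.0)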
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d79160>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d790f0>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d79240>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6d79518>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d794a8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d79160>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d794a8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d790f0>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d79160>
_stacktrace = <traceback object at 0xffffb6d7cf48>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
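The Retry(total=0, connect=None, read=False, redirect=None) object shown above is requests' default adapter configuration, which is why a single refused connect is immediately promoted to MaxRetryError. A sketch of mounting a more tolerant policy instead (the retry counts and backoff are illustrative assumptions):

    import requests
    from requests.adapters import HTTPAdapter
    from requests.packages.urllib3.util.retry import Retry

    session = requests.Session()
    # Allow a few connect retries with backoff rather than failing on the first refusal.
    session.mount('http://',
                  HTTPAdapter(max_retries=Retry(connect=3, total=None, backoff_factor=0.5)))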
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/univeral_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
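For reference, the fixture above hands each test an Elasticsearch() helper; a hypothetical test using it (the test name and assertion are illustrative, not part of the suite):

    def test_cluster_reports_green_or_yellow(elasticsearch):
        # get_cluster_status() wraps GET /_cluster/health as defined in the fixture.
        assert elasticsearch.get_cluster_status() in ('yellow', 'green')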
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
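The retrying.py frames above show the retrying package exhausting its attempts and re-raising the last exception. retry_settings is defined elsewhere in the suite and is not visible in this log, so the values below are assumptions; a sketch of the decorator in use:

    import requests
    from retrying import retry

    # Hypothetical stand-in for the suite's retry_settings: up to 30
    # attempts, 2 seconds between them.
    retry_settings = {'stop_max_attempt_number': 30, 'wait_fixed': 2000}

    @retry(**retry_settings)
    def get_health():
        return requests.get('http://localhost:9200/_cluster/health')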
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6d79518>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d794a8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d792e8>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
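The setup error above means the fixture asserted cluster health before any node was accepting connections. A sketch of polling the endpoint until it answers (URL taken from the log; the deadline is an illustrative assumption):

    import time
    import requests

    def wait_for_elasticsearch(url='http://localhost:9200', timeout=120):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                # Any HTTP answer means the socket is open; 200 means the API is up.
                if requests.get(url + '/_cluster/health').status_code == 200:
                    return True
            except requests.exceptions.ConnectionError:
                time.sleep(2)  # node not up yet; the log shows [Errno 111] here
        return False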
________ ERROR at setup of test_process_is_running_as_the_correct_user[docker://elasticsearch1] ________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
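The socket_options = [(6, 1, 1)] local above is urllib3's default TCP option list, written as raw numbers. With named constants (values as on Linux):

    import socket

    # (IPPROTO_TCP=6, TCP_NODELAY=1, value 1): disable Nagle's algorithm.
    opts = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
    assert opts == [(6, 1, 1)]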
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c997b8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e10278>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c998d0>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c997b8>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c998d0>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c99eb8>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6e10208>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e10278>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c997b8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e10278>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c998d0>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c997b8>
_stacktrace = <traceback object at 0xffffb6e14248>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
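Retry(total=0, connect=None, read=False, redirect=None), as shown above, is exhausted after a single failed attempt, so urllib3 gives up immediately. A sketch of that behaviour using the standalone urllib3 package (the traceback uses the copy vendored inside requests 2.13.0, but the semantics should match):

    from urllib3.util.retry import Retry
    from urllib3.exceptions import MaxRetryError

    retry = Retry(total=0, connect=None, read=False, redirect=None)
    try:
        # increment() is what the connection pool calls after a failed
        # attempt; with total=0 the new Retry object is already exhausted.
        retry.increment(method='GET', url='/_cluster/health',
                        error=OSError(111, 'Connection refused'))
    except MaxRetryError as e:
        print('giving up:', e.reason)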
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6e10208>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e10278>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c99c88>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
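In short, the test fixture came up before the cluster was reachable and its retry budget ran out. A rough sketch of the polling that the fixture's get()/assert_healthy() combination amounts to, using the same retrying library; the stop/wait values here are assumptions for illustration, since the suite's actual retry_settings are not visible in this log:

    import requests
    from retrying import retry

    @retry(stop_max_delay=60000, wait_fixed=2000)  # assumed settings
    def wait_for_cluster(url='http://localhost:9200'):
        # Raises (and is retried) while the port still refuses connections.
        return requests.get(url + '/_cluster/health').json()

    health = wait_for_cluster()
    print(health['status'], health['number_of_nodes'])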
________ ERROR at setup of test_process_is_running_the_correct_version[docker://elasticsearch1] ________
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
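Note how create_connection() above walks every address that localhost resolves to (possibly both 127.0.0.1 and ::1) and only re-raises the last socket error once all of them fail. A stdlib sketch of the same pattern:

    import socket

    err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            'localhost', 9200, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(sockaddr)
            print('connected via', sockaddr)
            break
        except OSError as e:
            err = e          # remember the failure, try the next address
        finally:
            sock.close()
    else:
        raise err            # every candidate refused -> the [Errno 111] above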
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d68860>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d68c18>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6dda668>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d68860>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6dda668>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6ddad30>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6d68748>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d68c18>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d68860>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d68c18>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6dda668>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d68860>
_stacktrace = <traceback object at 0xffffb6dd9c08>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
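Each "During handling of the above exception" marker is one layer of re-wrapping: the socket-level ConnectionRefusedError becomes urllib3's NewConnectionError, then MaxRetryError, and finally requests' ConnectionError (seen at the end of the previous error block). A minimal reproduction of that chain, assuming nothing is listening on :9200:

    import requests

    try:
        requests.get('http://localhost:9200/_cluster/health', timeout=1)
    except requests.exceptions.ConnectionError as e:
        max_retry_error = e.args[0]   # the wrapped urllib3 MaxRetryError
        print(type(max_retry_error).__name__, '->',
              type(max_retry_error.reason).__name__)
        # -> MaxRetryError -> NewConnectionError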
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset Elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6d68748>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d68c18>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6ddada0>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
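Every test in this run errors out at fixture setup for the same reason: nothing is listening on localhost:9200, so the fixture's first GET to /_cluster/health dies with [Errno 111] Connection refused before any assertion runs. The snippet below is a minimal, illustrative sketch of the health probe the fixture performs — the endpoint and the yellow/green check are taken from the log above, while the wait_for_cluster name, attempt count, and delay are assumptions, not part of the test suite:

import time
import requests

def wait_for_cluster(url='http://localhost:9200', attempts=30, delay=2):
    """Poll /_cluster/health until Elasticsearch answers or we give up."""
    last_error = None
    for _ in range(attempts):
        try:
            health = requests.get(url + '/_cluster/health').json()
            # A single node typically reports 'yellow' (unassigned replicas);
            # the two-node compose setup is expected to reach 'green'.
            if health['status'] in ('yellow', 'green'):
                return health
        except requests.exceptions.ConnectionError as err:
            # [Errno 111] Connection refused: the container is not (yet) up.
            last_error = err
        time.sleep(delay)
    raise RuntimeError('Elasticsearch never became healthy: %r' % last_error)

Against the compose setup in this log, such a probe would exhaust its attempts, which points at the container itself (its docker-compose logs would show why it exited) rather than at the tests.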
____ ERROR at setup of test_setting_node_name_with_an_environment_variable[docker://elasticsearch1] ____
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6e00d30>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c7f240>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e00ba8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6e00d30>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e00ba8>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e00cf8>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6d6aa90>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c7f240>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6e00d30>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c7f240>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6e00ba8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will
# be replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6e00d30>
_stacktrace = <traceback object at 0xffffb6f7d248>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset Elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6d6aa90>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c7f240>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6e00198>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
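The chained traceback above repeats for each test and reads bottom-up: the socket connect fails (ConnectionRefusedError), urllib3 wraps it in NewConnectionError, the Retry(total=0) policy visible in the log exhausts immediately and raises MaxRetryError, and requests finally re-wraps that as requests.exceptions.ConnectionError; the fixture's own @retry(**retry_settings) decorator then re-runs the whole chain until its limit. A minimal sketch of that wrapping, assuming only a closed local port (the port comes from the log; nothing else here is from the test suite):

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# max_retries=0 mirrors the Retry(total=0) policy shown in the traceback:
# urllib3 raises MaxRetryError on the very first connection failure.
session.mount('http://', HTTPAdapter(max_retries=0))
try:
    session.get('http://localhost:9200/_cluster/health')
except requests.exceptions.ConnectionError as err:
    # err.args[0] is urllib3's MaxRetryError; its .reason attribute is the
    # NewConnectionError carrying [Errno 111] Connection refused.
    print(type(err.args[0]).__name__, err.args[0].reason)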
__ ERROR at setup of test_setting_cluster_name_with_an_environment_variable[docker://elasticsearch1] ___
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c69400>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c69208>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71ba358>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
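The urlopen docstring above explains the retries parameter, and the locals show Retry(total=0, connect=None, read=False, redirect=None), i.e. "fail on the first connection error". A sketch of constructing and passing such an object, with module paths assumed from the vendored packages in this traceback:

from requests.packages.urllib3 import PoolManager
from requests.packages.urllib3.exceptions import MaxRetryError
from requests.packages.urllib3.util.retry import Retry

# total=0 allows zero retries: the first refused connection is
# immediately wrapped in MaxRetryError, as seen later in this log.
retries = Retry(total=0, connect=None, read=False, redirect=None)
http = PoolManager()
try:
    http.urlopen('GET', 'http://localhost:9200/_cluster/health',
                 retries=retries)
except MaxRetryError as exc:
    print(exc.reason)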
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c69400>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71ba358>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71baf28>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
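_make_request above accepts either a bare number or a urllib3 Timeout instance for per-phase control. A small sketch of the fine-grained form (import paths again assumed from this vendored install):

from requests.packages.urllib3 import HTTPConnectionPool
from requests.packages.urllib3.util.timeout import Timeout

# Separate budgets for the TCP connect and for reading the response,
# instead of one flat number applied to both phases.
timeout = Timeout(connect=2.0, read=10.0)
pool = HTTPConnectionPool('localhost', 9200, timeout=timeout)
# pool.request('GET', '/_cluster/health')  # would use both budgets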
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text media types have a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
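_send_request above decides between Content-Length and chunked transfer encoding: a sized body (str/bytes) gets Content-Length, while an iterable with no determinable length falls back to Transfer-Encoding: chunked. A toy illustration with the stdlib client; the request calls are left commented out since nothing is listening in this run:

import http.client

conn = http.client.HTTPConnection('localhost', 9200)

# Sized body -> Content-Length header is computed automatically.
# conn.request('POST', '/idx/_doc/1', body=b'{"a": 1}')

def body_gen():
    # Iterable body with unknown length -> the chunked branch above.
    yield b'{"a": '
    yield b'1}'
# conn.request('POST', '/idx/_doc/1', body=body_gen())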
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
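_new_conn above translates the raw socket error into urllib3's NewConnectionError, which requests later re-wraps. A sketch of catching the failure at the requests level, matching the final exception this log ends up with:

import requests

try:
    requests.get('http://localhost:9200/_cluster/health', timeout=2)
except requests.exceptions.ConnectionError as exc:
    # requests wraps the MaxRetryError/NewConnectionError chain shown
    # in this traceback into a single ConnectionError.
    print('Elasticsearch not reachable:', exc)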
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6c69f60>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c69208>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
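The adapter code above converts a (connect, read) tuple into a Timeout object and forwards self.max_retries to urlopen. A sketch of configuring both knobs on a session; max_retries=0 yields the Retry(total=0, read=False) object seen in these locals:

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# Zero retries: connection errors surface immediately.
session.mount('http://', HTTPAdapter(max_retries=0))
# (connect timeout, read timeout) tuple, parsed by the code above.
# session.get('http://localhost:9200/', timeout=(2, 10))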
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c69400>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c69208>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb71ba358>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c69400>
_stacktrace = <traceback object at 0xffffb6c65808>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
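increment() above returns a new Retry with decremented counters and raises MaxRetryError once any counter drops below zero; with total=0 a single refused connection is enough. A toy sketch of that bookkeeping (vendored path assumed):

from requests.packages.urllib3.util.retry import Retry

r = Retry(total=0, connect=None, read=False, redirect=None)
print(r.is_exhausted())  # False: exhaustion requires a counter below 0
# One connection error -> increment() builds a Retry with total=-1,
# sees is_exhausted() == True, and raises the MaxRetryError above.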
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
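The fixture above wraps its HTTP helpers in @retry(**retry_settings) and asserts cluster health inside __init__, so a container that never opens port 9200 turns into the setup error below once the retry budget is spent. A minimal sketch of that decorator pattern; the retry_settings values here are assumptions, since the real ones live elsewhere in tests/fixtures.py and are not shown in this log:

import requests
from retrying import retry

# Hypothetical settings: retry every second for up to 60 seconds.
retry_settings = {'wait_fixed': 1000, 'stop_max_delay': 60000}

@retry(**retry_settings)
def get_cluster_health():
    return requests.get('http://localhost:9200/_cluster/health').json()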
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6c69f60>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c69208>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb71ba550>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
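Every setup error in this run reduces to the same root cause: nothing is accepting connections on localhost:9200, so each fixture fails identically. A quick liveness probe (a generic suggestion, not part of this test suite) to run before digging through further tracebacks:

import socket

# Cheap check: can we even open a TCP connection to the ES HTTP port?
probe = socket.socket()
probe.settimeout(2)
try:
    probe.connect(('localhost', 9200))
    print('port 9200 is open')
except OSError as exc:
    print('port 9200 unreachable:', exc)  # [Errno 111] matches this log
finally:
    probe.close()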
____ ERROR at setup of test_setting_heapsize_with_an_environment_variable[docker://elasticsearch1] _____
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6bbfa20>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d494a8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6bbfcf8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6bbfa20>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6bbfcf8>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6bbfc88>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text media types have a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
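[Errno 111] Connection refused means nothing was listening on localhost:9200 when this request was made, i.e. the Elasticsearch container was not (yet) up. A minimal pre-flight probe for that condition (a hypothetical helper, not part of this suite) could be:

    import socket

    def port_is_open(host="localhost", port=9200, timeout=1.0):
        # True if a TCP connection to host:port succeeds within the timeout.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # ConnectionRefusedError, timeouts, DNS failures
            return False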
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6d49198>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d494a8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
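The TimeoutSauce branching above is what lets requests accept either a single float or a (connect, read) pair for timeout, e.g.:

    import requests

    # One float sets both the connect and the read timeout.
    requests.get("http://localhost:9200/_cluster/health", timeout=5)
    # A tuple sets them independently: (connect, read).
    requests.get("http://localhost:9200/_cluster/health", timeout=(3.05, 27))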
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6bbfa20>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d494a8>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6bbfcf8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
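urlopen() is, per its own docstring, urllib3's lowest-level request entry point. The equivalent direct call against this cluster (a sketch using the urllib3 vendored inside requests, and assuming a node is actually listening) would be:

    from requests.packages.urllib3 import HTTPConnectionPool

    pool = HTTPConnectionPool("localhost", port=9200, maxsize=1)
    # retries=0 mirrors the exhausted Retry(total=0, ...) in this traceback.
    resp = pool.urlopen("GET", "/_cluster/health", retries=0)
    print(resp.status, resp.data[:80])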
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6bbfa20>
_stacktrace = <traceback object at 0xffffb70f65c8>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
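Note Retry(total=0, connect=None, read=False, redirect=None) in the locals: requests builds it from HTTPAdapter.max_retries, so the first connection error already exhausts the budget and increment() must raise. The same behaviour in isolation (a sketch against the urllib3 vendored inside requests 2.13):

    from requests.packages.urllib3.util.retry import Retry
    from requests.packages.urllib3.exceptions import (ConnectTimeoutError,
                                                      MaxRetryError)

    retry = Retry(total=0, connect=None, read=False, redirect=None)
    try:
        # total drops to -1, is_exhausted() is True, MaxRetryError is raised.
        retry.increment(method="GET", url="/_cluster/health",
                        error=ConnectTimeoutError("connection refused"))
    except MaxRetryError as exc:
        print(exc)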
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
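All of the HTTP helpers in this fixture (get/put/post/delete, set_password, assert_healthy) are wrapped with @retry(**retry_settings) from the retrying package, which is what produces the retrying.py frames below. retry_settings itself is defined elsewhere in tests/fixtures.py, so the values in this sketch are illustrative only:

    import requests
    from retrying import retry

    # Hypothetical values; the suite defines its own retry_settings.
    retry_settings = {
        "stop_max_attempt_number": 30,  # give up after 30 tries
        "wait_fixed": 1000,             # wait 1000 ms between tries
    }

    @retry(**retry_settings)
    def get(url="http://localhost:9200/_cluster/health"):
        return requests.get(url)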
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6d49198>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d494a8>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6bbfac8>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
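End to end, then, the chain for this setup failure is: socket-level ConnectionRefusedError -> urllib3 NewConnectionError -> urllib3 MaxRetryError -> requests.exceptions.ConnectionError, and that last one is the exception calling code actually sees:

    import requests

    try:
        requests.get("http://localhost:9200/_cluster/health")
    except requests.exceptions.ConnectionError as exc:
        # Raised while the node is down; the urllib3 chain is wrapped inside.
        print("Elasticsearch not reachable:", exc)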
ERROR at setup of test_parameter_containing_underscore_with_an_environment_variable[docker://elasticsearch1]
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
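The socket_options = [(6, 1, 1)] local above is urllib3's default TCP option list: (IPPROTO_TCP, TCP_NODELAY, 1), i.e. disable Nagle's algorithm on every new connection. Applied by hand it would look like:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # (6, 1, 1) == (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) on Linux
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)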
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d102e8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d10128>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d10390>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d102e8>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d10390>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c600b8>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
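The Timeout objects threaded through these frames keep the connect deadline separate from the read deadline; _make_request() calls start_connect() on one before issuing the request. Constructed directly (a sketch with the vendored urllib3):

    from requests.packages.urllib3.util.timeout import Timeout

    timeout = Timeout(connect=3.0, read=27.0)
    timeout.start_connect()  # starts the clock _make_request consults
    print(timeout.connect_timeout, timeout.read_timeout)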
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text bodies default to a
# charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
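As the long comment in _send_request() explains, http.client only falls back to Transfer-Encoding: chunked when the caller sets neither Content-Length nor Transfer-Encoding and the body's size cannot be determined, e.g. a generator. A small illustration (hypothetical endpoint, not part of this suite):

    import requests

    def body_chunks():
        yield b"first chunk"
        yield b"second chunk"

    # A generator body has no length, so the request is sent chunked
    # with Transfer-Encoding: chunked instead of Content-Length.
    requests.post("http://example.com/upload", data=body_chunks())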
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6d10eb8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d10128>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d102e8>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d10128>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d10390>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d102e8>
_stacktrace = <traceback object at 0xffffb6dd9c48>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
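The fixture reproduced above polls the cluster until it is healthy before any test runs. A minimal sketch of that retry-on-assertion pattern with the same retrying library; the retry_settings values here are hypothetical, the real ones live elsewhere in the suite:

from retrying import retry
import requests

retry_settings = {
    'stop_max_delay': 60000,  # hypothetical: give up after 60s
    'wait_fixed': 1000,       # hypothetical: wait 1s between attempts
}

@retry(**retry_settings)
def wait_for_cluster(url='http://localhost:9200'):
    # Both a ConnectionError and a failed assert trigger another attempt.
    health = requests.get(url + '/_cluster/health').json()
    assert health['status'] in ('yellow', 'green')
    return health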
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6d10eb8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d10128>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6c609e8>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
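At the requests level, everything above surfaces as a single requests.exceptions.ConnectionError, so a caller only needs one except clause; a sketch:

import requests

def es_is_up(url='http://localhost:9200'):
    try:
        return requests.get(url + '/_cluster/health', timeout=5).ok
    except requests.exceptions.ConnectionError:
        # Raised when nothing is listening yet (the Errno 111 above).
        return False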
_ ERROR at setup of test_envar_not_including_a_dot_is_not_presented_to_elasticsearch[docker://elasticsearch1] _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
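create_connection() above tries every address that getaddrinfo() resolves and re-raises the last socket error if none of them connects. A standalone port probe built on the same loop (a sketch, not part of the test suite):

import socket

def port_open(host='localhost', port=9200, timeout=1.0):
    # Mirror of the loop above: try each resolved address in turn.
    for af, socktype, proto, _, sa in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            with socket.socket(af, socktype, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(sa)
                return True
        except OSError:
            continue
    return False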
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
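Errno 111 is ECONNREFUSED on Linux: the connection attempt reached the host, but no process was listening on port 9200, i.e. Elasticsearch had not (yet) bound the port inside the container.

import errno
print(errno.ECONNREFUSED)  # 111 on Linux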
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d0e978>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0ea58>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0e518>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
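urlopen() shown above is the lowest-level pool call; requests normally drives it, but it can be exercised directly. A sketch mirroring the parameters visible in this frame (modern urllib3 import path; assumes a node is actually listening on 9200):

import urllib3

pool = urllib3.HTTPConnectionPool('localhost', 9200)
resp = pool.urlopen('GET', '/_cluster/health',
                    retries=0, redirect=False,
                    preload_content=False, decode_content=False)
print(resp.status)
resp.release_conn()  # needed because preload_content=False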
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d0e978>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0e518>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0e438>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
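_make_request() above takes its timeouts from a urllib3 Timeout object rather than a bare float, which is what the Timeout instances in these frames are. Constructing one directly (sketch):

from urllib3.util.timeout import Timeout

# Separate budgets for establishing the TCP connection and for
# waiting on response data.
timeout = Timeout(connect=2.0, read=10.0)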
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
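Note how the layers nest before the log climbs back up: the socket-level ConnectionRefusedError is wrapped in a NewConnectionError, which becomes the .reason of a MaxRetryError, which requests finally re-raises as its own ConnectionError. A sketch of unwrapping that chain:

import requests

try:
    requests.get('http://localhost:9200/_cluster/health')
except requests.exceptions.ConnectionError as exc:
    # The urllib3 MaxRetryError is the first argument; its .reason
    # holds the NewConnectionError wrapping the original Errno 111.
    max_retry = exc.args[0]
    print(max_retry.reason)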
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6d0e6d8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0ea58>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d0e978>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0ea58>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0e518>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6d0e978>
_stacktrace = <traceback object at 0xffffb6f28a08>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
@retry(**retry_settings)
def get(self, location='/', **kwargs):
return requests.get(self.url + location, auth=self.auth, **kwargs)
@retry(**retry_settings)
def put(self, location='/', **kwargs):
return requests.put(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def post(self, location='/%s/1' % default_index, **kwargs):
return requests.post(self.url + location, headers=http_api_headers, auth=self.auth, **kwargs)
@retry(**retry_settings)
def delete(self, location='/_all', **kwargs):
return requests.delete(self.url + location, auth=self.auth, **kwargs)
def get_root_page(self):
return self.get('/').json()
def get_cluster_health(self):
return self.get('/_cluster/health').json()
def get_node_count(self):
return self.get_cluster_health()['number_of_nodes']
def get_cluster_status(self):
return self.get_cluster_health()['status']
def get_node_os_stats(self):
"""Return an array of node OS statistics"""
return self.get('/_nodes/stats/os').json()['nodes'].values()
def get_node_plugins(self):
"""Return an array of node plugins"""
nodes = self.get('/_nodes/plugins').json()['nodes'].values()
return [node['plugins'] for node in nodes]
def get_node_thread_pool_bulk_queue_size(self):
"""Return an array of thread_pool bulk queue size settings for nodes"""
nodes = self.get('/_nodes?filter_path=**.thread_pool').json()['nodes'].values()
return [node['settings']['thread_pool']['bulk']['queue_size'] for node in nodes]
def get_node_jvm_stats(self):
"""Return an array of node JVM statistics"""
nodes = self.get('/_nodes/stats/jvm').json()['nodes'].values()
return [node['jvm'] for node in nodes]
def get_node_mlockall_state(self):
"""Return an array of the mlockall value"""
nodes = self.get('/_nodes?filter_path=**.mlockall').json()['nodes'].values()
return [node['process']['mlockall'] for node in nodes]
@retry(**retry_settings)
def set_password(self, username, password):
return self.put('/_xpack/security/user/%s/_password' % username,
json={"password": password})
def query_all(self, index=default_index):
return self.get('/%s/_search' % index)
def create_index(self, index=default_index):
return self.put('/' + index)
def delete_index(self, index=default_index):
return self.delete('/' + index)
def load_index_template(self):
template = {
'template': '*',
'settings': {
'number_of_shards': 2,
'number_of_replicas': 0,
}
}
return self.put('/_template/universal_template', json=template)
def load_test_data(self):
self.create_index()
return self.post(
data=open('tests/testdata.json').read(),
params={"refresh": "wait_for"}
)
@retry(**retry_settings)
def assert_healthy(self):
if config.getoption('--single-node'):
assert self.get_node_count() == 1
assert self.get_cluster_status() in ['yellow', 'green']
else:
assert self.get_node_count() == 2
assert self.get_cluster_status() == 'green'
def uninstall_plugin(self, plugin_name):
# This will run on only one host, but this is ok for the moment
# TODO: as per http://testinfra.readthedocs.io/en/latest/examples.html#test-docker-images
uninstall_output = host.run(' '.join(["bin/elasticsearch-plugin",
"-s",
"remove",
"{}".format(plugin_name)]))
# Reset elasticsearch to its original state
self.reset()
return uninstall_output
def assert_bind_mount_data_dir_is_writable(self,
datadir1="tests/datadir1",
datadir2="tests/datadir2",
process_uid='',
datadir_uid=1000,
datadir_gid=0):
cwd = os.getcwd()
(datavolume1_path, datavolume2_path) = (os.path.join(cwd, datadir1),
os.path.join(cwd, datadir2))
config.option.mount_datavolume1 = datavolume1_path
config.option.mount_datavolume2 = datavolume2_path
# YAML variables in docker-compose (`user:`) need to be strings
config.option.process_uid = "{!s}".format(process_uid)
# Ensure defined data dirs are empty before tests
proc1 = delete_dir(datavolume1_path)
proc2 = delete_dir(datavolume2_path)
assert proc1.returncode == 0
assert proc2.returncode == 0
create_empty_dir(datavolume1_path, datadir_uid, datadir_gid)
create_empty_dir(datavolume2_path, datadir_uid, datadir_gid)
# Force Elasticsearch to re-run with new parameters
self.reset()
self.assert_healthy()
# Revert Elasticsearch back to its datadir defaults for the next tests
config.option.mount_datavolume1 = None
config.option.mount_datavolume2 = None
config.option.process_uid = ''
self.reset()
# Finally clean up the temp dirs used for bind-mounts
delete_dir(datavolume1_path)
delete_dir(datavolume2_path)
def es_cmdline(self):
return host.file("/proc/1/cmdline").content_string
def run_command_on_host(self, command):
return host.run(command)
def get_hostname(self):
return host.run('hostname').stdout.strip()
def get_docker_log(self):
proc = run(['docker-compose',
'-f',
'docker-compose-{}.yml'.format(config.getoption('--image-flavor')),
'logs',
self.get_hostname()],
stdout=PIPE)
return proc.stdout.decode()
def assert_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string in log
except AssertionError:
print(log)
raise
def assert_not_in_docker_log(self, string):
log = self.get_docker_log()
try:
assert string not in log
except AssertionError:
print(log)
raise
> return Elasticsearch()
tests/fixtures.py:222:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:33: in __init__
self.assert_healthy()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:132: in assert_healthy
assert self.get_node_count() == 1
tests/fixtures.py:69: in get_node_count
return self.get_cluster_health()['number_of_nodes']
tests/fixtures.py:66: in get_cluster_health
return self.get('/_cluster/health').json()
venv/lib/python3.6/site-packages/retrying.py:49: in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
venv/lib/python3.6/site-packages/retrying.py:212: in call
raise attempt.get()
venv/lib/python3.6/site-packages/retrying.py:247: in get
six.reraise(self.value[0], self.value[1], self.value[2])
venv/lib/python3.6/site-packages/six.py:693: in reraise
raise value
venv/lib/python3.6/site-packages/retrying.py:200: in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
tests/fixtures.py:48: in get
return requests.get(self.url + location, auth=self.auth, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:70: in get
return request('get', url, params=params, **kwargs)
venv/lib/python3.6/site-packages/requests/api.py:56: in request
return session.request(method=method, url=url, **kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
venv/lib/python3.6/site-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.adapters.HTTPAdapter object at 0xffffb6d0e6d8>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6d0ea58>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
# Send the request.
else:
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7+ versions, use buffering of HTTP
# responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 2.6 versions and back
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
> raise ConnectionError(e, request=request)
E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6d0e358>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/adapters.py:487: ConnectionError
_ ERROR at setup of test_capitalized_envvar_is_not_presented_to_elasticsearch[docker://elasticsearch1] _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
> (self.host, self.port), self.timeout, **extra_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:141:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as e:
err = e
if sock is not None:
sock.close()
sock = None
if err is not None:
> raise err
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:83:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
address = ('localhost', 9200), timeout = None, source_address = None, socket_options = [(6, 1, 1)]
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
An host of '' or port 0 tells the OS to use the default.
"""
host, port = address
if host.startswith('['):
host = host.strip('[]')
err = None
# Using the value from allowed_gai_family() in the context of getaddrinfo lets
# us select whether to work with IPv4 DNS records, IPv6 records, or both.
# The original create_connection function always returns all records.
family = allowed_gai_family()
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
> sock.connect(sa)
E ConnectionRefusedError: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py:73: ConnectionRefusedError
During handling of the above exception, another exception occurred:
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c0c898>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c27908>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f2d5f8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
> chunked=chunked)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:600:
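The urlopen() docstring above describes how the retries argument accepts an int, False, None, or a Retry object; the locals in this traceback show the test client running with Retry(total=0, ...), i.e. no retries at all. For comparison, a hedged sketch of mounting a more forgiving policy at the requests level (the numbers are arbitrary examples, not values from this test suite; with the vendored copy shown in this log the Retry import path would be requests.packages.urllib3.util.retry):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=5, connect=5, backoff_factor=0.5)  # retry refused connections, with backoff
session.mount('http://', HTTPAdapter(max_retries=retry))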
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c0c898>
conn = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>, method = 'GET'
url = '/_cluster/health'
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f2d5f8>, chunked = False
httplib_request_kw = {'body': None, 'headers': {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}}
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f2dd30>
def _make_request(self, conn, method, url, timeout=_Default, chunked=False,
**httplib_request_kw):
"""
Perform a request on a given urllib connection object taken from our
pool.
:param conn:
a connection from one of our connection pools
:param timeout:
Socket timeout in seconds for the request. This can be a
float or integer, which will set the same timeout value for
the socket connect and the socket read, or an instance of
:class:`urllib3.util.Timeout`, which gives you more fine-grained
control over your timeouts.
"""
self.num_requests += 1
timeout_obj = self._get_timeout(timeout)
timeout_obj.start_connect()
conn.timeout = timeout_obj.connect_timeout
# Trigger any extra validation we need to do.
try:
self._validate_conn(conn)
except (SocketTimeout, BaseSSLError) as e:
# Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
raise
# conn.request() calls httplib.*.request, not the method in
# urllib3.request. It also calls makefile (recv) on the socket.
if chunked:
conn.request_chunked(method, url, **httplib_request_kw)
else:
> conn.request(method, url, **httplib_request_kw)
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:356:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
def request(self, method, url, body=None, headers={}, *,
encode_chunked=False):
"""Send a complete request to the server."""
> self._send_request(method, url, body, headers, encode_chunked)
/usr/lib/python3.6/http/client.py:1239:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>, method = 'GET'
url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
encode_chunked = False
def _send_request(self, method, url, body, headers, encode_chunked):
# Honor explicitly requested Host: and Accept-Encoding: headers.
header_names = frozenset(k.lower() for k in headers)
skips = {}
if 'host' in header_names:
skips['skip_host'] = 1
if 'accept-encoding' in header_names:
skips['skip_accept_encoding'] = 1
self.putrequest(method, url, **skips)
# chunked encoding will happen if HTTP/1.1 is used and either
# the caller passes encode_chunked=True or the following
# conditions hold:
# 1. content-length has not been explicitly set
# 2. the body is a file or iterable, but not a str or bytes-like
# 3. Transfer-Encoding has NOT been explicitly set by the caller
if 'content-length' not in header_names:
# only chunk body if not explicitly set for backwards
# compatibility, assuming the client code is already handling the
# chunking
if 'transfer-encoding' not in header_names:
# if content-length cannot be automatically determined, fall
# back to chunked encoding
encode_chunked = False
content_length = self._get_content_length(body, method)
if content_length is None:
if body is not None:
if self.debuglevel > 0:
print('Unable to determine size of %r' % body)
encode_chunked = True
self.putheader('Transfer-Encoding', 'chunked')
else:
self.putheader('Content-Length', str(content_length))
else:
encode_chunked = False
for hdr, value in headers.items():
self.putheader(hdr, value)
if isinstance(body, str):
# RFC 2616 Section 3.7.1 says that text default has a
# default charset of iso-8859-1.
body = _encode(body, 'body')
> self.endheaders(body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>
message_body = None
def endheaders(self, message_body=None, *, encode_chunked=False):
"""Indicate that the last header line has been sent to the server.
This method sends the request to the server. The optional message_body
argument can be used to pass a message body associated with the
request.
"""
if self.__state == _CS_REQ_STARTED:
self.__state = _CS_REQ_SENT
else:
raise CannotSendHeader()
> self._send_output(message_body, encode_chunked=encode_chunked)
/usr/lib/python3.6/http/client.py:1234:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>
message_body = None, encode_chunked = False
def _send_output(self, message_body=None, encode_chunked=False):
"""Send the currently buffered request and clear the buffer.
Appends an extra \\r\\n to the buffer.
A message_body may be specified, to be appended to the request.
"""
self._buffer.extend((b"", b""))
msg = b"\r\n".join(self._buffer)
del self._buffer[:]
> self.send(msg)
/usr/lib/python3.6/http/client.py:1026:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>
data = b'GET /_cluster/health HTTP/1.1\r\nHost: localhost:9200\r\nUser-Agent: python-requests/2.13.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
def send(self, data):
"""Send `data' to the server.
``data`` can be a string object, a bytes object, an array object, a
file-like object that supports a .read() method, or an iterable object.
"""
if self.sock is None:
if self.auto_open:
> self.connect()
/usr/lib/python3.6/http/client.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>
def connect(self):
> conn = self._new_conn()
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:166:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>
def _new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
conn = connection.create_connection(
(self.host, self.port), self.timeout, **extra_kw)
except SocketTimeout as e:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
except SocketError as e:
raise NewConnectionError(
> self, "Failed to establish a new connection: %s" % e)
E requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>: Failed to establish a new connection: [Errno 111] Connection refused
venv/lib/python3.6/site-packages/requests/packages/urllib3/connection.py:150: NewConnectionError
During handling of the above exception, another exception occurred:
self = <requests.adapters.HTTPAdapter object at 0xffffb6c27c88>, request = <PreparedRequest [GET]>
stream = False, timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c27908>
verify = True, cert = None, proxies = OrderedDict()
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Whether to verify SSL certificates.
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
conn = self.get_connection(request.url, proxies)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {0}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
> timeout=timeout
)
venv/lib/python3.6/site-packages/requests/adapters.py:423:
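HTTPAdapter.send() above accepts timeout either as a single float or as a (connect, read) tuple, wrapping both forms in a Timeout object before calling urlopen(). An illustrative sketch of the two forms (the endpoint and values are placeholders; in the environment of this log both calls would still be refused):

import requests

try:
    requests.get('http://localhost:9200/_cluster/health', timeout=5)           # one value for connect and read
    requests.get('http://localhost:9200/_cluster/health', timeout=(3.05, 27))  # separate (connect, read) timeouts
except requests.exceptions.ConnectionError as err:
    print('request failed:', err)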
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c0c898>
method = 'GET', url = '/_cluster/health', body = None
headers = {'User-Agent': 'python-requests/2.13.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
retries = Retry(total=0, connect=None, read=False, redirect=None), redirect = False
assert_same_host = False
timeout = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6c27908>, pool_timeout = None
release_conn = False, chunked = False, body_pos = None
response_kw = {'decode_content': False, 'preload_content': False}, conn = None, release_this_conn = True
err = None, clean_exit = False
timeout_obj = <requests.packages.urllib3.util.timeout.Timeout object at 0xffffb6f2d5f8>
is_new_proxy_conn = False
def urlopen(self, method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=_Default,
pool_timeout=None, release_conn=None, chunked=False,
body_pos=None, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
the raw details.
.. note::
More commonly, it's appropriate to use a convenience method provided
by :class:`.RequestMethods`, such as :meth:`request`.
.. note::
`release_conn` will only behave as expected if
`preload_content=False` because we want to make
`preload_content=False` the default behaviour someday soon without
breaking backwards compatibility.
:param method:
HTTP request method (such as GET, POST, PUT, etc.)
:param body:
Data to send in the request body (useful for creating
POST requests, see HTTPConnectionPool.post_url for
more convenience).
:param headers:
Dictionary of custom headers to send, such as User-Agent,
If-None-Match, etc. If None, pool headers are used. If provided,
these headers completely replace any pool-specific headers.
:param retries:
Configure the number of retries to allow before raising a
:class:`~urllib3.exceptions.MaxRetryError` exception.
Pass ``None`` to retry until you receive a response. Pass a
:class:`~urllib3.util.retry.Retry` object for fine-grained control
over different types of retries.
Pass an integer number to retry connection errors that many times,
but no other types of errors. Pass zero to never retry.
If ``False``, then retries are disabled and any exception is raised
immediately. Also, instead of raising a MaxRetryError on redirects,
the redirect response will be returned.
:type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
:param redirect:
If True, automatically handle redirects (status codes 301, 302,
303, 307, 308). Each redirect counts as a retry. Disabling retries
will disable redirect, too.
:param assert_same_host:
If ``True``, will make sure that the host of the pool requests is
consistent else will raise HostChangedError. When False, you can
use the pool on an HTTP proxy and request foreign hosts.
:param timeout:
If specified, overrides the default timeout for this one
request. It may be a float (in seconds) or an instance of
:class:`urllib3.util.Timeout`.
:param pool_timeout:
If set and the pool is set to block=True, then this method will
block for ``pool_timeout`` seconds and raise EmptyPoolError if no
connection is available within the time period.
:param release_conn:
If False, then the urlopen call will not release the connection
back into the pool once a response is received (but will release if
you read the entire contents of the response such as when
`preload_content=True`). This is useful if you're not preloading
the response's content immediately. You will need to call
``r.release_conn()`` on the response ``r`` to return the connection
back into the pool. If None, it takes the value of
``response_kw.get('preload_content', True)``.
:param chunked:
If True, urllib3 will send the body using chunked transfer
encoding. Otherwise, urllib3 will send the body using the standard
content-length form. Defaults to False.
:param int body_pos:
Position to seek to in file-like body in the event of a retry or
redirect. Typically this won't need to be set because urllib3 will
auto-populate the value when needed.
:param \\**response_kw:
Additional parameters are passed to
:meth:`urllib3.response.HTTPResponse.from_httplib`
"""
if headers is None:
headers = self.headers
if not isinstance(retries, Retry):
retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
if release_conn is None:
release_conn = response_kw.get('preload_content', True)
# Check host
if assert_same_host and not self.is_same_host(url):
raise HostChangedError(self, url, retries)
conn = None
# Track whether `conn` needs to be released before
# returning/raising/recursing. Update this variable if necessary, and
# leave `release_conn` constant throughout the function. That way, if
# the function recurses, the original value of `release_conn` will be
# passed down into the recursive call, and its value will be respected.
#
# See issue #651 [1] for details.
#
# [1] <https://github.com/shazow/urllib3/issues/651>
release_this_conn = release_conn
# Merge the proxy headers. Only do this in HTTP. We have to copy the
# headers dict so we can safely change it without those changes being
# reflected in anyone else's copy.
if self.scheme == 'http':
headers = headers.copy()
headers.update(self.proxy_headers)
# Must keep the exception bound to a separate variable or else Python 3
# complains about UnboundLocalError.
err = None
# Keep track of whether we cleanly exited the except block. This
# ensures we do proper cleanup in finally.
clean_exit = False
# Rewind body position, if needed. Record current position
# for future rewinds in the event of a redirect/retry.
body_pos = set_file_position(body, body_pos)
try:
# Request a connection from the queue.
timeout_obj = self._get_timeout(timeout)
conn = self._get_conn(timeout=pool_timeout)
conn.timeout = timeout_obj.connect_timeout
is_new_proxy_conn = self.proxy is not None and not getattr(conn, 'sock', None)
if is_new_proxy_conn:
self._prepare_proxy(conn)
# Make the request on the httplib connection object.
httplib_response = self._make_request(conn, method, url,
timeout=timeout_obj,
body=body, headers=headers,
chunked=chunked)
# If we're going to release the connection in ``finally:``, then
# the response doesn't need to know about the connection. Otherwise
# it will also try to release it and we'll have a double-release
# mess.
response_conn = conn if not release_conn else None
# Pass method to Response for length checking
response_kw['request_method'] = method
# Import httplib's response into our own wrapper object
response = self.ResponseCls.from_httplib(httplib_response,
pool=self,
connection=response_conn,
retries=retries,
**response_kw)
# Everything went great!
clean_exit = True
except queue.Empty:
# Timed out by queue.
raise EmptyPoolError(self, "No pool connections are available.")
except (BaseSSLError, CertificateError) as e:
# Close the connection. If a connection is reused on which there
# was a Certificate error, the next request will certainly raise
# another Certificate error.
clean_exit = False
raise SSLError(e)
except SSLError:
# Treat SSLError separately from BaseSSLError to preserve
# traceback.
clean_exit = False
raise
except (TimeoutError, HTTPException, SocketError, ProtocolError) as e:
# Discard the connection for these exceptions. It will be
# be replaced during the next _get_conn() call.
clean_exit = False
if isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
e = ProxyError('Cannot connect to proxy.', e)
elif isinstance(e, (SocketError, HTTPException)):
e = ProtocolError('Connection aborted.', e)
retries = retries.increment(method, url, error=e, _pool=self,
> _stacktrace=sys.exc_info()[2])
venv/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py:649:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Retry(total=0, connect=None, read=False, redirect=None), method = 'GET', url = '/_cluster/health'
response = None
error = NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>: Failed to establish a new connection: [Errno 111] Connection refused',)
_pool = <requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0xffffb6c0c898>
_stacktrace = <traceback object at 0xffffb6c5ba88>
def increment(self, method=None, url=None, response=None, error=None,
_pool=None, _stacktrace=None):
""" Return a new Retry object with incremented retry counters.
:param response: A response object, or None, if the server did not
return a response.
:type response: :class:`~urllib3.response.HTTPResponse`
:param Exception error: An error encountered during the request, or
None if the response was received successfully.
:return: A new ``Retry`` object.
"""
if self.total is False and error:
# Disabled, indicate to re-raise the error.
raise six.reraise(type(error), error, _stacktrace)
total = self.total
if total is not None:
total -= 1
connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None
if error and self._is_connection_error(error):
# Connect retry?
if connect is False:
raise six.reraise(type(error), error, _stacktrace)
elif connect is not None:
connect -= 1
elif error and self._is_read_error(error):
# Read retry?
if read is False or not self._is_method_retryable(method):
raise six.reraise(type(error), error, _stacktrace)
elif read is not None:
read -= 1
elif response and response.get_redirect_location():
# Redirect retry?
if redirect is not None:
redirect -= 1
cause = 'too many redirects'
redirect_location = response.get_redirect_location()
status = response.status
else:
# Incrementing because of a server error like a 500 in
# status_forcelist and a the given method is in the whitelist
cause = ResponseError.GENERIC_ERROR
if response and response.status:
cause = ResponseError.SPECIFIC_ERROR.format(
status_code=response.status)
status = response.status
history = self.history + (RequestHistory(method, url, error, status, redirect_location),)
new_retry = self.new(
total=total,
connect=connect, read=read, redirect=redirect,
history=history)
if new_retry.is_exhausted():
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
E requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /_cluster/health (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xffffb6f2dd68>: Failed to establish a new connection: [Errno 111] Connection refused',))
venv/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py:376: MaxRetryError
During handling of the above exception, another exception occurred:
host = <testinfra.host.Host object at 0xffffb739e898>
@fixture()
def elasticsearch(host):
class Elasticsearch():
bootstrap_pwd = "pleasechangeme"
def __init__(self):
self.url = 'http://localhost:9200'
if config.getoption('--image-flavor') == 'platinum':
self.auth = HTTPBasicAuth('elastic', Elasticsearch.bootstrap_pwd)
else:
self.auth = ''
self.assert_healthy()
self.process = host.process.get(comm='java')
# Start each test with a clean slate.
assert self.load_index_template().status_code == codes.ok
assert self.delete().status_code == codes.ok
def reset(self):
"""Reset Elasticsearch by destroying and recreating the containers."""
pytest_unconfigure(config)
pytest_configure(config)
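The fixture above calls self.assert_healthy() (its body is not shown in this excerpt) before grabbing the java process, so every refused connection in this log means the node was probed before it started listening. A hypothetical wait-for-healthy helper such a fixture could use; the URL, the status check, and the 30 x 2 s polling budget are assumptions for illustration, not the suite's actual code:

import time
import requests

def wait_for_cluster(url='http://localhost:9200', attempts=30, delay=2):
    """Poll _cluster/health until the node reports yellow or green."""
    for _ in range(attempts):
        try:
            r = requests.get(url + '/_cluster/health', timeout=2)
            if r.status_code == 200 and r.json().get('status') in ('yellow', 'green'):
                return r.json()
        except requests.exceptions.ConnectionError:
            pass  # node not listening yet; keep polling
        time.sleep(delay)
    raise AssertionError('cluster did not become healthy in time')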