Multi-stage Dockerfile for an Elasticsearch container which includes Kibana settings
ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION} AS elasticbase
# copy index files to container
COPY ./elastic/index-templates /home/elasticsearch/index-templates/
# copy management scripts to container - required to send index files to the Elasticsearch REST endpoint later
COPY << management script - not supplied in this gist >>
# script to check elasticsearch status before uploading settings
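The status-check script itself is not supplied in the gist; a minimal sketch of the idea, assuming Elasticsearch listens on localhost:9200 inside the build container and `curl` is available (all names here are hypothetical):

```shell
#!/usr/bin/env bash
# wait-for-es.sh (hypothetical name) - poll the cluster health endpoint until
# Elasticsearch reports at least yellow status, or give up after N attempts.
wait_for_es() {
  local url="${1:-http://localhost:9200}"
  local attempts="${2:-30}"
  local i
  for ((i = 1; i <= attempts; i++)); do
    if curl -fsS "${url}/_cluster/health?wait_for_status=yellow&timeout=1s" >/dev/null 2>&1; then
      return 0  # cluster is reachable and at least yellow
    fi
    sleep 2
  done
  echo "elasticsearch did not come up after ${attempts} attempts" >&2
  return 1
}
```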
# copy elasticsearch config to the correct location in the container
COPY --chown=elasticsearch:elasticsearch ./elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
# required to install kibana with rpm
COPY ./kibana.repo /etc/yum.repos.d/kibana.repo
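For reference, a typical `kibana.repo` definition follows Elastic's RPM install docs; the 7.x baseurl below is an assumption and should match the major version behind ${ELK_VERSION}:

```ini
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```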
# REQUIRED: to be able to start the kibana service with systemctl
COPY ./ /usr/bin/systemctl
RUN /usr/local/bin/ eswrapper & \
chmod 0775 && \
./ && \
echo "running template upload" && \
<< run management script here to upload index-templates to elasticsearch >> && \
echo "finished template upload"
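The elided management script would, in essence, PUT each JSON file from the templates directory to Elasticsearch; a hedged sketch with hypothetical names (`_index_template` is the composable-template endpoint in ES 7.8+; older versions use `_template`):

```shell
# upload-templates.sh (hypothetical) - push every *.json index template to ES.
template_name() {
  # derive the template name from the file name, e.g. logs.json -> logs
  basename "$1" .json
}

upload_templates() {
  local es_url="${1:-http://localhost:9200}"
  local dir="${2:-/home/elasticsearch/index-templates}"
  local f
  for f in "${dir}"/*.json; do
    curl -fsS -X PUT "${es_url}/_index_template/$(template_name "${f}")" \
      -H 'Content-Type: application/json' \
      --data-binary "@${f}"
  done
}
```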
RUN echo "install kibana" && \
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch && \
yum install -y kibana
# required to start kibana NOTE: this path is DIFFERENT FOR CONTAINERS
COPY --chown=1000:0 ./kibana-build.yml /etc/kibana/kibana.yml
# copy all kibana visualizations/dashboards etc. for Kibana management script
COPY ./kibana/spaces/ /home/kibana/spaces/
RUN chmod +x /usr/bin/systemctl
RUN /bin/systemctl daemon-reload
# enable kibana and start - uses the modified systemd (python version) from earlier
RUN /bin/systemctl enable kibana.service && \
systemctl start kibana.service & \
/usr/local/bin/ eswrapper & \
<< upload all settings to the KIBANA REST endpoint using your management script >> && \
echo "finished kibana settings upload"
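The Kibana upload placeholder maps naturally onto Kibana's saved-objects import API (the `kbn-xsrf` header is mandatory for Kibana API calls); a sketch assuming each space directory holds an `.ndjson` export, with hypothetical names throughout:

```shell
# import-kibana-objects.sh (hypothetical) - import one .ndjson export into a space.
space_import_url() {
  # space-scoped Kibana APIs are prefixed with /s/<space-id>
  echo "${1}/s/${2}/api/saved_objects/_import?overwrite=true"
}

import_space_objects() {
  local kibana_url="${1:-http://localhost:5601}"
  local space="$2"
  local file="$3"
  curl -fsS -X POST "$(space_import_url "${kibana_url}" "${space}")" \
    -H 'kbn-xsrf: true' \
    --form "file=@${file}"
}
```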
# final image
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
# copy the indexes from the previous stage as the final stage of the multi-stage Dockerfile
COPY --from=elasticbase /usr/share/elasticsearch/data/ /usr/share/elasticsearch/data/
# POSSIBLE IMPROVEMENTS - copying the data folder directly may break between versions; to avoid that:
# 1. run the saved objects API against the elasticbase image to properly export the .kibana_1 index & others
# 2. copy the exported items into the 'final image' using a shell script
# 3. run the saved objects API on the final image to import them
# 4. delete the copied items
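Step 1 of the list above could lean on Kibana's saved-objects `_export` endpoint rather than copying the `.kibana_1` index directly; a hedged sketch with hypothetical names:

```shell
# export-kibana-objects.sh (hypothetical) - dump dashboards/visualizations/index
# patterns from the build-stage Kibana to an .ndjson file for later import.
export_saved_objects() {
  local kibana_url="${1:-http://localhost:5601}"
  local out="${2:-/tmp/saved_objects.ndjson}"
  curl -fsS -X POST "${kibana_url}/api/saved_objects/_export" \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d '{"type":["dashboard","visualization","index-pattern"]}' \
    -o "${out}"
}
```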