@akaanirban
Created November 8, 2023 18:39
Reverse Engineer docker image to build Dockerfile using docker history

The following shows how you can, to an extent, reverse engineer a Docker image to recover its Dockerfile using `docker history`. You could use a more sophisticated tool like `dive`, but that has problems of its own.
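
If you do want to try `dive`, it can be run from its own container without installing anything (invocation per the dive README; mounting the Docker socket lets it inspect local images):

```sh
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    wagoodman/dive:latest nvcr.io/nvidia/pytorch:23.10-py3
```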

Let's assume you have the image `nvcr.io/nvidia/pytorch:23.10-py3`.

Run the following command to create a semi-correct Dockerfile: `docker history --no-trunc nvcr.io/nvidia/pytorch:23.10-py3 --format '{{ .CreatedBy }}' | tail -r > Dockerfile`. Note that `tail -r` only exists on BSD/macOS; on GNU/Linux, pipe through `tac` instead (see the sketch below).
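
A GNU/Linux-portable variant of the same command (a minimal sketch; the image name is just the example from above):

```sh
# docker history prints layers newest-first; tac reverses them into build order
docker history --no-trunc nvcr.io/nvidia/pytorch:23.10-py3 \
    --format '{{ .CreatedBy }}' | tac > Dockerfile
```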

The resulting Dockerfile:

/bin/sh -c #(nop)  ARG RELEASE
/bin/sh -c #(nop)  ARG LAUNCHPAD_BUILD_ARCH
/bin/sh -c #(nop)  LABEL org.opencontainers.image.ref.name=ubuntu
/bin/sh -c #(nop)  LABEL org.opencontainers.image.version=22.04
/bin/sh -c #(nop) ADD file:8540670760767f19eaf101fbce1da1881a2f24a7d65da6abdedc644b8fb00463 in / 
/bin/sh -c #(nop)  CMD ["/bin/bash"]
RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         apt-utils         build-essential         ca-certificates         curl         libncurses5         libncursesw5         patch         wget         rsync         unzip         jq         gnupg         libtcmalloc-minimal4 # buildkit
ARG CUDA_VERSION
ARG CUDA_DRIVER_VERSION
ARG JETPACK_HOST_MOUNTS
ENV CUDA_VERSION=12.2.2.009 CUDA_DRIVER_VERSION=535.104.05 CUDA_CACHE_DISABLE=1 NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
RUN |3 CUDA_VERSION=12.2.2.009 CUDA_DRIVER_VERSION=535.104.05 JETPACK_HOST_MOUNTS= /bin/sh -c /nvidia/build-scripts/installCUDA.sh # buildkit
RUN |3 CUDA_VERSION=12.2.2.009 CUDA_DRIVER_VERSION=535.104.05 JETPACK_HOST_MOUNTS= /bin/sh -c cp -vprd /nvidia/. /  &&  patch -p0 < /etc/startup_scripts.patch  &&  rm -f /etc/startup_scripts.patch # buildkit
ENV _CUDA_COMPAT_PATH=/usr/local/cuda/compat ENV=/etc/shinit_v2 BASH_ENV=/etc/bash.bashrc SHELL=/bin/bash NVIDIA_REQUIRE_CUDA=cuda>=9.0
LABEL com.nvidia.volumes.needed=nvidia_driver com.nvidia.cuda.version=9.0
ARG NCCL_VERSION
ARG CUBLAS_VERSION
ARG CUFFT_VERSION
ARG CURAND_VERSION
ARG CUSPARSE_VERSION
ARG CUSOLVER_VERSION
ARG CUTENSOR_VERSION
ARG NPP_VERSION
ARG NVJPEG_VERSION
ARG CUDNN_VERSION
ARG TRT_VERSION
ARG TRTOSS_VERSION
ARG NSIGHT_SYSTEMS_VERSION
ARG NSIGHT_COMPUTE_VERSION
ENV NCCL_VERSION=2.19.3 CUBLAS_VERSION=12.2.5.6 CUFFT_VERSION=11.0.8.103 CURAND_VERSION=10.3.3.141 CUSPARSE_VERSION=12.1.2.141 CUSOLVER_VERSION=11.5.2.141 CUTENSOR_VERSION=1.7.0.1 NPP_VERSION=12.2.1.4 NVJPEG_VERSION=12.2.2.4 CUDNN_VERSION=8.9.5.29 TRT_VERSION=8.6.1.6+cuda12.0.1.011 TRTOSS_VERSION=23.10 NSIGHT_SYSTEMS_VERSION=2023.3.1.92 NSIGHT_COMPUTE_VERSION=2023.2.2.3
RUN |17 CUDA_VERSION=12.2.2.009 CUDA_DRIVER_VERSION=535.104.05 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.19.3 CUBLAS_VERSION=12.2.5.6 CUFFT_VERSION=11.0.8.103 CURAND_VERSION=10.3.3.141 CUSPARSE_VERSION=12.1.2.141 CUSOLVER_VERSION=11.5.2.141 CUTENSOR_VERSION=1.7.0.1 NPP_VERSION=12.2.1.4 NVJPEG_VERSION=12.2.2.4 CUDNN_VERSION=8.9.5.29 TRT_VERSION=8.6.1.6+cuda12.0.1.011 TRTOSS_VERSION=23.10 NSIGHT_SYSTEMS_VERSION=2023.3.1.92 NSIGHT_COMPUTE_VERSION=2023.2.2.3 /bin/sh -c /nvidia/build-scripts/installNCCL.sh  && /nvidia/build-scripts/installLIBS.sh  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUTENSOR.sh # buildkit
LABEL com.nvidia.nccl.version=2.19.3 com.nvidia.cublas.version=12.2.5.6 com.nvidia.cufft.version=11.0.8.103 com.nvidia.curand.version=10.3.3.141 com.nvidia.cusparse.version=12.1.2.141 com.nvidia.cusolver.version=11.5.2.141 com.nvidia.cutensor.version=1.7.0.1 com.nvidia.npp.version=12.2.1.4 com.nvidia.nvjpeg.version=12.2.2.4 com.nvidia.cudnn.version=8.9.5.29 com.nvidia.tensorrt.version=8.6.1.6+cuda12.0.1.011 com.nvidia.tensorrtoss.version=23.10 com.nvidia.nsightsystems.version=2023.3.1.92 com.nvidia.nsightcompute.version=2023.2.2.3
ARG DALI_VERSION
ARG DALI_BUILD
ARG POLYGRAPHY_VERSION
ARG TRANSFORMER_ENGINE_VERSION
ENV DALI_VERSION=1.30.0 DALI_BUILD=9783408 POLYGRAPHY_VERSION=0.49.0 TRANSFORMER_ENGINE_VERSION=0.12
ADD docs.tgz / # buildkit
RUN |21 CUDA_VERSION=12.2.2.009 CUDA_DRIVER_VERSION=535.104.05 JETPACK_HOST_MOUNTS= NCCL_VERSION=2.19.3 CUBLAS_VERSION=12.2.5.6 CUFFT_VERSION=11.0.8.103 CURAND_VERSION=10.3.3.141 CUSPARSE_VERSION=12.1.2.141 CUSOLVER_VERSION=11.5.2.141 CUTENSOR_VERSION=1.7.0.1 NPP_VERSION=12.2.1.4 NVJPEG_VERSION=12.2.2.4 CUDNN_VERSION=8.9.5.29 TRT_VERSION=8.6.1.6+cuda12.0.1.011 TRTOSS_VERSION=23.10 NSIGHT_SYSTEMS_VERSION=2023.3.1.92 NSIGHT_COMPUTE_VERSION=2023.2.2.3 DALI_VERSION=1.30.0 DALI_BUILD=9783408 POLYGRAPHY_VERSION=0.49.0 TRANSFORMER_ENGINE_VERSION=0.12 /bin/sh -c echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf  && echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf # buildkit
ARG _LIBPATH_SUFFIX
ENV PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
COPY entrypoint/ /opt/nvidia/ # buildkit
ENV NVIDIA_PRODUCT_NAME=CUDA
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]
COPY NVIDIA_Deep_Learning_Container_License.pdf /workspace/ # buildkit
RUN /bin/sh -c export DEBIAN_FRONTEND=noninteractive  && apt-get update  && apt-get install -y --no-install-recommends         build-essential         git         libglib2.0-0         less         libnl-route-3-200         libnl-3-dev         libnl-route-3-dev         libnuma-dev         libnuma1         libpmi2-0-dev         nano         numactl         openssh-client         vim         wget  && rm -rf /var/lib/apt/lists/* # buildkit
ARG GDRCOPY_VERSION
ARG HPCX_VERSION
ARG RDMACORE_VERSION
ARG MOFED_VERSION=5.4-rdmacore39.0
ARG OPENUCX_VERSION
ARG OPENMPI_VERSION
ENV GDRCOPY_VERSION=2.3 HPCX_VERSION=2.16rc4 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.15.0 OPENMPI_VERSION=4.1.5rc2 RDMACORE_VERSION=39.0
ARG TARGETARCH
RUN |7 GDRCOPY_VERSION=2.3 HPCX_VERSION=2.16rc4 RDMACORE_VERSION=39.0 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.15.0 OPENMPI_VERSION=4.1.5rc2 TARGETARCH=arm64 /bin/sh -c cd /nvidia  && ( export DEBIAN_FRONTEND=noninteractive        && apt-get update                            && apt-get install -y --no-install-recommends              libibverbs1                                  libibverbs-dev                               librdmacm1                                   librdmacm-dev                                libibumad3                                   libibumad-dev                                ibverbs-utils                                ibverbs-providers                     && rm -rf /var/lib/apt/lists/*               && rm $(dpkg-query -L                                    libibverbs-dev                               librdmacm-dev                                libibumad-dev                            | grep "\(\.so\|\.a\)$")          )                                            && ( cd opt/gdrcopy/                              && dpkg -i libgdrapi_*.deb                   )                                         && ( cp -r opt/hpcx /opt/                                         && cp etc/ld.so.conf.d/hpcx.conf /etc/ld.so.conf.d/          && ln -sf /opt/hpcx/ompi /usr/local/mpi                      && ln -sf /opt/hpcx/ucx  /usr/local/ucx                      && sed -i 's/^\(hwloc_base_binding_policy\) = core$/\1 = none/' /opt/hpcx/ompi/etc/openmpi-mca-params.conf         && sed -i 's/^\(btl = self\)$/#\1/'                             /opt/hpcx/ompi/etc/openmpi-mca-params.conf       )                                                         && ldconfig # buildkit
ENV OPAL_PREFIX=/opt/hpcx/ompi PATH=/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin
ENV OMPI_MCA_coll_hcoll_enable=0
COPY cuda-*.patch /tmp # buildkit
RUN |7 GDRCOPY_VERSION=2.3 HPCX_VERSION=2.16rc4 RDMACORE_VERSION=39.0 MOFED_VERSION=5.4-rdmacore39.0 OPENUCX_VERSION=1.15.0 OPENMPI_VERSION=4.1.5rc2 TARGETARCH=arm64 /bin/sh -c export DEVEL=1 BASE=0  && /nvidia/build-scripts/installNCU.sh  && /nvidia/build-scripts/installCUDA.sh  && /nvidia/build-scripts/installLIBS.sh  && /nvidia/build-scripts/installNCCL.sh  && /nvidia/build-scripts/installCUDNN.sh  && /nvidia/build-scripts/installCUTENSOR.sh  && /nvidia/build-scripts/installTRT.sh  && /nvidia/build-scripts/installNSYS.sh  && if [ -f "/tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch" ]; then patch -p0 < /tmp/cuda-${_CUDA_VERSION_MAJMIN}.patch; fi  && rm -f /tmp/cuda-*.patch # buildkit
ENV LIBRARY_PATH=/usr/local/cuda/lib64/stubs:
ENV NVIDIA_PRODUCT_NAME=PyTorch
ARG NVIDIA_PYTORCH_VERSION
ARG PYTORCH_BUILD_VERSION
ENV PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 PYTORCH_VERSION=2.1.0a0+32f93b1 PYTORCH_BUILD_NUMBER=0 NVIDIA_PYTORCH_VERSION=23.10
LABEL com.nvidia.pytorch.version=2.1.0a0+32f93b1
ARG TARGETARCH
ARG PYVER=3.10
ARG L4T=0
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c export PYSFX=`echo "$PYVER" | cut -c1-1` &&     export DEBIAN_FRONTEND=noninteractive &&     apt-get update &&     apt-get install -y --no-install-recommends         python$PYVER-dev         python$PYSFX         python$PYSFX-dev         python$PYSFX-distutils         python-is-python$PYSFX         autoconf         automake         libatlas-base-dev         libgoogle-glog-dev         libbz2-dev         libleveldb-dev         liblmdb-dev         libprotobuf-dev         libsnappy-dev         libtool         nasm         protobuf-compiler         pkg-config         unzip         sox         libsndfile1         libpng-dev         libhdf5-103         libhdf5-dev         gfortran         rapidjson-dev         ninja-build         libedit-dev         build-essential         patchelf      && rm -rf /var/lib/apt/lists/* # buildkit
ENV PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
ENV SETUPTOOLS_USE_DISTUTILS=stdlib
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c curl -O https://bootstrap.pypa.io/get-pip.py &&     python get-pip.py &&     rm get-pip.py # buildkit
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir pip setuptools &&     pip install --no-cache-dir cmake # buildkit
WORKDIR /opt
ENV OPENBLAS_VERSION=0.3.23
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c wget -q -O - https://github.com/xianyi/OpenBLAS/archive/refs/tags/v${OPENBLAS_VERSION}.tar.gz | tar -xzf - &&     cd OpenBLAS-${OPENBLAS_VERSION} &&     time make FC=gfortran USE_OPENMP=1 -j &&     time make PREFIX=/usr/local install &&     cd ../ &&     rm -rf OpenBLAS-${OPENBLAS_VERSION} # buildkit
WORKDIR /opt/pytorch
COPY . . # buildkit
ENV PYTHONIOENCODING=utf-8
ENV LC_ALL=C.UTF-8
ENV PIP_DEFAULT_TIMEOUT=100
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir         numpy==1.22.2         scipy==1.8.1         "PyYAML>=5.4.1"         astunparse         typing_extensions         cffi         spacy         mock         tqdm         librosa==0.9.2         expecttest==0.1.3         hypothesis==5.35.1         xdoctest==1.0.2         pytest         pytest-xdist         pytest-rerunfailures         pytest-shard         pytest-flakefinder         pybind11         Cython         "regex>=2020.1.8"         protobuf==3.20.1 &&     if [[ $TARGETARCH = "amd64" ]] ; then pip install --no-cache-dir mkl==2021.1.1 mkl-include==2021.1.1 mkl-devel==2021.1.1 ; fi # buildkit
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c find /usr/local/lib -type f -name "libtbb*" ! -regex '.*/libtbb.*\.so\.[0-9]*' -exec rm {} \; # buildkit
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c git config --global url."https://github".insteadOf git://github &&     pip install --no-cache-dir notebook==6.4.10 jupyterlab==2.3.2 python-hostlist traitlets==5.9.0 &&     pip install --no-cache-dir tensorboard==2.9.0 # buildkit
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c PATCHED_FILE=$(python -c "from tensorboard.plugins.core import core_plugin as _; print(_.__file__)") &&     sed -i 's/^\( *"--bind_all",\)$/\1 default=True,/' "$PATCHED_FILE" &&     test $(grep '^ *"--bind_all", default=True,$' "$PATCHED_FILE" | wc -l) -eq 1 # buildkit
ENV NVM_DIR=/usr/local/nvm
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c pip install --disable-pip-version-check --no-cache-dir git+https://github.com/cliffwoolley/jupyter_tensorboard.git@0.2.0+nv21.03  && mkdir -p $NVM_DIR  && curl -Lo- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.2/install.sh | bash  && source "$NVM_DIR/nvm.sh"  && nvm install 16.20.2 node  && jupyter labextension install jupyterlab_tensorboard  && jupyter serverextension enable jupyterlab  && pip install --no-cache-dir jupytext  && jupyter labextension install jupyterlab-jupytext@1.2.2  && ( cd /usr/local/share/jupyter/lab/staging       && npm prune --production )  && npm cache clean --force  && rm -rf /usr/local/share/.cache  && echo "source $NVM_DIR/nvm.sh" >> /etc/bash.bashrc  && mv /root/.jupyter/jupyter_notebook_config.json /usr/local/etc/jupyter/  && jupyter lab clean # buildkit
COPY jupyter_notebook_config.py /usr/local/etc/jupyter/ # buildkit
ENV JUPYTER_PORT=8888
ENV TENSORBOARD_PORT=6006
EXPOSE map[8888/tcp:{}]
EXPOSE map[6006/tcp:{}]
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c OPENCV_VERSION=4.7.0 &&     cd / &&     wget -q -O - https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.tar.gz | tar -xzf - &&     cd /opencv-${OPENCV_VERSION} &&     cmake -GNinja -Bbuild -H.           -DWITH_CUDA=OFF -DWITH_1394=OFF           -DPYTHON3_PACKAGES_PATH="/usr/local/lib/python${PYVER}/dist-packages"           -DBUILD_opencv_cudalegacy=OFF -DBUILD_opencv_stitching=OFF -DWITH_IPP=OFF -DWITH_PROTOBUF=OFF &&     cmake --build build --target install &&     cd modules/python/package &&     pip install --no-cache-dir --disable-pip-version-check -v . &&     rm -rf /opencv-${OPENCV_VERSION} # buildkit
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c cd magma-cuda &&     cmake -H. -Bbuild -DUSE_FORTRAN=OFF -DGPU_TARGET="All" -DBUILD_SHARED_LIBS=OFF -DCMAKE_CXX_FLAGS="-fPIC" -DCMAKE_C_FLAGS="-fPIC" -DCUDA_NVCC_FLAGS="-Xfatbin;-compress-all;-DHAVE_CUBLAS;-std=c++11;--threads=0;" -GNinja &&     cmake --build build --target install &&     rm -r ./build # buildkit
ENV UCC_CL_BASIC_TLS=^sharp
ENV TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX
ENV PYTORCH_HOME=/opt/pytorch/pytorch
ENV CUDA_HOME=/usr/local/cuda
ENV TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
ENV USE_EXPERIMENTAL_CUDNN_V8_API=1
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c mkdir -p /tmp/pip/     && cp /opt/transfer/torch*.whl /tmp/pip/.     && pip install /tmp/pip/torch*.whl     && patchelf --set-rpath '/usr/local/lib' /usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_global_deps.so # buildkit
RUN |5 NVIDIA_PYTORCH_VERSION=23.10 PYTORCH_BUILD_VERSION=2.1.0a0+32f93b1 TARGETARCH=arm64 PYVER=3.10 L4T=0 /bin/sh -c pip install --no-cache-dir -v -r /opt/pytorch/pytorch/requirements.txt # buildkit
ARG TARGETARCH
RUN /bin/sh -c if [ -z "${DALI_VERSION}" ] ; then   echo "Not Installing DALI for L4T Build." ; else   export DALI_PKG_SUFFIX="cuda${CUDA_VERSION%%.*}0"   && pip install --disable-pip-version-check --no-cache-dir                 --extra-index-url https://developer.download.nvidia.com/compute/redist                 --extra-index-url http://sqrl/dldata/pip-dali${DALI_URL_SUFFIX:-} --trusted-host sqrl         nvidia-dali-${DALI_PKG_SUFFIX}==${DALI_VERSION}; fi # buildkit
ENV COCOAPI_VERSION=2.0+nv0.7.3
RUN /bin/sh -c export COCOAPI_TAG=$(echo ${COCOAPI_VERSION} | sed 's/^.*+n//')  && pip install --disable-pip-version-check --no-cache-dir git+https://github.com/nvidia/cocoapi.git@${COCOAPI_TAG}#subdirectory=PythonAPI # buildkit
COPY singularity/ /.singularity.d/ # buildkit
RUN /bin/sh -c ( cd vision && CFLAGS="-g0" FORCE_CUDA=1 NVCC_APPEND_FLAGS="--threads 8" pip install --no-cache-dir --no-build-isolation --disable-pip-version-check . )  && ( cd vision && cmake -Bbuild -H. -GNinja -DWITH_CUDA=1 -DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` && cmake --build build --target install && rm -rf build )  && ( cd fuser && pip install -r requirements.txt &&  python setup.py install && python setup.py clean)  && ( cd apex && CFLAGS="-g0" NVCC_APPEND_FLAGS="--threads 8" pip install -v --no-build-isolation --no-cache-dir --disable-pip-version-check --config-settings "--build-option=--cpp_ext --cuda_ext --bnp --xentropy --deprecated_fused_adam --deprecated_fused_lamb --fast_multihead_attn --distributed_lamb --fast_layer_norm --transducer --distributed_adam --fmha --fast_bottleneck --nccl_p2p --peer_memory --permutation_search --focal_loss --fused_conv_bias_relu --index_mul_2d --cudnn_gbn --group_norm" . )  && ( cd data && pip install --no-build-isolation --no-cache-dir --disable-pip-version-check --no-deps -v . )  && ( cd text && export TORCHDATA_VERSION="$(python -c 'import torchdata; print(torchdata.__version__)')" && pip install --no-build-isolation --no-cache-dir --disable-pip-version-check --no-deps -v . && unset TORCHDATA_VERSION )  && ( cd pytorch/third_party/onnx && pip uninstall typing -y && CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" pip install --no-build-isolation --no-cache-dir --disable-pip-version-check . ) # buildkit
RUN /bin/sh -c pip uninstall -y pillow  && cd /tmp  && git clone https://github.com/uploadcare/pillow-simd  && cd pillow-simd  && git fetch --all --tags --prune  && git checkout tags/9.2.0  && sed -i 's/DEBUG = False/DEBUG = True/' setup.py  && patch -p1 < /opt/pytorch/pil_9.3.0_CVE-2022-45199.patch  && if [[ $TARGETARCH = "amd64" ]] ; then CC="cc -mavx" pip install --no-cache-dir --disable-pip-version-check  . ; fi  && if [[ $TARGETARCH = "arm64" ]] ; then pip install --no-cache-dir --disable-pip-version-check  . ; fi  && rm -rf ../pillow-simd # buildkit
RUN /bin/sh -c pip install --no-cache-dir --disable-pip-version-check tabulate # buildkit
RUN /bin/sh -c if [ "${L4T}" = "1" ]; then     echo "Not installing rapids for L4T build." ; else     find /rapids  -name "*-Linux.tar.gz" -exec     tar -C /usr --exclude="*.a" --exclude="bin/xgboost" --strip-components=1 -xvf {} \;  && find /rapids -name "*.whl"     ! -name "Pillow-*"     ! -name "certifi-*"     ! -name "protobuf-*" -exec     pip install --no-cache-dir {} +  && pip install --no-cache-dir networkx==2.6.3  && rm $(pip show xgboost | grep Location | awk '{print $2}')/xgboost/lib/libxgboost.so; fi # buildkit
WORKDIR /workspace
COPY NVREADME.md README.md # buildkit
COPY docker-examples docker-examples # buildkit
COPY examples examples # buildkit
COPY tutorials tutorials # buildkit
RUN /bin/sh -c chmod -R a+w . # buildkit
RUN /bin/sh -c set -x  && URL=$(VERIFY=1 /nvidia/build-scripts/installTRT.sh | sed -n "s/^.*\(http.*\)tar.*$/\1/p")tar  && FILE=$(wget -O - $URL | sed -n 's/^.*href="\(TensorRT[^"]*\)".*$/\1/p' | egrep -v "internal|safety")  && wget $URL/$FILE -O - | tar -xz  && PY=$(python -c 'import sys; print(str(sys.version_info[0])+str(sys.version_info[1]))')  && pip install TensorRT-*/python/tensorrt-*-cp$PY*.whl  && pip install TensorRT-*/graphsurgeon/graphsurgeon-*.whl  && pip install TensorRT-*/uff/uff-*.whl  && mv /usr/src/tensorrt /opt  && ln -s /opt/tensorrt /usr/src/tensorrt  && rm -r TensorRT-*  && UFF_PATH=$(pip show uff | sed -n 's/Location: \(.*\)$/\1/p')/uff  && sed -i 's/from tensorflow import GraphDef/from tensorflow.python import GraphDef/'     $UFF_PATH/converters/tensorflow/conversion_helpers.py  && chmod +x ${UFF_PATH}/bin/convert_to_uff.py  && ln -sf ${UFF_PATH}/bin/convert_to_uff.py /usr/local/bin/convert-to-uff # buildkit
ENV PATH=/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
RUN /bin/sh -c pip --version && python -c 'import sys; print(sys.platform)'     && pip install --no-cache-dir nvidia-pyindex     && if [ "${L4T}" = "1" ]; then pip install polygraphy; else       pip install --extra-index-url https://urm.nvidia.com/artifactory/api/pypi/sw-tensorrt-pypi/simple --no-cache-dir polygraphy==${POLYGRAPHY_VERSION}; fi     && pip install --extra-index-url http://sqrl/dldata/pip-simple --trusted-host sqrl --no-cache-dir pytorch-quantization==2.1.2 # buildkit
COPY torch_tensorrt/ /opt/pytorch/torch_tensorrt/ # buildkit
ARG PYVER
RUN |1 PYVER=3.10 /bin/sh -c pip install --no-cache-dir /opt/pytorch/torch_tensorrt/dist/*.whl # buildkit
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
ENV PATH=/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
RUN |1 PYVER=3.10 /bin/sh -c env MAX_JOBS=4 pip install flash-attn==2.0.4 # buildkit
RUN |1 PYVER=3.10 /bin/sh -c if [ "${L4T}" = "1" ]; then echo "Not installing Transformer Engine in iGPU container until Version variable is set"; else     pip install --no-cache-dir --no-build-isolation git+https://github.com/NVIDIA/TransformerEngine.git@release_v${TRANSFORMER_ENGINE_VERSION}; fi # buildkit
ENV TORCH_CUDNN_V8_API_ENABLED=1
ENV CUDA_MODULE_LOADING=LAZY
RUN |1 PYVER=3.10 /bin/sh -c ln -sf ${_CUDA_COMPAT_PATH}/lib.real ${_CUDA_COMPAT_PATH}/lib  && echo ${_CUDA_COMPAT_PATH}/lib > /etc/ld.so.conf.d/00-cuda-compat.conf  && ldconfig  && rm -f ${_CUDA_COMPAT_PATH}/lib # buildkit
COPY entrypoint.d/ /opt/nvidia/entrypoint.d/ # buildkit
ARG NVIDIA_BUILD_ID
ENV NVIDIA_BUILD_ID=71412639
LABEL com.nvidia.build.id=71412639
ARG NVIDIA_BUILD_REF
LABEL com.nvidia.build.ref=798008b068e6dbd0088bab08098b0fce963b87b3
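
Note that this output is only semi-correct as a Dockerfile: `#(nop)` marks metadata instructions (ARG, LABEL, CMD, etc.), `RUN |N ARG=... /bin/sh -c ...` lines carry the build args that were in scope, `# buildkit` suffixes are history annotations, and the `ADD file:<sha> in /` line is the base image's root filesystem layer, which cannot be reproduced from history alone. A rough cleanup pass might look like the following (a best-effort sketch assuming GNU sed; it does not fix the `EXPOSE map[...]` lines or strip the `|N ARG=...` prefixes):

```sh
# Strip metadata markers, shell wrappers, and buildkit annotations in place.
sed -i \
    -e 's|^/bin/sh -c #(nop) *||' \
    -e 's|^/bin/sh -c |RUN |' \
    -e 's|^RUN /bin/sh -c |RUN |' \
    -e 's| # buildkit$||' \
    Dockerfile
```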