
@TaylorBurnham
Created July 8, 2023 15:49

StarNet2 + PixInsight + GPU (CUDA) Acceleration on Ubuntu 22.04

These are the steps I took to enable GPU/CUDA acceleration for StarNet2 in my PixInsight installation. They include installing a parallel copy of the CUDA 11.x toolkit to satisfy TensorFlow 2.11 and PixInsight's dependencies, but I did not install a second version of libcudnn8 since the version I already had worked fine.

Prerequisites

  • Ubuntu 22.04.2 / 5.19.0-46-generic
  • NVIDIA GeForce RTX 2070
    • nvidia-driver-535 - 535.54.03-0ubuntu1
    • cuda-drivers - 530.30.02-1
    • libcudnn8 - 8.9.2.26-1+cuda12.1
  • PixInsight x64 v1.8.9-1
    • Installed to /opt/PixInsight
    • StarNet2 v2.1.0 StarNet2_linux_2.1.0_tf_x64_install.zip
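
Before making any changes, it's worth confirming what your own system already has. A minimal check, assuming Ubuntu's apt packaging (adjust the package names if yours differ):

    # Report the GPU name and driver version the NVIDIA driver sees
    nvidia-smi --query-gpu=name,driver_version --format=csv
    # List the installed driver, CUDA driver, and cuDNN packages
    dpkg -l | grep -E 'nvidia-driver|cuda-drivers|libcudnn8'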

Steps

  1. Download and install StarNet2 as normal.

  2. Back up the existing TensorFlow installation.

    sudo mkdir /opt/PixInsight/.backup
    cd /opt/PixInsight
    
    sudo mv -v bin/lib/libtensorflow* /opt/PixInsight/.backup
    sudo mv -v include/tensorflow .backup/
  3. Download TensorFlow 2.11 and unpack it under /usr/local.

    cd $(mktemp -d) && wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-linux-x86_64-2.11.0.tar.gz
    sudo mkdir /usr/local/libtensorflow-gpu-linux-x86_64-2.11.0/
    sudo tar xfvz libtensorflow-gpu-linux-x86_64-2.11.0.tar.gz -C /usr/local/libtensorflow-gpu-linux-x86_64-2.11.0/
  4. Download CUDA v11.4.4 and install the toolkit alongside any existing installation (a quick check of the new toolkit is sketched after this step list).

    wget https://developer.download.nvidia.com/compute/cuda/11.4.4/local_installers/cuda_11.4.4_470.82.01_linux.run
    sudo sh cuda_11.4.4_470.82.01_linux.run --silent --toolkit --toolkitpath=/usr/local/cuda-11.4.4
  5. Modify PixInsight.sh to use these search paths by adding the following lines below the line LD_LIBRARY_PATH=$dirname/lib:$dirname (a sketch for verifying that the libraries resolve follows this step list):

    TF_LIBRARY_PATH="/usr/local/libtensorflow-gpu-linux-x86_64-2.11.0/lib"
    CD_LIBRARY_PATH="/usr/local/cuda-11.4.4/lib64"
    
    if [ -d "${TF_LIBRARY_PATH}" ]; then
      echo "Adding ${TF_LIBRARY_PATH}"
      LD_LIBRARY_PATH="${TF_LIBRARY_PATH}:${LD_LIBRARY_PATH}"
    fi
    if [ -d "${CD_LIBRARY_PATH}" ]; then
      echo "Adding ${CD_LIBRARY_PATH}"
      LD_LIBRARY_PATH="${CD_LIBRARY_PATH}:${LD_LIBRARY_PATH}"
    fi
  6. Run PixInsight, open an image, run StarNet2, then monitor GPU utilization (see the monitoring sketch after this step list). If you launched PixInsight from the command line, you should see a message like this, but with your GPU name:

    2023-07-08 11:35:01.808834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 5810 MB memory: -> device: 0, name: NVIDIA GeForce RTX 2070, pci bus id: 0000:09:00.0, compute capability: 7.5
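
For step 4, a quick way to confirm the parallel toolkit landed where expected is to query its nvcc directly. This sketch assumes the --toolkitpath shown above was used unchanged:

    # The parallel toolkit should report CUDA release 11.4
    /usr/local/cuda-11.4.4/bin/nvcc --version
    # The runtime libraries PixInsight will pick up live here
    ls /usr/local/cuda-11.4.4/lib64 | head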
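
For steps 3 and 5, you can check outside of PixInsight that the TensorFlow 2.11 shared library resolves against the CUDA 11.4 libraries rather than something else on the system. This is only a sketch; it assumes the tarball unpacked a lib/libtensorflow.so.2, so adjust the filename if yours differs:

    # Mimic the search-path change made in PixInsight.sh
    export LD_LIBRARY_PATH="/usr/local/libtensorflow-gpu-linux-x86_64-2.11.0/lib:/usr/local/cuda-11.4.4/lib64:${LD_LIBRARY_PATH}"
    # Any line ending in "not found" means a dependency is still missing
    ldd /usr/local/libtensorflow-gpu-linux-x86_64-2.11.0/lib/libtensorflow.so.2 | grep -E 'cuda|cudnn|not found'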
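
For step 6, a second terminal is enough to watch utilization while StarNet2 runs; either of these works:

    # Refresh the standard nvidia-smi view every second
    watch -n 1 nvidia-smi
    # Or stream compact per-second utilization and memory stats
    nvidia-smi dmon -s um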

