Raspberry Development Server

Set up your personal development server in your network with a Raspberry Pi!

This note explains how to set up your own Raspberry Pi as a personal development server on your own Wi-Fi network. It does not have to be a Raspberry Pi; you can do the same with any server / cloud instance (EC2, Compute Engine, Droplets, and the like).

As this setup is meant for development, it is not suitable for production environments due to performance, security, and efficiency concerns. Please adjust accordingly if you want to use it in production.

Specifications

Personally, I am using a Raspberry Pi 4 equipped with 8 GB of RAM and a 64 GB MicroSD card as its storage. It has the following software specifications:

$ hostnamectl
Static hostname: Nicholas-Raspberry-Pi-4
Icon name: computer
Machine ID: ***
Boot ID: ***
Operating System: Debian GNU/Linux 11 (bullseye)
Kernel: Linux 5.15.32-v8+
Architecture: arm64

$ lsb_release -a
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye

Yes, I replaced Raspberry Pi OS (Raspbian) with the standard Debian distribution.

Applications

Here are the applications / services / software packages that will be installed on this infrastructure; a quick way to verify the port mappings once everything is running is sketched after the list.

  • dashboard, a full-stack web application to manage IoT devices and links. Exposed to the internal Docker network on port 5000 and served through nginx on port 80.
  • dynamodb, a local version of DynamoDB by AWS. Mapped to port 8000.
  • gcs, a local / fake Google Cloud Storage server for testing GCS integrations. Mapped to port 4443.
  • mariadb, a community-developed, open-source fork of MySQL. Mapped to port 3306.
  • minio, an S3-compatible storage deployed on dedicated ports. Mapped to port 9000 (MinIO) and 9001 (MinIO Console).
  • mongodb, a document-based database with optional schemas. Mapped to port 27017.
  • nginx, a simple, high-performance, and customizable reverse proxy / server. Mapped to port 80.
  • postgres, an alternative to MariaDB. Mapped to port 5432.
  • redis, an in-memory data store used as a cache for high-performance operations. Mapped to port 6379.
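
Once the stack is running (see the Infrastructure Setup section below), a quick way to confirm these port mappings is to list the running containers and the ports the host is listening on. This is a minimal sketch; run it on the Raspberry Pi itself:

# List running containers together with their published ports.
docker ps --format "table {{.Names}}\t{{.Ports}}"

# Check which ports the host is actually listening on.
ss -tlnp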

Initial Setup

  • Use Raspberry Pi Imager to install your preferred operating system. I recommend Debian or Raspberry Pi OS (Raspbian).
  • After you have successfully launched your Raspberry Pi, connect it to your network (Wi-Fi or anything). If you use Raspberry Pi Imager, this step will be done automatically, as you will be prompted to choose your network right before the image is burned into your Raspberry Pi's SD Card.
  • SSH from your machine into your instance / Raspberry Pi.
  • Create a new user with superuser privileges.
# Create user.
adduser mashu

# Add to sudo group.
usermod -aG sudo mashu

# Log in as the newly created user, then perform an update with sudo to ensure that it works.
su - mashu
sudo apt update
  • Install Docker for Debian. Follow the steps written there. In case you want a script that conforms to their recommended installation method, refer to the script below:
# Update packages.
sudo apt update

# Install required Docker dependencies.
sudo apt install ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up repository.
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker engine.
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin

# After this process, you may see an error in a `systemd` process. If this happens, just
# reboot your instance / Raspberry Pi and you'll be fine.
sudo reboot

# Verify Docker is installed.
sudo docker run hello-world
  • Follow Docker's Linux Postinstall process to ensure everything is working properly:
# Create UNIX group.
sudo groupadd docker

# Add current user to that group.
sudo usermod -aG docker $USER

# It may be necessary to log out and back in for the changes to take effect. Alternatively, you can
# run the following command to activate the new group membership in the current shell.
newgrp docker

# Verify that you can run Docker without 'sudo'.
docker run hello-world

# Activate Docker and Containerd on startup / boot.
sudo systemctl enable docker
sudo systemctl enable containerd
  • You are done. Before moving on to provisioning the infrastructure, you can verify the installation with the quick sketch below.
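
A minimal verification sketch, assuming the steps above completed without errors; it only confirms that the Docker Engine and the Compose plugin are installed and that your user can reach the daemon without sudo:

# Print the installed Docker Engine and Compose plugin versions.
docker --version
docker compose version

# Confirm the current user is a member of the 'docker' group.
groups

# Run a throwaway container without sudo to confirm daemon access.
docker run --rm hello-world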

Infrastructure Setup

The infrastructure is divided into three files: docker-compose.yml, nginx.conf, and script.sh. Create those three files by typing:

mkdir -p ~/Projects/devserver
cd ~/Projects/devserver
touch docker-compose.yml script.sh nginx.conf

After that, put the following code into the respective files.

File: docker-compose.yml

docker lets you launch each part of the infrastructure in its own container, isolated from the others. This frees your server / Raspberry Pi from having to install each service's dependencies directly; Docker takes care of the rest.

version: '3.9'

services:
  dashboard:
    container_name: dashboard
    build: https://github.com/lauslim12/raspberry-iot-dashboard.git#main
    restart: unless-stopped
    expose:
      - 5000
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
    healthcheck:
      test: ['CMD-SHELL', 'curl -f http://localhost:5000 || exit 1']
      interval: 30s
      timeout: 20s
      retries: 3

  dynamodb:
    container_name: dynamodb
    # Shell does not work in version 1.18, see: https://stackoverflow.com/questions/70535330/dynamodb-local-web-shell-does-not-load
    image: amazon/dynamodb-local:1.16.0
    restart: unless-stopped
    ports:
      - 8000:8000
    volumes:
      - ./dev-data/dynamodb-data:/home/dynamodblocal/data
    command: '-jar DynamoDBLocal.jar -sharedDb -dbPath /home/dynamodblocal/data/'
    healthcheck:
      test: ['CMD-SHELL', 'curl -f http://localhost:8000/shell/ || exit 1']
      interval: 30s
      timeout: 20s
      retries: 3

  gcs:
    container_name: googlecloudstorage
    image: fsouza/fake-gcs-server
    restart: unless-stopped
    command: -scheme http
    ports:
      - 4443:4443
    volumes:
      - ./dev-data/gcs-data:/storage
    healthcheck:
      test:
        [
          'CMD-SHELL',
          'wget --no-verbose --tries=1 --spider localhost:4443/_internal/healthcheck || exit 1',
        ]
      interval: 30s
      timeout: 20s
      retries: 3

  mariadb:
    container_name: mariadb
    image: mariadb:10.8.3
    restart: unless-stopped
    ports:
      - 3306:3306
    volumes:
      - ./dev-data/mariadb-data:/var/lib/mysql
    environment:
      MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: true
      MARIADB_MYSQL_LOCALHOST_USER: true
    healthcheck:
      test:
        [
          'CMD',
          '/usr/local/bin/healthcheck.sh',
          '--su-mysql',
          '--connect',
          '--innodb_initialized',
        ]
      interval: 30s
      timeout: 20s
      retries: 3

  minio:
    container_name: minio
    image: quay.io/minio/minio:RELEASE.2022-06-10T16-59-15Z
    restart: unless-stopped
    ports:
      - 9000:9000
      - 9001:9001
    volumes:
      - ./dev-data/minio-data:/data
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    command: server ./data --console-address ":9001"
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live']
      interval: 30s
      timeout: 20s
      retries: 3

  mongodb:
    container_name: mongodb
    image: mongo:4.4.14
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - ./dev-data/mongodb-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    healthcheck:
      test: echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u root -p example --quiet | grep 1
      interval: 30s
      timeout: 20s
      retries: 3

  nginx:
    container_name: nginx
    image: nginx:1.23.0-alpine
    hostname: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - 80:80
    healthcheck:
      test: ['CMD-SHELL', 'wget -O /dev/null http://localhost || exit 1']
      interval: 30s
      timeout: 20s
      retries: 3

  postgres:
    container_name: postgres
    image: postgres:14.3
    restart: unless-stopped
    ports:
      - 5432:5432
    volumes:
      - ./dev-data/postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready']
      interval: 30s
      timeout: 20s
      retries: 3

  redis:
    container_name: redis
    image: redis:7.0.1
    restart: unless-stopped
    ports:
      - 6379:6379
    volumes:
      - ./dev-data/redis-data:/data
    healthcheck:
      test: ['CMD', 'redis-cli', '--raw', 'incr', 'ping']
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  dynamodb-data:
  gcs-data:
  mariadb-data:
  minio-data:
  mongodb-data:
  postgres-data:
  redis-data:
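
With the Compose file in place, you can bring the stack up and poke at a few of the services directly. The following is a minimal sketch, assuming the defaults above and that you run the commands on the Raspberry Pi itself:

# Start everything in the background and check the state of each container.
docker compose up -d
docker compose ps

# PostgreSQL (trust authentication, as configured above).
docker exec -it postgres psql -U postgres -c 'SELECT version();'

# Redis.
docker exec -it redis redis-cli ping

# MinIO liveness endpoint.
curl -f http://localhost:9000/minio/health/live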

File: nginx.conf

nginx.conf configures the nginx container, which reverse-proxies requests on port 80 to the dashboard. You can adjust it however you like.

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
  worker_connections 4096;
  multi_accept on;
}

http {
  ##
  # Basic Settings
  ##
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;

  ##
  # MIME Type
  ##
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  ##
  # Logs and its format
  ##
  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log /var/log/nginx/access.log main;

  ##
  # GZIP Compression (HTTP 1.0 and HTTP 1.1)
  ##
  gzip on;
  gzip_vary on; # cache both the gzipped and regular version of a resource
  gzip_proxied any; # ensures all proxied request responses are gzipped
  gzip_comp_level 5; # compress up to level 5 for performance

  # gzip_buffers 16 8k;

  gzip_http_version 1.1; # only compress responses to HTTP/1.1 (and later) requests
  gzip_min_length 256; # files smaller than 256 bytes would not be gzipped to prevent overhead
  gzip_types
    application/atom+xml
    application/javascript
    application/json
    application/rss+xml
    application/vnd.ms-fontobject
    application/x-font-ttf
    application/x-web-app-manifest+json
    application/xhtml+xml
    application/xml
    font/opentype
    image/svg+xml
    image/x-icon
    text/css
    text/plain
    text/x-component
    text/javascript
    text/xml;

  ##
  # Upstreams / port forwarding
  ##
  upstream dashboard {
    server dashboard:5000;
  }

  ##
  # Listen to port 80
  ##
  server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    # Disable proxy buffering for performance
    proxy_buffering off;
    proxy_redirect off;
    proxy_request_buffering off;

    location / {
      # Map to our dashboard
      proxy_pass http://dashboard;

      # Important headers, do not change these
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;

      # Default is HTTP/1, keep alive is only enabled in HTTP/1.1
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "";
    }
  }
}
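
To check that the reverse proxy is wired up, hit port 80 once the stack is running. A small sketch; 192.168.1.50 is only a placeholder for your Raspberry Pi's IP address or hostname:

# On the Raspberry Pi itself.
curl -I http://localhost/

# From another machine on the same network (replace the address with your Pi's).
curl -I http://192.168.1.50/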

File: script.sh

script.sh is a collection of functions to help you manage your development server conveniently. Please use sh script.sh help to get started.

#!/bin/bash

# Accesses a single Docker container interactively with the default shell. We fetch the last argument
# from the shell input.
access() {
  for last; do true; done
  docker exec -it "$last" sh
}

# Cleans / resets your Docker environment.
clean() {
  docker rm -f $(docker ps -a -q)
  docker volume rm $(docker volume ls -q)
}

# Checks for free space in your Docker environment.
dockerfree() {
  docker system df
}

# Prints out the Help screen.
help() {
  echo "Usage: sh script.sh [ARG]"
  echo
  echo "Available arguments:"
  echo "access [CONTAINER_NAME/ID] - to access a running Docker container"
  echo "clean - to clean your environment"
  echo "dockerfree - to check the free spaces in your Docker environment"
  echo "help - to print out the help screen"
  echo "images - to list your Docker images"
  echo "ram - to check your disk free space"
  echo "remove [IMAGE_NAME/ID] - to remove Docker images from your machine"
  echo "processes - to list out all Docker processes in verbose way"
  echo "start - to start out this development server"
  echo "status - to list out Docker process in a simple format"
  echo "stop - to stop this development server"
  echo "teardown - to stop this development server, and clears the Docker volumes"
}

# Get all images in Docker.
images() {
  docker image ls
}

# Checks the processes of the development server.
processes() {
  docker ps
}

# Checks the free space in RAM.
ram() {
  free -h
}

# Removes Docker images from the environment. We take all of the arguments
# except the first one (the 'remove' subcommand itself).
remove() {
  first=$1
  shift

  for i
  do docker image rm "$i"
  done
}

# Starts the development server.
start() {
  docker compose up -d
}

# Checks the status of the development server, in layman's terms.
status() {
  docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
}

# Stops the development server.
stop() {
  docker compose down
}

# Tears down the development server (stops the server and clears the Docker volumes).
teardown() {
  docker compose down -v
}

# Main function to deliver all of the functionalities.
main() {
  if [ "$#" -eq 0 ]; then
    echo "You need to specify a single input variable, for example: 'sh script.sh ram'!"
    exit 1
  fi

  if [ "$1" = 'access' ]; then
    access "$@"
  elif [ "$1" = 'clean' ]; then
    clean
  elif [ "$1" = 'dockerfree' ]; then
    dockerfree
  elif [ "$1" = 'help' ]; then
    help
  elif [ "$1" = 'images' ]; then
    images
  elif [ "$1" = 'processes' ]; then
    processes
  elif [ "$1" = 'ram' ]; then
    ram
  elif [ "$1" = 'remove' ]; then
    remove "$@"
  elif [ "$1" = 'start' ]; then
    start
  elif [ "$1" = 'status' ]; then
    status
  elif [ "$1" = 'stop' ]; then
    stop
  elif [ "$1" = 'teardown' ]; then
    teardown
  else
    echo "Your argument is invalid! Please pick one argument that is available in 'sh script.sh help'!"
    exit 1
  fi
}

# Pass the input argument in the main function.
main "$@"
exit 0
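
A few usage examples, using the container names defined in docker-compose.yml above:

# Start the whole stack and check its status.
sh script.sh start
sh script.sh status

# Open an interactive shell inside the redis container.
sh script.sh access redis

# Stop everything; use 'teardown' instead of 'stop' to also remove the volumes.
sh script.sh stop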

File: devserver.service

The next step is to create a systemd service on your machine so that your development server survives restarts.

  • Type sudo nano /etc/systemd/system/devserver.service. Don't forget to adjust your User, Group, and WorkingDirectory.
[Unit]
Description=devserver - a personal development server with Docker
Documentation=https://gist.github.com/lauslim12/d4d459904ab37b668d085f4fc4671eda
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
User=mashu
Group=docker
WorkingDirectory=/home/mashu/Projects/devserver
ExecStartPre=/bin/sh script.sh stop
ExecStart=/bin/sh script.sh start
ExecStop=/bin/sh script.sh stop

[Install]
WantedBy=multi-user.target
  • Refresh systemd, enable the service at boot, and start it now so that your development server survives restarts, as shown below.
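
Spelled out as commands, with a quick verification step at the end:

# Reload unit files, enable the service at boot, and start it now.
sudo systemctl daemon-reload
sudo systemctl enable devserver
sudo systemctl start devserver

# Verify that the unit ran successfully and that the containers are up.
systemctl status devserver
docker ps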

Finishing Up

You may want to do the following after you have successfully provisioned your development server:

  • Adjust your time zone. Use timedatectl to check the current date, time, and time zone; it is recommended that your server be in the same time zone as you (see the sketch after this list).
  • Copy dotfiles for ease of use. A recommended approach is to follow my provisions repository.
  • Install essential packages, such as git and build-essential.
  • Keep software packages updated: sudo apt update, sudo apt upgrade, sudo apt autoremove, sudo apt autoclean, and sudo apt clean.
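
For example (Asia/Tokyo is only a placeholder; pick your own time zone):

# Check the current date, time, and time zone, then set your own.
timedatectl
sudo timedatectl set-timezone Asia/Tokyo

# Install essential packages.
sudo apt install git build-essential

# Keep software packages updated.
sudo apt update && sudo apt upgrade
sudo apt autoremove && sudo apt autoclean && sudo apt clean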