@basoro
Created May 25, 2019 20:45
Running Proxmox behind a single IP address
I ran into the challenge of running all of my VMs and the host node behind a single public IP address. Luckily, the host is just pure Debian, and ships with iptables.
What needs to be done is essentially to run all the VMs on a private internal network. Outbound internet access is done via NAT. Inbound access is via port forwarding.
Network configuration
Here’s how it’s done:
Create a virtual interface that serves as the gateway for your VMs:
My public interface (the one with the public IP assigned) is vmbr0. I will then create an alias interface called vmbr0:0 and give it a private IP address in /etc/network/interfaces. Note that this is needed for KVM and OpenVZ bridged interfaces; venet interfaces automagically work.
auto vmbr0:0
iface vmbr0:0 inet static
address 192.168.4.1
netmask 255.255.255.0
network 192.168.4.0
broadcast 192.168.4.255
Create an iptables rule to allow outbound traffic:
There are a few ways to specify this, but the most straightforward is:
iptables -t nat -A POSTROUTING -s 192.168.4.0/24 -o vmbr0 -j MASQUERADE
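One prerequisite worth checking: the host must have IPv4 forwarding enabled, or the NAT rule won’t pass any traffic. This is a sysctl setting rather than an iptables rule; a sketch of enabling it persistently (the sysctl name is standard; the file location is Debian’s convention):

```
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
net.ipv4.ip_forward=1
```

Apply it without a reboot with sysctl -p.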
In one of your VMs, set the interface IP to something in 192.168.4.2-254, and set the default gateway to 192.168.4.1, with the subnet mask of 255.255.255.0. Feel free to adjust this as you see fit. Test pinging your public IP address, and perhaps even an external address (like 4.2.2.2). If this works, you’re on the right track.
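As an example, in a Debian-based KVM guest that configuration might look like this in the guest’s own /etc/network/interfaces (the interface name eth0 and the .2 address are assumptions; adjust to your guest):

```
auto eth0
iface eth0 inet static
    address 192.168.4.2
    netmask 255.255.255.0
    gateway 192.168.4.1
```

You’ll also want a working /etc/resolv.conf in the guest so DNS resolves.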
At this point, you have internet access from your VMs, but how do you get to them? For your OpenVZ containers, sure, you could SSH into the host node and ‘vzctl enter’ into a CTID, but that’s probably not what you want. We will need to set iptables rules to dictate which ports point to which servers.
Assuming you want VM 100 to have SSH on port 10022, and let RDP of VM 101 ‘live’ on port 10189, we can do the following:
iptables -t nat -A PREROUTING -i vmbr0 -p tcp -m tcp --dport 10022 -j DNAT --to-destination 192.168.4.100:22
iptables -t nat -A PREROUTING -i vmbr0 -p tcp -m tcp --dport 10189 -j DNAT --to-destination 192.168.4.101:3389
You can add as many of these as you’d like.
Once you have your configuration set up as you please, we will need to make it persistent. If you reboot at this point, all of your iptables rules will be cleared. To prevent this, we simply do:
iptables-save > /etc/iptables.rules
This step saves the rules to an iptables-readable file. In order to apply them upon boot, you have several options. One of the easier ones is to modify /etc/network/interfaces as such (notice the third line):
auto vmbr0
iface vmbr0 inet static
pre-up iptables-restore < /etc/iptables.rules
address pu.bl.ic.ip
netmask 255.255.255.0
...
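Another of those options is a hook script: Debian’s ifupdown runs anything executable in /etc/network/if-pre-up.d/ before bringing an interface up. A sketch (the filename is arbitrary; remember to chmod +x it):

```
#!/bin/sh
# /etc/network/if-pre-up.d/iptablesload
# Restore the saved rules before any interface comes up.
iptables-restore < /etc/iptables.rules
exit 0
```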
At this point, you now have a functioning inbound/outbound setup on your own private LAN.
Assigning public ports to containers
With multiple containers potentially running the same types of services, you can’t simply map pu.bl.ic.ip:80 -> 192.168.4.100:80 and 192.168.4.101:80; the public ports would collide, so you have to decide how to work around that. The section below details how to perform host-header switching/proxying for websites, but for other services there aren’t such elegant solutions. SIP, for example, runs on port 5060. If you have two SIP servers (perhaps one for testing, one for production), you’ll have to map each one to a distinct public port.
A port-numbering algorithm I came up with is:
(CTID mod 100) x 100 + original port number + 1000
For example, with container 105 that needs SIP:
(105 mod 100) x 100 + 5060 + 1000 = 500 + 5060 + 1000 = 6560
For SSH, port 22 on container 105:
(105 mod 100) x 100 + 22 + 1000 = 500 + 22 + 1000 = 1522
Your weights and offsets might need tweaking for your particular purposes; this is just what works for me.
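The arithmetic is easy to script on the host; a minimal sketch (the pubport function name is my own invention):

```shell
#!/bin/sh
# Public port = (CTID mod 100) * 100 + original port + 1000
pubport() {
    ctid=$1
    port=$2
    echo $(( (ctid % 100) * 100 + port + 1000 ))
}

pubport 105 5060   # SIP on CT 105 -> prints 6560
pubport 105 22     # SSH on CT 105 -> prints 1522
```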
Supporting multiple websites
Now, what if you want to host multiple websites across multiple containers? One easy way is port forwarding so that, e.g., domain.com:1180 goes to container 101, domain.com:1280 goes to container 102, etc., but that’s ugly. We can instead set up a proxy that takes ALL requests on port 80 and routes them to their appropriate destinations. Let’s get started.
In this example, we’re going to have a dedicated container for nginx. I also have a dedicated container for a MySQL instance that’s shared for all of my sites. This allows the website containers to be very lightweight.
First, create a container using the OS of your choice, and enter it. I recommend using one of the minimal templates provided by openvz.org. View this post for information on how to install templates and create containers.
Here, we’ll be using the Ubuntu 14.04 template. Once you’re in, you’re now ready to install nginx.
Add official nginx repo (Shell)
echo "deb http://nginx.org/packages/ubuntu/ `lsb_release -cs` nginx" | tee /etc/apt/sources.list.d/nginx.list
curl -L http://nginx.org/keys/nginx_signing.key | apt-key add -
apt-get update && apt-get install nginx
You’ll now have a default site, which you’ll probably want to change. This site is served for any request whose hostname does NOT match a site name nginx knows about (e.g., if a request for hello.ameir.net comes in but nginx only serves www.ameir.net, the default site shows up). Either change the default site, or delete it so that another site becomes the default; nginx falls back to the first config file it loads, in alphabetical order. You can prefix a config filename with something like 000- to ensure it’s the default. Alternatively, mark it explicitly in the config file with listen 80 default_server; .
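As a sketch of that last option, a minimal catch-all server block (returning nginx’s non-standard status 444, which simply closes the connection, is one common choice for unmatched hosts):

```
server {
    listen 80 default_server;
    server_name _;
    return 444;
}
```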
Now, for each site you want to proxy for, you’ll need a config file, as follows:
nginx config file (/etc/nginx/conf.d/ameir.net.conf)
server {
    listen 80;
    server_name ameir.net *.ameir.net;

    location / {
        proxy_pass http://192.168.4.104;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;
        proxy_buffers 4 32k;
        client_max_body_size 8m;
        client_body_buffer_size 128k;
    }
}
Once you’ve created all of the config files, as shown above, simply restart nginx with service nginx restart.
Now, assuming your nginx container is container 101 with IP address 192.168.4.101, we can allow worldwide access as such:
Add and save iptables entry (Shell)
iptables -t nat -A PREROUTING -i vmbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.4.101:80
iptables-save > /etc/iptables.rules
Now, once you point DNS, you should be good to go. If you’d like to test beforehand, you can update your hosts file, or simply use curl to see if things look as expected:
curl test (Shell)
curl -i -L -H "Host: ameir.net" pu.bl.ic.ip
I hope that helps!
@my-digital-plug

You are dope. Just came across your stuff through this git repo and you have many others that address a number of issues I am working through or will be soon.

@uafaruqi

Just what I needed. This is really helpful!

@sgtpepperaut

thanks for writing this down!

@Actpohomoc

Many thanks. I have to try your setup.

@damanti-me

Very helpful, thank you!

@xXDasGoGXx

Hello! This worked awesome for me. However, I had to reformat etc and for the life of me I can not get it working again! The Proxmox shell can PING the newly created Container but the container can not ping its own new gateway or the Proxmox ip! Literally, everything I did verbatim.

@pedstm

pedstm commented Apr 21, 2023

@xXDasGoGXx check @caraar12345's fork. his changes made it work for me :)

@kuklofon

Thanks!
