@rda0
Last active February 25, 2019 17:47
Script to create tc filters and classes
#!/bin/bash

dev="eth0"
ip_local="10.8.0.0"

cut_ip_local() {
    if [ -n "$ip_local" ]; then
        ip_local_byte1=$(echo "$ip_local" | cut -d. -f1)
        ip_local_byte2=$(echo "$ip_local" | cut -d. -f2)
    fi
}

create_identifiers() {
    if [ -n "$1" ]; then
        ip_byte3=$(echo "$1" | cut -d. -f3)
        handle=$(printf '%x' "$ip_byte3")
        ip_byte4=$(echo "$1" | cut -d. -f4)
        hash=$(printf '%x' "$ip_byte4")
        classid=$(printf '%x' $((256 * ip_byte3 + ip_byte4)))
    fi
}
start_tc() {
    cut_ip_local
    tc qdisc add dev "$dev" root handle 1: htb
    tc filter add dev "$dev" parent 1:0 prio 1 protocol ip u32
    tc filter add dev "$dev" parent 1:0 prio 1 handle 2: protocol ip u32 divisor 256
    tc filter add dev "$dev" parent 1:0 prio 1 protocol ip u32 ht 800:: \
        match ip dst "${ip_local_byte1}.${ip_local_byte2}.0.0/16" \
        hashkey mask 0x000000ff at 16 link 2:
    modprobe ifb numifbs=1
    ip link set dev ifb0 up
    tc qdisc add dev "$dev" handle ffff: ingress
    tc filter add dev "$dev" parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
    tc qdisc add dev ifb0 root handle 1: htb
    tc filter add dev ifb0 parent 1:0 prio 1 protocol ip u32
    tc filter add dev ifb0 parent 1:0 prio 1 handle 3: protocol ip u32 divisor 256
    tc filter add dev ifb0 parent 1:0 prio 1 protocol ip u32 ht 800:: \
        match ip src "${ip_local_byte1}.${ip_local_byte2}.0.0/16" \
        hashkey mask 0x000000ff at 12 link 3:
}
stop_tc() {
    tc qdisc del dev "$dev" root
    tc qdisc del dev "$dev" handle ffff: ingress
    tc qdisc del dev ifb0 root
    ip link set dev ifb0 down
    rmmod ifb
}
add_ip() {
    user="$1"
    ip="$2"
    create_identifiers "$ip"
    if [ "$user" == "admin" ]; then
        downrate=10mbit
        uprate=10mbit
    elif [ "$user" == "client" ]; then
        downrate=1200kbit
        uprate=1200kbit
    else
        echo "error: unknown user" >&2
        exit 1
    fi
    # Limit traffic from VPN server to client (egress on $dev)
    tc class add dev "$dev" parent 1: classid 1:"$classid" htb rate "$downrate"
    tc filter add dev "$dev" parent 1:0 protocol ip prio 1 \
        handle 2:"${hash}":"${handle}" \
        u32 ht 2:"${hash}": match ip dst "$ip"/32 flowid 1:"$classid"
    # Limit traffic from client to VPN server (ingress, redirected to ifb0)
    tc class add dev ifb0 parent 1: classid 1:"$classid" htb rate "$uprate"
    tc filter add dev ifb0 parent 1:0 protocol ip prio 1 \
        handle 3:"${hash}":"${handle}" \
        u32 ht 3:"${hash}": match ip src "$ip"/32 flowid 1:"$classid"
}
case "$1" in
    start)
        start_tc
        ;;
    stop)
        stop_tc
        ;;
    add)
        add_ip "$2" "$3"
        ;;
    *)
        echo "$0: unknown operation [$1]" >&2
        exit 1
        ;;
esac

exit 0
rda0 commented Feb 7, 2019

As long as the OpenVPN setup itself is working (certificates, IP addresses, and so on), this script can be used to define the tc classes and filters for all IP addresses in use by OpenVPN.

The script has 3 functions:

- start_tc()
- stop_tc()
- add_ip()

Assuming you name the script `tc.sh`, you can invoke it with
the following parameters.

This prepares TC to add classes and filters:

  ./tc.sh start

This removes everything (all classes and filters):

  ./tc.sh stop

After TC is started you can add one class and one filter for
one IP address like that:

  ./tc.sh add <user> <ip>

where `<user>` is `admin` or `client`.

Let's assume you have configured the following in OpenVPN:

Admins using the following IP addresses:

10.8.0.2
10.8.0.3

Clients using the following IP addresses:

10.8.0.7
10.8.0.8
10.8.0.9


First start TC:

  ./tc.sh start

Then add all the classes and filters for the IP addresses:

  ./tc.sh add admin 10.8.0.2
  ./tc.sh add admin 10.8.0.3
  ./tc.sh add client 10.8.0.7
  ./tc.sh add client 10.8.0.8
  ./tc.sh add client 10.8.0.9

Should you later need to change something, just start from
scratch by:

  ./tc.sh stop
  ./tc.sh start
  ./tc.sh add admin 10.8.0.2
  ...
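To avoid retyping all the `add` commands after a restart, you could keep the user/IP pairs in a file and replay them. This is a hedged sketch, not part of the gist: `users.txt` is a hypothetical file with lines like `admin 10.8.0.2`.

```shell
# Replay rules from a hypothetical users.txt after ./tc.sh start.
# The echo only prints the commands; drop it to actually run them.
printf '%s\n' 'admin 10.8.0.2' 'client 10.8.0.7' > users.txt
while read -r user ip; do
    echo ./tc.sh add "$user" "$ip"
done < users.txt
```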

rda0 commented Feb 8, 2019

tc's traffic shaping works using a tree structure of classes. To inspect
the current setup, run the commands below (the 2nd one shows the
classes):

tc -s qdisc show dev eth0
tc class show dev eth0
tc -p filter show dev eth0

and have a look at the classids. For example:

classid 1:2           # client1 10.8.0.2 1200kbit
classid 1:3           # client2 10.8.0.3 1200kbit
classid 1:402         # admin1 10.8.4.2  10mbit

This would be the result of:

./tc.sh stop
./tc.sh start
./tc.sh add admin 10.8.4.2
./tc.sh add client 10.8.0.2
./tc.sh add client 10.8.0.3
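The classid follows directly from the last two octets of the IP, as computed in the script's `create_identifiers()`: classid = hex(256 * byte3 + byte4). For example, 10.8.4.2:

```shell
# classid = hex(256*byte3 + byte4), as in create_identifiers() in tc.sh
ip=10.8.4.2
byte3=$(echo "$ip" | cut -d. -f3)          # 4
byte4=$(echo "$ip" | cut -d. -f4)          # 2
printf '1:%x\n' $((256 * byte3 + byte4))   # prints 1:402
```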

Then you can list the classes as follows (these contain the rate limits
for all the admins and clients):

tc class show dev eth0

class htb 1:2 root prio 0 rate 1200Kbit ceil 1200Kbit burst 1599b cburst 1599b
class htb 1:402 root prio 0 rate 10Mbit ceil 10Mbit burst 1600b cburst 1600b
class htb 1:3 root prio 0 rate 1200Kbit ceil 1200Kbit burst 1599b cburst 1599b

Or the filters (these match the traffic by IP address and assign it to
the classes shown above; in the filter output the class is called flowid):

tc -p filter show dev eth0

filter parent 1: protocol ip pref 1 u32
filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
filter parent 1: protocol ip pref 1 u32 fh 2:2 key ht 2 bkt 2 flowid 1:2
  match IP dst 10.8.0.2/32
filter parent 1: protocol ip pref 1 u32 fh 2:2:4 order 4 key ht 2 bkt 2 flowid 1:402
  match IP dst 10.8.4.2/32
filter parent 1: protocol ip pref 1 u32 fh 2:3 key ht 2 bkt 3 flowid 1:3
  match IP dst 10.8.0.3/32
filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
  match IP dst 10.8.0.0/16
    hash mask 000000ff at 16
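The last filter is the hash table link: the mask 0x000000ff at offset 16 extracts the last octet of the destination IP (offset 16 is where the destination address starts in the IPv4 header) and uses it as the bucket index. For example, 10.8.0.3 lands in bucket 3, which matches the `bkt 3` entry above:

```shell
# The u32 hashkey (mask 0x000000ff at 16) selects the last octet of the
# destination IP as the hash bucket; tc prints the bucket in hex.
ip=10.8.0.3
last_octet=$(echo "$ip" | cut -d. -f4)
printf 'bkt %x\n' "$last_octet"   # prints: bkt 3
```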

The classes are where the traffic shaping magic happens. Each class has a
max rate limit and a small bucket for bursts. A class is responsible for
shaping one flow, which can be traffic from a single source or from
multiple sources. In our case each class shapes traffic from one IP
address, and all classes are on the same level in the tree, so they are
treated equally:

                     1:     root qdisc
                      |
                     1:1    child class
                   /  |  \
                  /   |   \
                 /    |    \
                /     |     \
             1:10   1:11   1:12   leaf classes

This should make it clear that we need one class for each IP address,
because each IP address should have its own rate limit.

However, this does not mean you have to type 5000 commands to create
these rules. Just use a shell for loop.

First make sure the prips command is installed; with prips you can
generate IPs from a range:

apt install prips

Example (just print the IPs to the screen):

prips 10.8.1.2 10.8.1.5
10.8.1.2
10.8.1.3
10.8.1.4
10.8.1.5

Now you can generate all the tc classes using 2 commands, for example:

for ip in $(prips 10.8.1.2 10.8.1.5); do ./tc.sh add admin $ip; done
for ip in $(prips 10.8.2.2 10.8.3.253); do ./tc.sh add client $ip; done
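If prips is not available, a plain bash loop can generate the same list for a range inside one /24. This is a hedged sketch that assumes the first three octets are fixed:

```shell
# Expand 10.8.1.2 - 10.8.1.5 without prips (last octet only)
prefix=10.8.1
for last in $(seq 2 5); do
    echo "${prefix}.${last}"
done
```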


I recommend reading the LARTC guide (chapter 9):
https://lartc.org/howto/lartc.qdisc.html

If you have a look at 9.5. Classful Queueing Disciplines:
https://lartc.org/howto/lartc.qdisc.classful.html
you can see that you could build more advanced tree structures than just
a flat hierarchy. For example:

                     1:     root qdisc
                      |
                     1:1    child class
                   /  |  \
                  /   |   \
                 /    |    \
                /     |     \
             1:10   1:11   1:12   child classes
               |      |      |
               |     11:     |    leaf class
               |             |
              10:           12:   qdisc
             /   \         /   \
          10:1   10:2   12:1   12:2   leaf classes

So if your interface `eth0` has 1gbit available, you could define the
following child classes:

1:10  is for all admins   limited at 950mbit guaranteed at 600mbit
1:12  is for all clients  limited at 950mbit guaranteed at 450mbit

leaf classes:

10:1  admin1   limited at 950mbit guaranteed at 10mbit
10:2  admin2   limited at 950mbit guaranteed at 10mbit
12:1  client1  limited at 200mbit guaranteed at 1200kbit
12:2  client2  limited at 200mbit guaranteed at 1200kbit

and so on...

If only the 2 admins are generating traffic, they will be able to use
the full available transfer rate; should both try to saturate the link,
the rate is distributed equally between them.

Should any clients start generating traffic, the admins' common limit
would be reduced down towards the guaranteed 600mbit, depending on how
much traffic is used by all clients.
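The hierarchy above could be sketched with plain tc commands. This is a hedged sketch, not part of the gist script: it nests HTB classes directly instead of attaching separate child qdiscs, since HTB borrowing only works between classes within one htb tree, and the leaf classids `1:101`/`1:121` are made up.

```shell
# Hedged sketch of the two-level hierarchy described above
# (requires root and a real eth0)
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1:  classid 1:1  htb rate 1gbit
# child classes: guaranteed bandwidth (rate) vs. upper limit (ceil)
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 600mbit ceil 950mbit   # admins
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 450mbit ceil 950mbit   # clients
# leaf classes: one per user, borrowing unused bandwidth from the parent
tc class add dev eth0 parent 1:10 classid 1:101 htb rate 10mbit   ceil 950mbit  # admin1
tc class add dev eth0 parent 1:12 classid 1:121 htb rate 1200kbit ceil 200mbit  # client1
```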


There are also other projects like FireQOS or tcng that try to simplify
all that complicated tc stuff with simple configuration files:
https://firehol.org/
