HFSC - linux traffic shaping's best kept secret
#!/bin/bash
# As the "bufferbloat" folks have recently re-discovered and/or more widely
# publicized, congestion avoidance algorithms (such as those found in TCP) do
# a great job of allowing network endpoints to negotiate transfer rates that
# maximize a link's bandwidth usage without unduly penalizing any particular
# stream. This allows bulk transfer streams to use the maximum available
# bandwidth without affecting the latency of non-bulk (e.g. interactive)
# streams.
# In other words, TCP lets you have your cake and eat it too -- both fast
# downloads and low latency all at the same time.
# However, this only works if TCP's aforementioned congestion avoidance
# algorithms actually kick in. The most reliable method of signaling
# congestion is to drop packets. (There are other ways, such as ECN, but
# unfortunately they're still not in wide use.)
# Dropping packets to make the network work better is kinda counter-intuitive.
# But that's how TCP works, and if you take advantage of that, you can make
# TCP work great.
# Dropping packets gets TCP's attention, and fast. The sending endpoint
# throttles back to avoid further network congestion. In other words, your
# fast download slows down. Then, as long as there's no further congestion,
# the sending endpoint gradually increases the transfer rate, and the cycle
# repeats. It can get a lot more complex than that simple explanation, but the
# main point is: dropping packets when there's congestion is good.
# Traffic shaping is all about slowing down and/or dropping (or ECN marking)
# packets. The thing is, it's much better for latency to simply drop packets
# than it is to slow them down. Linux has a couple of traffic shapers that
# aren't afraid to drop packets. One of the most well-known is TBF, the Token
# Bucket Filter. Normally it slows down packets to a specific rate. But it
# also accepts a "limit" option to specify the maximum number of bytes to
# queue. When the limit is exceeded, packets are dropped.
# TBF's simple "tail-drop" algorithm is actually one of the worst kinds of
# "active queue management" (AQM) you can do. But even so, it can make
# a huge difference. Applying TBF alone (with a short enough limit) can make a
# maddeningly high-latency link usable again in short order.
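# For illustration only, a minimal TBF sketch (hypothetical numbers; tune the
# rate to just under your real uplink and keep the limit short):
#   /sbin/tc qdisc add dev eth0 root tbf rate 300kbit burst 3200 limit 6000
# Here "limit" is the queue size in bytes; once the queue is full, new
# packets are dropped instead of adding latency.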
# TBF's big disadvantage is that it's a "classless" shaper. That means you
# can't prioritize one TCP stream over another. That's where HTB, the
# Hierarchical Token Bucket, comes in. HTB uses the same general algorithm as
# TBF while also allowing you to filter specific traffic to prioritized queues.
# But HTB has a big weakness: it doesn't have a good, easy way of specifying a
# queue limit like TBF does. That means, compared to TBF, HTB is much more
# inclined to slow packets down than to drop them. That hurts latency, bad.
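# (For comparison, a minimal HTB sketch with hypothetical rates:
#   /sbin/tc qdisc add dev eth0 root handle 1: htb default 20
#   /sbin/tc class add dev eth0 parent 1: classid 1:10 htb rate 200kbit ceil 300kbit
#   /sbin/tc class add dev eth0 parent 1: classid 1:20 htb rate 100kbit ceil 300kbit
# Note there's no simple per-class "limit" knob like TBF's, so excess packets
# queue up rather than being dropped.)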
# So now we come to Linux traffic shaping's best kept secret: the HFSC shaper.
# HFSC stands for Hierarchical Fair Service Curve. The Linux implementation is
# a complex beast, enough so to have a 9-part question about it on serverfault
# ( http://serverfault.com/questions/105014/does-anyone-really-understand-how-hfsc-scheduling-in-linux-bsd-works ).
# Nonetheless, HFSC can be understood in a simplified way as HTB with limits.
# HFSC allows you to classify traffic (like HTB, unlike TBF), but it also has
# no fear of dropping packets (unlike HTB, like TBF).
# HFSC does a great job of keeping latency low. With it, it's possible to fully
# saturate a link while maintaining perfect non-bulk session interactivity.
# It is the holy grail of traffic shaping, and it's in the stock kernel.
# To get the best results, HFSC should be combined with SFQ (Stochastic
# Fairness Queueing) and optionally an ingress filter. If all three are used,
# it's possible to maintain low-latency interactive sessions even without any
# traffic prioritization. Further adding prioritization then maximizes
# interactivity.
# Here's how it's done:
# set this to your internet-facing network interface:
WAN_INTERFACE=eth0
# set this to your local network interface:
LAN_INTERFACE=eth1
# how fast is your downlink?
MAX_DOWNRATE=3072kbit
# how close should we get to max down? e.g. 90%
USE_DOWNPERCENT=0.90
# how fast is your uplink?
MAX_UPRATE=384kbit
# how close should we get to max up? e.g. 80%
USE_UPPERCENT=0.80
# what port do you want to prioritize? e.g. for ssh, use 22
INTERACTIVE_PORT=22
## now for the magic
# remove any existing qdiscs
/sbin/tc qdisc del dev $WAN_INTERFACE root 2> /dev/null
/sbin/tc qdisc del dev $WAN_INTERFACE ingress 2> /dev/null
/sbin/tc qdisc del dev $LAN_INTERFACE root 2> /dev/null
/sbin/tc qdisc del dev $LAN_INTERFACE ingress 2> /dev/null
# computations
MAX_UPNUM=`echo $MAX_UPRATE | sed 's/[^0-9]//g'`
MAX_UPBASE=`echo $MAX_UPRATE | sed 's/[0-9]//g'`
MAX_DOWNNUM=`echo $MAX_DOWNRATE | sed 's/[^0-9]//g'`
MAX_DOWNBASE=`echo $MAX_DOWNRATE | sed 's/[0-9]//g'`
NEAR_MAX_UPNUM=`echo "$MAX_UPNUM * $USE_UPPERCENT" | bc | xargs printf "%.0f"`
NEAR_MAX_UPRATE="${NEAR_MAX_UPNUM}${MAX_UPBASE}"
NEAR_MAX_DOWNNUM=`echo "$MAX_DOWNNUM * $USE_DOWNPERCENT" | bc | xargs printf "%.0f"`
NEAR_MAX_DOWNRATE="${NEAR_MAX_DOWNNUM}${MAX_DOWNBASE}"
HALF_MAXUPNUM=$(( $MAX_UPNUM / 2 ))
HALF_MAXUP="${HALF_MAXUPNUM}${MAX_UPBASE}"
HALF_MAXDOWNNUM=$(( $MAX_DOWNNUM / 2 ))
HALF_MAXDOWN="${HALF_MAXDOWNNUM}${MAX_DOWNBASE}"
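# For example, with the defaults above: MAX_UPRATE=384kbit and
# USE_UPPERCENT=0.80 yield NEAR_MAX_UPRATE=307kbit and HALF_MAXUP=192kbit;
# MAX_DOWNRATE=3072kbit and USE_DOWNPERCENT=0.90 yield
# NEAR_MAX_DOWNRATE=2765kbit and HALF_MAXDOWN=1536kbit.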
# install HFSC under WAN to limit upload
/sbin/tc qdisc add dev $WAN_INTERFACE root handle 1: hfsc default 11
/sbin/tc class add dev $WAN_INTERFACE parent 1: classid 1:1 hfsc sc rate $NEAR_MAX_UPRATE ul rate $NEAR_MAX_UPRATE
/sbin/tc class add dev $WAN_INTERFACE parent 1:1 classid 1:10 hfsc sc umax 1540 dmax 5ms rate $HALF_MAXUP ul rate $NEAR_MAX_UPRATE
/sbin/tc class add dev $WAN_INTERFACE parent 1:1 classid 1:11 hfsc sc umax 1540 dmax 5ms rate $HALF_MAXUP ul rate $HALF_MAXUP
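# A note on the class parameters above: "sc" sets the service curve, "ul"
# caps how much the class may use in total, and "umax 1540 dmax 5ms" asks
# HFSC to get a full-size (1540-byte) packet out within a 5ms deadline, which
# is what keeps interactive latency low. The prioritized class 1:10 may
# borrow up to the near-max rate; the default class 1:11 is capped at half.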
# prioritize interactive ports
/sbin/tc filter add dev $WAN_INTERFACE protocol ip parent 1:0 prio 1 u32 match ip sport $INTERACTIVE_PORT 0xffff flowid 1:10
/sbin/tc filter add dev $WAN_INTERFACE protocol ip parent 1:0 prio 1 u32 match ip dport $INTERACTIVE_PORT 0xffff flowid 1:10
# add SFQ
/sbin/tc qdisc add dev $WAN_INTERFACE parent 1:10 handle 30: sfq perturb 10
/sbin/tc qdisc add dev $WAN_INTERFACE parent 1:11 handle 40: sfq perturb 10
# install ingress filter to limit download to 97% of max
MAX_DOWNRATE_INGRESSNUM=`echo "$MAX_DOWNNUM * 0.97" | bc | xargs printf "%.0f"`
MAX_DOWNRATE_INGRESS="${MAX_DOWNRATE_INGRESSNUM}${MAX_DOWNBASE}"
/sbin/tc qdisc add dev $WAN_INTERFACE handle ffff: ingress
/sbin/tc filter add dev $WAN_INTERFACE parent ffff: protocol ip prio 1 u32 match ip sport $INTERACTIVE_PORT 0xffff flowid :1
/sbin/tc filter add dev $WAN_INTERFACE parent ffff: protocol ip prio 1 u32 match ip dport $INTERACTIVE_PORT 0xffff flowid :1
/sbin/tc filter add dev $WAN_INTERFACE parent ffff: protocol ip prio 50 u32 match ip src 0.0.0.0/0 police rate $MAX_DOWNRATE_INGRESS burst 20k drop flowid :2
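# A note on the ingress rules above: interactive packets (flowid :1) bypass
# the policer, while all other traffic is policed to 97% of the downlink and
# dropped beyond that. Dropping early keeps the queue in the ISP's equipment
# empty, so senders throttle back before latency can build up.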
# install HFSC under LAN to limit download
/sbin/tc qdisc add dev $LAN_INTERFACE root handle 1: hfsc default 11
/sbin/tc class add dev $LAN_INTERFACE parent 1: classid 1:1 hfsc sc rate 1000mbit ul rate 1000mbit
/sbin/tc class add dev $LAN_INTERFACE parent 1:1 classid 1:10 hfsc sc umax 1540 dmax 5ms rate 900mbit ul rate 900mbit
/sbin/tc class add dev $LAN_INTERFACE parent 1:1 classid 1:11 hfsc sc umax 1540 dmax 5ms rate $HALF_MAXDOWN ul rate $NEAR_MAX_DOWNRATE
# prioritize interactive ports
/sbin/tc filter add dev $LAN_INTERFACE protocol ip parent 1:0 prio 1 u32 match ip sport $INTERACTIVE_PORT 0xffff flowid 1:10
/sbin/tc filter add dev $LAN_INTERFACE protocol ip parent 1:0 prio 1 u32 match ip dport $INTERACTIVE_PORT 0xffff flowid 1:10
# add SFQ
/sbin/tc qdisc add dev $LAN_INTERFACE parent 1:10 handle 30: sfq perturb 10
/sbin/tc qdisc add dev $LAN_INTERFACE parent 1:11 handle 40: sfq perturb 10
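# To verify the shaping is active, inspect the qdiscs and watch the drop
# counters (a quick sketch using the interfaces configured above):
#   /sbin/tc -s qdisc show dev $WAN_INTERFACE
#   /sbin/tc -s class show dev $WAN_INTERFACE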
I'm glad this has helped people over the years!
However, this script is now obsolete. Use tc_cake instead. It really is amazing, and the bufferbloat people deserve an enormous amount of kudos for creating it and getting it into mainline Linux. Thanks also to Red Hat for employing some of those people.
Cake is dead simple to use:
WAN=eth1
LAN=eth0
/usr/sbin/tc qdisc del dev $WAN root 2> /dev/null
/usr/sbin/tc qdisc del dev $LAN root 2> /dev/null
BANDWIDTH=6mbit
UPBANDWIDTH=3mbit
/usr/sbin/tc qdisc add dev $WAN handle 1: root cake besteffort bandwidth $UPBANDWIDTH internet nat egress ack-filter dual-srchost ethernet
/usr/sbin/tc qdisc add dev $LAN handle 1: root cake besteffort bandwidth $BANDWIDTH internet ingress dual-dsthost ethernet
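To check that cake is running and see its statistics, a quick look at the qdisc works (a sketch, using the same interface names as above):
/usr/sbin/tc -s qdisc show dev $WAN
/usr/sbin/tc -s qdisc show dev $LAN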
@eqhmcow Can you help me out with a full script that does the following?
- Ensure that the bandwidth is shared equally across devices in a network
- Eliminate/reduce bufferbloat.
I'm using a Debian machine as my router for a DSL connection at home.
Here is the script that I copied from the internet that does the first point.
#!/bin/bash
##
# Dan Siemon <dan@coverfire.com>
# http://www.coverfire.com
#
# License: Affero GPLv3
#
# This script is designed to be used on router placed before a bottleneck link.
# Set the rate to be slightly below the bottleneck rate so that the router
# owns the queue. That is, there is no queueing in the DSL or cable device.
#
# This script attempts to create per-host fairness on the network
# and for each host three priority classes. Per-host fairness is created
# by having NUM_HOST_BUCKETS classes and hashing hosts across them. Set
# NUM_HOST_BUCKETS to something sane for your network.
#
# Experimental results can be found at:
# https://www.coverfire.com/archives/2013/01/01/improving-my-home-internet-performance/
#
# The hierarchy looks like:
#
# ASCII:
#
# Interface
# |
# HTB 1:1
# / \
# Host Bucket 1 .. NUM_HOST_BUCKETS [Classes 1:10-1:(10+NUM_HOST_BUCKETS)]
# |
# DRR
# / | \
# High Normal Low [DRR: With three classes]
# |
# Leaf QDisc [Choose the type of the leaf QDisc below]
#
# The tree is created and the QDiscs are named in depth first order.
#
# TODO
# - Add IPv6 support. Should just require additional filters. Note that the flow filter
# automatically reaches through some tunnels like IP-IPv6 so if you are using IPv6 via
# a tunnel this script should already have the correct behavior.
#
######################
# Config
######################
/usr/bin/logger --tag qos --id "Applying QOS on PPP connection..."
TC="/sbin/tc"
#TC=`which tc`
#_DEBUG="on"
#_CDEBUG="on"
DEVICE="ppp0"
# The number of host buckets. All hosts are hashed into one of these buckets
# so you'll want this to approximate (but probably be lower than) the number
# of hosts in your network.
NUM_HOST_BUCKETS=8
# The number of flow buckets within each high, normal and low class.
# If SFQ, SFB or FQ_CODEL are used this value is not used as these QDiscs
# have many embedded queues.
NUM_FLOW_BUCKETS=32
####
# Bandwidth rates
####
# All rates are kbit/sec.
# RATE should be set to just under your link rate.
RATE="32999"
####
# Queue size
####
# Size the queue. Only used with the simple FIFO QDiscs
# i.e. not SFQ or FQ_CODEL. Fun for experimentation, but you
# probably don't want to use these simple QDiscs.
FIFO_LEN=100
####
# How often to perturb the hashes.
####
# This should probably be on the order of minutes so as to avoid the packet
# reordering which can happen when the flows are redistributed
# into different queues. Some of the new QDiscs may handle reordering properly.
#PERTURB=5
PERTURB=300
####
# Packet overhead
####
# Examples:
# ADSL:
# - http://www.adsl-optimizer.dk/thesis/
# (http://web.archive.org/web/20090422131547/http://www.adsl-optimizer.dk/thesis/)
# - If you are using ADSL you probably want LINKLAYER="atm" too.
# VDSL2 (without ATM) w/ PPPoE:
# - 40 bytes for 802.3
# - 8 bytes for PPPoE
OVERHEAD=48
####
# Set linklayer to one of ethernet,adsl (adsl == atm).
####
#LINKLAYER="adsl"
LINKLAYER="ethernet"
####
# The MTU of the underlying interface.
####
MTU="1492"
####
# The keys that are used to identify individual flows.
####
# For 5-tuple (flow) fairness
#FLOW_KEYS="src,dst,proto,proto-src,proto-dst"
# For 5-tuple (flow) fairness when the same device is performing NAT
FLOW_KEYS="nfct-src,nfct-dst,nfct-proto,nfct-proto-src,nfct-proto-dst"
####
# The keys that are used to identify a host's traffic.
####
# No NAT
#HOST_KEYS="src"
# With local device doing NAT
HOST_KEYS="nfct-src"
# Set R2Q (an HTB knob) low if you use low bitrates. You may see warnings from
# the kernel in /var/log/messages indicating this value should be modified.
# If you set the MTU/QUANTUM, changing this isn't required.
#R2Q=2
####
# Choose the type of queue for each of the three per host priority classes
# Supported options:
# drr
# sfq
# fq_codel
# sfb
# pfifo_head_drop
# pfifo
####
HIGH_PRIORITY_QDISC_TYPE="fq_codel"
NORMAL_PRIORITY_QDISC_TYPE="fq_codel"
LOW_PRIORITY_QDISC_TYPE="fq_codel"
###########################################
###########################################
# Other than picking QDisc type there is nothing to change below here.
###########################################
###########################################
######################
# Expand the config variables to tc arguments if they are defined.
######################
if [ "${OVERHEAD}" != "" ]; then
OVERHEAD="overhead ${OVERHEAD}"
fi
if [ "${LINKLAYER}" != "" ]; then
LINKLAYER="linklayer ${LINKLAYER}"
fi
if [ "${R2Q}" != "" ]; then
R2Q="r2q ${R2Q}"
fi
if [ "${PERTURB}" != "" ]; then
PERTURB="perturb ${PERTURB}"
fi
QUANTUM=${MTU}
if [ "${QUANTUM}" != "" ]; then
QUANTUM="quantum ${QUANTUM}"
fi
######################
# Utility functions
######################
function DEBUG()
{
[ "$_DEBUG" == "on" ] && "$@"
}
# Debug function for printing the tc command lines.
function CDEBUG()
{
[ "$_CDEBUG" == "on" ] && "$@"
}
function hex_replace {
if [[ "$1" =~ ":" ]]; then
QDISC=${1%%:*}
CLASS=${1##*:}
if [ "${CLASS}" == "" ]; then
D2H=`printf "%x:" ${QDISC}`
else
D2H=`printf "%x:%x" ${QDISC} ${CLASS}`
fi
else
D2H=`printf "%x" $1`
fi
}
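# For example (a sketch of the conversion this performs):
#   hex_replace "10:2" sets D2H to "a:2"
#   hex_replace "16" sets D2H to "10"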
###
# Function to wrap the tc command and convert the QDisc and class
# identifiers to hex before calling tc.
###
function tc_h {
OUTPUT="${TC} "
PTMP=$@
CDEBUG printf "Command before: %s\n" "${PTMP}"
while [ "$1" != "" ]; do
case "$1" in
# The tc parameters which take major:minor as an argument
"classid" | "flowid" | "parent" | "baseclass" | "handle")
hex_replace $2
OUTPUT="${OUTPUT} $1 ${D2H} "
shift
;;
* )
OUTPUT="${OUTPUT} $1 "
esac
shift
done
CDEBUG printf "Command after: %s\n" "${OUTPUT}"
# Run the command.
${OUTPUT}
}
function get_next_free_major {
if [ "${FREE_MAJOR}" == "" ]; then
FREE_MAJOR=2 # Assumes 1 is used.
return
fi
FREE_MAJOR=$(expr ${FREE_MAJOR} + 1)
}
######################
# Functions to create QDiscs at the leaves.
######################
function drr {
PARENT=$1
HANDLE=$2
# Create the QDisc.
tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} drr
# Create NUM_FLOW_BUCKETS classes and add a pfifo_head_drop to each.
for J in `seq ${NUM_FLOW_BUCKETS}`; do
tc_h class add dev ${DEVICE} parent ${HANDLE} classid ${HANDLE}:${J} drr ${QUANTUM}
tc_h qdisc add dev ${DEVICE} parent ${HANDLE}:${J} pfifo_head_drop limit ${FIFO_LEN}
done
# Add a filter to direct the packets.
tc_h filter add dev ${DEVICE} prio 1 protocol ip parent ${HANDLE}: handle 1 flow hash keys ${FLOW_KEYS} divisor ${NUM_FLOW_BUCKETS} ${PERTURB} baseclass ${HANDLE}:1
}
function sfq {
PARENT=$1
HANDLE=$2
DEBUG printf "\t\t\tsfq parent %s handle %s\n" ${PARENT} ${HANDLE}
#tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} sfq limit ${FIFO_LEN} ${QUANTUM} divisor 1024
tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} sfq ${QUANTUM} divisor 1024
# Don't use the SFQ default classifier.
tc_h filter add dev ${DEVICE} prio 1 protocol ip parent ${HANDLE}: handle 1 flow hash keys ${FLOW_KEYS} divisor 1024 ${PERTURB} baseclass ${HANDLE}:1
}
function fq_codel {
PARENT=$1
HANDLE=$2
DEBUG printf "\t\t\tfq_codel parent %s handle %s\n" ${PARENT} ${HANDLE}
tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} fq_codel ${QUANTUM} flows 4096
# Don't use the default classifier.
tc_h filter add dev ${DEVICE} prio 1 protocol ip parent ${HANDLE}: handle 1 flow hash keys ${FLOW_KEYS} divisor 4096 ${PERTURB} baseclass ${HANDLE}:1
}
function sfb {
PARENT=$1
HANDLE=$2
#tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} sfb
tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} sfb target 20 max 25 increment 0.005 decrement 0.0001
# TODO - Should this have divisor?
tc_h filter add dev ${DEVICE} prio 1 protocol ip parent ${HANDLE}: handle 1 flow hash keys ${FLOW_KEYS} divisor 1024 ${PERTURB}
}
function pfifo_head_drop {
PARENT=$1
HANDLE=$2
tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} pfifo_head_drop limit ${FIFO_LEN}
}
function pfifo {
PARENT=$1
HANDLE=$2
tc_h qdisc add dev ${DEVICE} parent ${PARENT} handle ${HANDLE} pfifo limit ${FIFO_LEN}
}
function priority_class_qdisc {
PARENT=$2
HANDLE=$3
case "$1" in
"drr" )
drr ${PARENT} ${HANDLE}
;;
"sfq" )
sfq ${PARENT} ${HANDLE}
;;
"fq_codel" )
fq_codel ${PARENT} ${HANDLE}
;;
"sfb" )
sfb ${PARENT} ${HANDLE}
;;
"pfifo_head_drop" )
pfifo_head_drop ${PARENT} ${HANDLE}
;;
"pfifo" )
pfifo ${PARENT} ${HANDLE}
;;
* )
echo "Error: Unknown leaf QDisc type"
exit 1
;;
esac
}
######################
# The real work starts here.
######################
# Calculate the divided rate value for use later.
DIV_RATE=`expr ${RATE} / ${NUM_HOST_BUCKETS}`
echo "Number of host buckets: ${NUM_HOST_BUCKETS}"
echo "Rate per host (DIV_RATE):" ${DIV_RATE}
# Delete any existing QDiscs if they exist.
tc_h qdisc del dev ${DEVICE} root
# HTB QDisc at the root. (A filter at the bottom of this script sends all
# traffic into class 1:1.)
tc_h qdisc add dev ${DEVICE} root handle 1: htb ${R2Q}
# Create a top level class with the max rate.
tc_h class add dev ${DEVICE} parent 1: classid 1:1 htb rate ${RATE}kbit ${QUANTUM} prio 0 ${LINKLAYER} ${OVERHEAD}
######
# Create NUM_HOST_BUCKETS classes within the top-level class.
# Within each of these create a DRR with three classes which implement the three priorities.
# Within each priority class create the configured leaf QDisc.
######
for HOST_NUM in `seq ${NUM_HOST_BUCKETS}`; do
DEBUG printf "Create host class: %i\n" $HOST_NUM
QID=`expr ${HOST_NUM} '+' 9` # 1+9=10 - Start host buckets at 10. Arbitrary.
DEBUG printf "\tQID: %i\n" ${QID}
tc_h class add dev ${DEVICE} parent 1:1 classid 1:${QID} htb rate ${DIV_RATE}kbit ceil ${RATE}kbit ${QUANTUM} prio 0 ${LINKLAYER} ${OVERHEAD}
######
# Within each host class create a DRR QDisc within which we'll create the
# high, normal and low priority classes.
######
get_next_free_major
SUB_MAJOR=${FREE_MAJOR}
tc_h qdisc add dev ${DEVICE} parent 1:${QID} handle ${SUB_MAJOR}: drr
# Filter from the host class to the DRR within it.
tc_h filter add dev ${DEVICE} prio 2 protocol ip parent 1:${QID} u32 match ip dst 0.0.0.0/0 flowid ${SUB_MAJOR}:0
###
# High priority class
###
DEBUG printf "\t\tHigh: %i\n" ${QID_1}
tc_h class add dev ${DEVICE} parent ${SUB_MAJOR}: classid ${SUB_MAJOR}:1 drr ${QUANTUM}
# Create the leaf QDisc for this priority class.
get_next_free_major
SUB_PRIO_MAJOR=${FREE_MAJOR}
priority_class_qdisc ${HIGH_PRIORITY_QDISC_TYPE} ${SUB_MAJOR}:1 ${SUB_PRIO_MAJOR}
###
# Normal priority class
###
DEBUG printf "\t\tNormal: %i\n" ${QID_2}
tc_h class add dev ${DEVICE} parent ${SUB_MAJOR}: classid ${SUB_MAJOR}:2 drr ${QUANTUM}
# Create the leaf QDisc for this priority class.
get_next_free_major
SUB_PRIO_MAJOR=${FREE_MAJOR}
priority_class_qdisc ${NORMAL_PRIORITY_QDISC_TYPE} ${SUB_MAJOR}:2 ${SUB_PRIO_MAJOR}
###
# Low priority class
###
DEBUG printf "\t\tLow: %i\n" ${QID_3}
tc_h class add dev ${DEVICE} parent ${SUB_MAJOR}: classid ${SUB_MAJOR}:3 drr ${QUANTUM}
# Create the leaf QDisc for this priority class.
get_next_free_major
SUB_PRIO_MAJOR=${FREE_MAJOR}
priority_class_qdisc ${LOW_PRIORITY_QDISC_TYPE} ${SUB_MAJOR}:3 ${SUB_PRIO_MAJOR}
######
# Add filters to classify based on the TOS bits into the high, normal and low priority classes.
# Only mask against the three (used) TOS bits. The final two bits are used for ECN.
# TOS field is XXXDTRXX.
# X= Not part of the TOS field.
# D= Delay bit
# T= Throughput bit
# R= Reliability bit
#
# OpenSSH terminal sets D.
# OpenSSH SCP/SFTP sets T.
# It's easy to configure the Transmission Bittorrent client to set T (settings.json).
# For home VoIP devices I use an iptables rule to set all of their traffic to have D.
#
# The thinking behind the below rules is to use D as an indication of delay sensitive
# and T as an indication of background (big transfer). All other combinations are put into
# default which is effectively a medium priority.
######
DEBUG printf "\t\tCreating filters\n"
# D bit set.
tc_h filter add dev ${DEVICE} parent ${SUB_MAJOR}: protocol ip prio 10 u32 match ip tos 0x10 0x1c flowid ${SUB_MAJOR}:1
# Diffserv expedited forwarding. Put this in the high priority class.
# Some VoIP clients set this (ie Ekiga).
# DSCP=b8
tc_h filter add dev ${DEVICE} parent ${SUB_MAJOR}: protocol ip prio 10 u32 match ip tos 0xb8 0xfc flowid ${SUB_MAJOR}:1
# T bit set.
tc_h filter add dev ${DEVICE} parent ${SUB_MAJOR}: protocol ip prio 10 u32 match ip tos 0x08 0x1c flowid ${SUB_MAJOR}:3
# Everything else into default.
tc_h filter add dev ${DEVICE} parent ${SUB_MAJOR}: protocol ip prio 10 u32 match ip tos 0x00 0x00 flowid ${SUB_MAJOR}:2
done
# Send everything that hits the top level QDisc to the top class.
tc_h filter add dev ${DEVICE} prio 1 protocol ip parent 1:0 u32 match ip dst 0.0.0.0/0 flowid 1:1
# From the top level class hash into the host classes.
tc_h filter add dev ${DEVICE} prio 1 protocol ip parent 1:1 handle 1 flow hash keys ${HOST_KEYS} divisor ${NUM_HOST_BUCKETS} ${PERTURB} baseclass 1:10
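# To see how traffic is distributed across the host buckets and priority
# classes, the per-class counters are useful (a quick sketch):
#   ${TC} -s class show dev ${DEVICE}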
@eqhmcow @bradoaks Thanks a bunch for these scripts! They pointed me in the direction of HFSC and fq_codel, which are indeed great schedulers.
I updated my SuperShaper-SOHO solution to use HFSC and fq_codel based on your scripts above and some additional sources. I even wrote a blog post about this transition from HTB/SFQ to HFSC/fq_codel.
One of the benefits of my solution based on yours is that mine only uses the ul service curve on the root class, allowing flows to borrow from each other when the link is not fully saturated. I also don't use the rt service curve, because I couldn't understand the math involved. But just using link-sharing (ls) is still, in my opinion, much better than using HTB and SFQ. My latency is now controlled much better. Thanks a lot!