This Python script compares bandwidth figures from different sources by converting them all to a uniform unit.
❯ python3 network-rate-convert.py 400MB/s
3200.00 Mb/s
It even supports multiple input arguments:
❯ python3 network-rate-convert.py 272MB/s 16.3GB/min
2176.00 Mb/s
2173.33 Mb/s
I wanted to compare the bandwidth utilization of some EC2 instances, as reported by node-exporter and visualized in Grafana, with the metrics from AWS CloudWatch.
However, AWS presents network data in formats that are hard to compare directly with other monitoring tools: CloudWatch reports cumulative network output (in GB) over a 60-second interval.
Prometheus, on the other hand, can scrape an exporter like node-exporter several times a minute and plot an average per-second throughput. This mismatch leads to confusion when trying to correlate AWS network data with metrics obtained via PromQL.
On top of that, AWS itself uses inconsistent formats for certain things. For example, EC2 baseline bandwidth limits are specified in Gbps (gigabits per second), while CloudWatch metrics report GBps (gigabytes per second).
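To see why that bites, here is a hypothetical worked example (the numbers are made up, not taken from any real instance type): suppose an instance's baseline is quoted as 10 Gbps, and CloudWatch shows a NetworkOut of 75 GB over a 60-second period. Converting gigabytes-per-minute into gigabits-per-second shows the instance is right at its limit:

```python
# Hypothetical numbers: CloudWatch NetworkOut of 75 GB accumulated over a
# 60 s period, compared against a baseline quoted as 10 Gbps.
network_out_gigabytes = 75
period_seconds = 60

# GB -> Gb (x8), then divide by the period to get a per-second rate
gbps = network_out_gigabytes * 8 / period_seconds
print(f"{gbps:.2f} Gb/s")  # 10.00 Gb/s
```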
I created this small script to save myself all that mental napkin math while debugging more critical issues!
usage: network-rate-convert.py [-h] [--output_unit {b,B,Kb,KB,Mb,MB,Gb,GB,Tb,TB}] rates [rates ...]

Compare network rates.

positional arguments:
  rates                 List of rates to compare (e.g., '400MB/min 3Gb/s').

options:
  -h, --help            show this help message and exit
  --output_unit {b,B,Kb,KB,Mb,MB,Gb,GB,Tb,TB}
                        Output unit for displaying rates (default: 'Mb').
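The core conversion can be sketched in a few lines. This is a minimal re-implementation for illustration, not the script itself: it assumes decimal (SI) prefixes, which matches the example outputs above (16.3 GB/min → 2173.33 Mb/s), and the `to_unit` name and regex are my own.

```python
import re

UNIT_BITS = {"b": 1, "B": 8}                           # bit vs byte
PREFIX = {"": 1, "K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}  # decimal prefixes (an assumption)
PER_SECONDS = {"s": 1, "min": 60, "h": 3600}           # time divisors

def to_unit(rate: str, output_unit: str = "Mb") -> float:
    """Convert a rate like '400MB/s' or '16.3GB/min' to output_unit per second."""
    m = re.fullmatch(r"([\d.]+)([KMGT]?)([bB])/(s|min|h)", rate)
    if not m:
        raise ValueError(f"Unparseable rate: {rate}")
    value, prefix, unit, per = m.groups()
    # Normalize everything to bits per second first...
    bits_per_second = float(value) * PREFIX[prefix] * UNIT_BITS[unit] / PER_SECONDS[per]
    # ...then scale down into the requested output unit.
    out_prefix, out_unit = output_unit[:-1], output_unit[-1]
    return bits_per_second / (PREFIX[out_prefix] * UNIT_BITS[out_unit])

print(f"{to_unit('400MB/s'):.2f} Mb/s")     # 3200.00 Mb/s
print(f"{to_unit('16.3GB/min'):.2f} Mb/s")  # 2173.33 Mb/s
```

Normalizing to bits per second as an intermediate step keeps the parsing and the output formatting independent, which is also why an `--output_unit` flag is cheap to support.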