Install pprof as described at https://github.com/google/pprof#building-pprof:
go get -u github.com/google/pprof
See “Building Envoy with Bazel”.
Envoy’s static build is set up for profiling and can be built with:
bazel build //source/exe:envoy-static
More context: https://github.com/envoyproxy/envoy/blob/master/bazel/PPROF.md
https://github.com/envoyproxy/nighthawk
bazel build -c opt //:nighthawk
There are also recommendations in Nighthawk’s README.md for improving the accuracy and repeatability of its measurements.
The important part is that the admin interface needs to be set up to allow enabling/disabling profiling via HTTP, as well as to specify a location to dump the profiling data.
admin:
  access_log_path: /tmp/admin_access.log
  profile_path: /tmp/envoy.prof
  address:
    socket_address: { address: $server_ip, port_value: 0 }
static_resources:
  .. your configuration ..
taskset -c <cpu-list> /path/to/envoy-repo/bazel-bin/envoy-static --config-path /path/to/envoy-config.yaml
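Note that port_value: 0 in the admin section above asks Envoy to pick an ephemeral admin port. One way to discover which port it chose is Envoy’s --admin-address-path flag, which writes the selected host:port to a file on startup. A minimal sketch (the address written below is a stand-in for Envoy’s actual output):

```shell
# In a real run you would start Envoy with the extra flag, e.g.:
#   envoy-static --config-path envoy-config.yaml --admin-address-path /tmp/envoy_admin.addr
# Envoy then writes the chosen admin host:port to that file; stand-in here:
echo "127.0.0.1:43125" > /tmp/envoy_admin.addr

# Read the address back and extract the port with shell parameter expansion.
ADMIN="$(cat /tmp/envoy_admin.addr)"
PORT="${ADMIN##*:}"
echo "$PORT"   # prints 43125
```

The resulting $ADMIN value can then be used as the host:port in the curl commands below.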
Enable CPU profiling through Envoy’s admin interface:
curl -X POST http://your-envoy-instance:admin-port/cpuprofiler?enable=y
https://www.envoyproxy.io/docs/envoy/latest/operations/admin#post--cpuprofiler
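After the test run completes, profiling should be disabled again via the same endpoint (enable=n, per the admin docs linked above) so that the collected samples are flushed to the configured profile_path:

```shell
# Stop the CPU profiler; the collected samples are written out
# to the profile_path configured in the admin section (/tmp/envoy.prof).
curl -X POST http://your-envoy-instance:admin-port/cpuprofiler?enable=n
```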
Note: there’s also envoyproxy/nighthawk#160, an invitation to discuss whether it would make sense for Nighthawk to facilitate consolidating test scenarios in its repo.
Run your test, for example with Nighthawk:
taskset -c <cpu-list> /path/to/nighthawk-repo/bazel-bin/nighthawk_client --concurrency 5 --rps 10000 --duration 30 http://envoy-cluster-host:envoy-cluster-port
Run the pprof web UI:
pprof -http=localhost:8888 /tmp/envoy.prof
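For a quick look without a browser, pprof’s text report mode works too (assuming pprof is on your PATH):

```shell
# Print the functions consuming the most CPU time in the collected profile.
pprof -top /tmp/envoy.prof
```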
This gives you various means of analysing the collected profile, including a flame chart. Sample (on a temporary VM, visualizing a profile drawn from Nighthawk’s integration tests): http://34.90.107.89/ui/