🚧 CAUTION: Work in Progress
This is a draft document that has not been thoroughly tested or fact checked. Information may be incomplete or inaccurate.
DPDK and NGINX Plus Compatibility: Making High-Performance Networking Work with Existing Applications
Q: Are DPDK drivers and non-DPDK drivers on Linux both compatible with the same user space applications?
A: By default, DPDK drivers and traditional Linux kernel network drivers use fundamentally different architectures and are not directly compatible with the same unmodified user space applications. However, with certain NIC hardware features and proper configuration, they can coexist and provide compatibility, allowing applications like NGINX Plus to benefit from DPDK performance without requiring code modifications.
Q: What are the key differences between standard Linux kernel drivers and DPDK drivers?
A: The key differences are:
| Standard Linux Kernel Drivers | DPDK Drivers |
|---|---|
| Process packets through the kernel networking stack | Bypass the kernel entirely |
| Use interrupts for packet notifications | Use polling mode drivers (PMDs) |
| Applications use standard socket APIs | Applications must use DPDK APIs directly |
| Network managed through standard Linux tools | Require DPDK-specific management tools |
| Share CPU resources with other processes | Typically require dedicated CPU cores |
| Use standard memory allocation | Use hugepages for better memory performance |
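As a quick illustration of this split, the sketch below (assuming an interface named eth1 and a standard DPDK installation that ships dpdk-devbind.py) checks which driver currently owns a NIC on each side:

```bash
# Kernel side: report the driver bound to the interface (eth1 is a placeholder name)
ethtool -i eth1

# DPDK side: list devices and whether they are bound to a kernel driver
# or to a DPDK-compatible driver such as vfio-pci
dpdk-devbind.py --status
```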
Q: What approaches allow DPDK and standard kernel networking to coexist?
A: There are three primary approaches:
- Flow Bifurcation: Using the NIC's hardware packet classification (filtering) to direct certain traffic through DPDK and other traffic through the kernel
- Bifurcated Drivers: Special NIC drivers that allow both kernel and DPDK operation simultaneously
- Kernel NIC Interface (KNI): A DPDK component that provides an interface between DPDK applications and the kernel network stack (deprecated and removed in recent DPDK releases)
For an existing application like NGINX Plus, flow bifurcation with bifurcated drivers is the most seamless approach because it doesn't require application modifications.
Q: What is flow bifurcation?
A: Flow bifurcation is a hardware capability in modern NICs that allows the simultaneous use of both kernel and DPDK drivers by splitting traffic based on configurable rules.
With flow bifurcation:
- The NIC hardware itself decides which packets go to which processing path
- Some packets can be processed by DPDK (bypassing the kernel)
- Other packets can be processed by the standard Linux networking stack
- Applications like NGINX Plus can continue using standard socket APIs
This creates a transparent layer that maintains compatibility while allowing performance improvements.
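As a minimal sketch of the idea, assuming an Intel-style NIC exposed as eth1 that supports n-tuple filtering, a single rule can pin web traffic to the kernel path (a fuller, NIC-specific walk-through appears in the configuration section below):

```bash
# Enable hardware n-tuple (flow) filtering on the NIC
ethtool -K eth1 ntuple on

# Steer inbound TCP port 80 to RX queue 0 on the kernel side
ethtool -N eth1 flow-type tcp4 dst-port 80 action 0
```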
Q: What does the architecture look like when NGINX Plus and a DPDK path share the same NIC?
A: The architecture looks like this:
```
+-----------------+       +---------------------+
|   NGINX Plus    |       |  DPDK Application   |
|  (unmodified)   |       |    (if present)     |
+--------+--------+       +----------+----------+
         |                           |
+--------v---------------------------v----------+
|   Socket API                    DPDK API      |
+------------------------+           |          |
|  Linux TCP/IP Stack    |           |          |
+------------------------+           |          |
|  Linux Kernel Driver   |           |          |
|    (Standard Path)     |           |          |
+--------+---------------+-----------+----------+
         |                           |
+--------v---------------------------v----------+
|                                               |
|              NIC Hardware with                |
|          Flow Classification Support          |
|                                               |
+-----------------------------------------------+
                        ^
                        |
                 Network Traffic
```
In this setup:
- The NIC hardware receives all traffic
- Based on configured flow rules, it directs packets to either:
  - The standard Linux kernel path (for NGINX Plus)
  - The DPDK fast path (for DPDK-aware applications, if present)
- NGINX Plus continues to operate normally, using the standard socket API
- Standard Linux tools like ethtool, ifconfig, and ip still work for configuration
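For example, the usual kernel-side views keep working on the bifurcated interface (eth1 is a placeholder; this is only a sketch of what stays visible):

```bash
# Link state and addresses via iproute2
ip addr show eth1

# Driver/hardware statistics for the interface
ethtool -S eth1 | head

# NGINX Plus listening sockets, opened through the normal socket API
ss -ltnp | grep nginx
```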
Q: Which NICs support flow bifurcation?
A: Several modern NICs support this capability:
- Mellanox/NVIDIA ConnectX: Natively supports bifurcated operation
- Intel 82599 10GbE: Via SR-IOV and Flow Director technologies
- Intel X710/XL710: Using SR-IOV, Cloud Filter, and L3 VEB switch
- Other vendor NICs: Check vendor documentation for flow bifurcation support
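Before committing to a design, it is worth probing whether your NIC and driver actually expose the required features. A hedged sketch, assuming an interface named eth1:

```bash
# Does the driver expose hardware n-tuple filtering (used for ethtool flow steering)?
ethtool -k eth1 | grep ntuple-filters

# How many SR-IOV virtual functions can the device provide (0 means no SR-IOV)?
cat /sys/class/net/eth1/device/sriov_totalvfs
```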
Q: How do I configure flow bifurcation so NGINX Plus can benefit?
A: The configuration process typically involves:
- Install DPDK libraries and tools:

  ```bash
  apt-get install dpdk dpdk-dev    # For Debian/Ubuntu
  # or
  yum install dpdk dpdk-devel      # For Red Hat/CentOS
  ```

- Configure the NIC for bifurcated mode (varies by NIC):

  For Intel 82599 NICs:

  ```bash
  # Load kernel driver with SR-IOV support
  modprobe ixgbe max_vfs=4

  # Enable Flow Director
  ethtool -K eth1 ntuple on

  # Configure flow rules (example for HTTP traffic to kernel)
  ethtool -N eth1 flow-type tcp4 dst-port 80 action 0
  ethtool -N eth1 flow-type tcp4 dst-port 443 action 0

  # Configure other traffic to DPDK VF if needed
  ethtool -N eth1 flow-type udp4 src-ip 192.0.2.2 dst-ip 198.51.100.2 \
      action $queue_index_in_VF0
  ```

  For Mellanox/NVIDIA NICs:

  ```bash
  # No special configuration needed as they natively support bifurcation
  # Just configure flow rules using mlx5 tools or the DPDK flow API
  ```

- Run NGINX Plus normally - it will use the standard kernel path:

  ```bash
  systemctl start nginx
  # or
  nginx
  ```

- For DPDK applications (if you want to run them alongside):

  ```bash
  # Use available DPDK tools to start DPDK applications
  # These will use the DPDK-controlled paths
  ```
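After these steps, a quick sanity check (again assuming eth1 and a standard DPDK install) confirms that the rules and driver bindings took effect:

```bash
# List the flow rules the NIC accepted on the kernel-path interface
ethtool -n eth1

# Show which PCI devices/VFs are bound to kernel drivers vs DPDK-compatible drivers
dpdk-devbind.py --status
```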
Q: Will NGINX Plus automatically gain DPDK-level performance benefits?
A: Not automatically, and not to the same degree as a DPDK-native application. The benefits depend on:
- Hardware offloading capabilities of your NIC
- Flow rule configuration to optimize which paths traffic takes
- System tuning for both kernel and DPDK components
NGINX Plus will still use the kernel networking stack, but with properly configured flow bifurcation, you can:
- Reduce interrupt overhead for certain traffic patterns
- Benefit from hardware optimizations in the NIC
- Potentially improve packet processing performance (see the tuning sketch below)
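One possible kernel-side tuning sketch to pair with flow bifurcation is shown below; the sysctl values are illustrative assumptions, not recommendations, and eth1 is a placeholder:

```bash
# Let sockets busy-poll briefly instead of waiting for interrupts (microseconds)
sysctl -w net.core.busy_poll=50
sysctl -w net.core.busy_read=50

# Use adaptive interrupt coalescing on the kernel-path interface
ethtool -C eth1 adaptive-rx on

# In nginx.conf, "listen 80 reuseport;" lets each worker accept on its own socket
```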
For maximum performance, consider DPDK-enabled NGINX variants such as:
- F-Stack with NGINX
- DPDK-NGINX
- NGINX Plus with official DPDK integration, where offered (verify availability with F5/NGINX)
Q: Are there downsides to this approach?
A: Yes, there are trade-offs:
- Complexity: Configuration and maintenance become more complex
- Resource usage: DPDK typically requires dedicated CPU cores and memory
- Partial benefits: Unmodified applications won't gain full DPDK performance
- Hardware limitations: Not all NICs support the necessary features
- Monitoring challenges: Standard Linux tools may not show complete traffic visibility
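The monitoring gap in particular is easy to demonstrate: packets claimed by the DPDK path never reach the kernel, so they appear in NIC-level counters but not in kernel interface statistics. A sketch (eth1 is a placeholder; counter names vary by driver):

```bash
# NIC/driver-level counters, which include DPDK-handled traffic
ethtool -S eth1 | grep -iE 'rx_packets|rx_bytes'

# Kernel-level counters, which only reflect the standard path
ip -s link show eth1
```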
Q: What's the performance difference between unmodified NGINX Plus with flow bifurcation vs. a DPDK-native version?
A: Benchmark results vary by workload, but generally:
- Unmodified NGINX Plus with flow bifurcation: Might see 10-30% performance improvements over standard kernel networking
- DPDK-native NGINX: Can achieve 2-5x performance improvements for specific workloads
The actual differences depend on:
- Traffic patterns
- Configuration
- Hardware capabilities
- Load characteristics
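The most reliable numbers come from benchmarking your own deployment before and after enabling flow bifurcation; the sketch below assumes the external wrk load generator and a placeholder server address:

```bash
# Baseline run against NGINX Plus (198.51.100.10 is a placeholder address)
wrk -t8 -c256 -d30s http://198.51.100.10/

# Apply the ethtool flow rules, then repeat and compare requests/sec and latency
```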
Q: Can I start with flow bifurcation and migrate toward DPDK-native solutions later?
A: Yes, a common migration path is:
- Start with flow bifurcation: Get the basic infrastructure in place
- Measure and benchmark: Identify performance gaps
- Gradual transition: Move specific workloads to DPDK-aware solutions
- Consider NGINX Plus with DPDK: Deploy official DPDK-enabled versions if available
- Full DPDK migration: Eventually move completely to DPDK-native solutions if needed
This approach allows for a controlled migration with minimal service disruption.
Q: Are there alternatives to DPDK for high-performance packet processing?
A: Yes, several alternatives exist:
- XDP (eXpress Data Path): Kernel-integrated, programmable packet processing
- AF_XDP: Socket interface for XDP, with userspace access
- Netmap: Framework for high-speed packet I/O
- OpenOnload: User-level network stack
- VPP (Vector Packet Processing): High-performance packet processing framework
Each has different trade-offs regarding performance, compatibility, and ease of use.
While DPDK drivers and standard Linux networking were not originally designed to work together with the same applications, modern hardware and software developments have made it possible to gain some DPDK benefits without rewriting applications like NGINX Plus. Flow bifurcation with compatible NICs offers a middle ground that maintains compatibility while improving performance.
For maximum performance, however, a gradual migration to DPDK-native solutions would eventually be necessary. The good news is that this migration can happen incrementally, allowing organizations to balance performance needs with development resources.