TREX DPDK (line-rate stateful) — primer¶
Help Center primer for the TREX DPDK stateful line-rate stress engine. Pairs with the
trex/profile catalog and the trex-pod (DPDK + hugepages). Patent #17 anchors the profile-orchestration patterns.
What it tests¶
For the highest-end NGFW SKUs, vendor-published throughput is quoted at "line rate" — typically 10 / 40 / 100 / 400 GbE. To exercise that, the load generator itself must reach line rate, and that means userspace packet processing via DPDK.
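The arithmetic behind "the generator itself must reach line rate" is worth making explicit. A minimal sketch (not part of the product) computing the theoretical max packet rate for an Ethernet link, accounting for the 20 bytes of per-frame wire overhead (preamble + start-of-frame delimiter + inter-frame gap):

```python
# Theoretical max packet rate for an Ethernet link at a given frame size.
# Each frame carries 20 extra bytes on the wire: 7B preamble + 1B SFD + 12B
# inter-frame gap. The FCS is already included in the 64B minimum frame size.
def line_rate_mpps(link_gbps: float, frame_bytes: int = 64) -> float:
    wire_bytes = frame_bytes + 20          # preamble + SFD + IFG
    return link_gbps * 1e9 / (wire_bytes * 8) / 1e6

# 10 GbE at 64B frames -> ~14.88 Mpps; 100 GbE -> ~148.8 Mpps.
# A single kernel-stack sender cannot sustain that; DPDK userspace polling can.
```

This is why a 100 GbE claim implies roughly 149 Mpps at minimum frame size, i.e. several DPDK cores at the 30 Mpps/core figure below.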
TREX (Cisco's open-source TRex traffic generator) gives us:
- 30 Mpps / core sustained
- 40M concurrent flows per generator
- Stateful TCP / UDP / IPSec / HTTP at line rate (not just pps blast)
- 10 named profiles matching common procurement scenarios (HTTP/1.1, HTTP/2, HTTPS, IPSec, mixed enterprise, mixed carrier, etc.)
This is not browser-realistic — TREX cannot decrypt server responses or run JavaScript. It is the right tool when the procurement question is "can the DUT push N Gbps at vendor-claimed flow count".
Three-axis configuration¶
| Axis | Options |
|---|---|
| profile | http-1.1 / http-2 / https-mixed (default) / ipsec / carrier-mix / enterprise-mix / + 4 more |
| target_pps | 1 Mpps / 10 Mpps / 30 Mpps (default per-core) / vendor-claim |
| flow_count | 1M / 10M / 40M (default) / vendor-claim |
The dashboard refuses to schedule the run if the target node lacks a DPDK-ready NIC + hugepages preallocation (the 85-node-tuning.yaml DaemonSet reports readiness).
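The scheduling gate can be pictured as a simple predicate over node readiness labels. This is an illustrative sketch only: the label keys and values here are assumptions, not the actual keys published by 85-node-tuning.yaml.

```python
# Hypothetical sketch of the scheduling gate: refuse a TREX run unless the
# target node advertises DPDK readiness. Label names are illustrative, not
# the real keys reported by the node-tuning DaemonSet.
def node_is_trex_ready(node_labels: dict) -> bool:
    return (
        node_labels.get("dpdk-nic") == "true"              # assumed label
        and node_labels.get("hugepages-1Gi") == "preallocated"  # assumed label
    )

def schedule_trex_run(node_labels: dict) -> str:
    if not node_is_trex_ready(node_labels):
        # Mirrors the documented behavior: hard refusal, no silent degradation.
        raise RuntimeError("node not DPDK-ready: check NIC firmware + hugepages")
    return "scheduled"
```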
Hardware requirement — TREX cannot soft-emulate¶
DPDK polls NIC RX/TX rings from userspace. Without DPDK-compatible NIC firmware + hugepages, TREX runs at <1% of nominal speed. The preflight check refuses to start if either condition fails — no silent degradation.
DPDK-ready NIC list (current): Intel X710 / XL710 / E810, Mellanox ConnectX-5 / 6, Broadcom NetXtreme-E. Other NICs may work in software fallback; the operator must opt in explicitly.
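On a Linux node, the hugepages half of that preflight boils down to reading /proc/meminfo. A minimal host-side sketch (the parsing helper is ours for illustration; it is not the shipped preflight code):

```python
# Minimal sketch of the hugepages side of the preflight check. On Linux,
# preallocated hugepages appear as HugePages_Total in /proc/meminfo.
import re

def hugepages_total(meminfo_text: str) -> int:
    """Parse HugePages_Total out of /proc/meminfo contents; 0 if absent."""
    m = re.search(r"^HugePages_Total:\s+(\d+)", meminfo_text, re.M)
    return int(m.group(1)) if m else 0

# Real usage would read the live file:
#   hugepages_total(open("/proc/meminfo").read()) > 0
# A zero here means DPDK cannot map its packet-buffer pools, and TREX
# must refuse to start rather than degrade.
```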
Layered vs standalone¶
- Standalone: test_kind = tls-throughput with the engine = trex-dpdk modifier. Pure line-rate test, no HTTP/2 / HTTP/3 browser realism.
- Not layered: TREX consumes the data-plane NIC entirely, so it cannot share the NIC with browser-engine or k6.
Reading the report¶
Each TREX run adds an "Annex L (TREX)" block:
- DUT → vendor throughput claim (the one being verified)
- Run config → 3 axes
- Throughput envelope → sustained Mpps + Gbps + flow count achieved before the DUT degraded
- Flow distribution → per-profile L4/L7 mix as actually generated (sanity check)
- DUT envelope → CPU + memory peaks during the run
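The Annex L fields above can be summarized as a record plus the one comparison the annex exists to make: achieved throughput versus the vendor claim. Field names in this sketch are assumptions for illustration, not the report's actual schema.

```python
# Illustrative shape of the "Annex L (TREX)" block. Field names are assumed
# for this sketch; they are not the report's real schema.
from dataclasses import dataclass

@dataclass
class AnnexL:
    vendor_claim_gbps: float   # DUT: the throughput claim being verified
    profile: str               # run config, axis 1
    target_pps: float          # run config, axis 2
    flow_count: int            # run config, axis 3
    sustained_gbps: float      # throughput envelope before DUT degraded
    sustained_mpps: float
    flows_achieved: int
    dut_cpu_peak_pct: float    # DUT envelope
    dut_mem_peak_gb: float

    def claim_verified(self) -> bool:
        # The core procurement question: did the DUT sustain what the
        # vendor datasheet promised?
        return self.sustained_gbps >= self.vendor_claim_gbps
```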
Common patterns¶
| Symptom | Likely cause |
|---|---|
| Achieved Gbps < vendor claim | Vendor claim was lab-only or older firmware — capture for sales |
| Flow count plateaus far below target | DUT session-table capacity exceeded — distinct from throughput |
| Per-flow latency p99 > 10ms | DUT inspection software-path under flow churn — major signal |
| Preflight refuses to start | Node not DPDK-ready; check hugepages + NIC firmware |
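The symptom table above lends itself to a mechanical first-pass triage. A hypothetical helper mapping the measured envelope onto those rows; the thresholds ("far below" as under half the target, the 10 ms p99 bound from the table) are illustrative:

```python
# Hypothetical triage helper mirroring the "Common patterns" table.
# Thresholds are illustrative, not product defaults.
def triage(achieved_gbps: float, claimed_gbps: float,
           flows_achieved: float, flows_target: float,
           p99_latency_ms: float) -> list:
    findings = []
    if achieved_gbps < claimed_gbps:
        findings.append("throughput below vendor claim")
    if flows_achieved < 0.5 * flows_target:   # "far below target" heuristic
        findings.append("session-table capacity likely exceeded")
    if p99_latency_ms > 10:                   # bound from the table above
        findings.append("DUT likely on software inspection path")
    return findings
```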
Related¶
- trex/profile catalog (10 profiles)
- trex-pod manifests in k8s/trex/
- STRESS_ENGINES_CATALOG — engine matrix
- Patent #17 — TREX profile orchestration pattern
Last verified against shipping code: v3.7.0 (2026-05-12).