@SnabbBot
Created October 26, 2016 23:44
Host: Linux davos 4.4.14 x86_64 Intel(R) Xeon(R) CPU E5-2603 v2 @ 1.80GHz
Image: eugeneia/snabb-nfv-test-vanilla
Pull Request: #911
Current Head: eb9d141431f3ede56867547e852ec543593b5099
Pull Request Head: 9c4a4d1f39b89bf5b0a7f880c940060973f24e74
SNABB_PCI0=0000:03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
SNABB_PCI1=0000:03:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
SNABB_PCI_INTEL0=0000:03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
SNABB_PCI_INTEL1=0000:03:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
Checking for performance regressions:
BENCH basic1-100e6 -> 0.995268 of 12.68 (SD: 0.222711 )
BENCH packetblaster-64 -> 0.999568 of 13.902 (SD: 0.00748331 )
BENCH packetblaster-synth-64 -> 1.00201 of 13.956 (SD: 0.0257682 )
ERROR snabbnfv-iperf-1500 -> 0.845658 of 5.896 (SD: 0.220145 )
BENCH snabbnfv-iperf-1500-crypto -> 1.04264 of 1.032 (SD: 0.0331059 )
BENCH snabbnfv-iperf-1500-tunnel+crypto -> 1.07877 of 0.914 (SD: 0.0677052 )
BENCH snabbnfv-iperf-jumbo -> 0.984223 of 8.62 (SD: 0.531639 )
BENCH snabbnfv-loadgen-dpdk -> 0.987072 of 2.599 (SD: 0.016334 )
Checking test suite:
DIR testlog
TEST core.shm
TEST core.counter
TEST core.link
TEST core.timer
TEST core.lib
TEST core.memory
TEST core.app
TEST core.main
TEST core.histogram
TEST lib.protocol.tcp
TEST lib.protocol.ipv4
TEST lib.protocol.datagram
TEST lib.protocol.ipv6
TEST lib.hash.murmur
TEST lib.traceprof.traceprof
TEST lib.ctable
TEST lib.ipsec.esp
TEST lib.ipsec.aes_128_gcm
TEST lib.pmu
SKIPPED testlog/lib.pmu
TEST lib.ipc.shmem.shmem
TEST lib.hardware.pci
TEST lib.checksum
TEST program.snabbnfv.nfvconfig
TEST program.snabbnfv.neutron2snabb.neutron2snabb
TEST program.snabbnfv.neutron2snabb.neutron2snabb_schema
TEST apps.intel.intel_app
TEST apps.intel.intel1g
TEST apps.tap.tap
SKIPPED testlog/apps.tap.tap
TEST apps.keyed_ipv6_tunnel.tunnel
TEST apps.socket.raw
TEST apps.socket.unix
TEST apps.virtio_net.virtio_net
SKIPPED testlog/apps.virtio_net.virtio_net
TEST apps.test.synth
TEST apps.test.match
TEST apps.lwaftr.lwdebug
TEST apps.lwaftr.fragmentv4_test
TEST apps.lwaftr.binding_table
TEST apps.lwaftr.ndp
TEST apps.lwaftr.channel
TEST apps.lwaftr.conf
TEST apps.lwaftr.rangemap
TEST apps.lwaftr.podhashmap
TEST apps.packet_filter.pcap_filter
TEST apps.vlan.vlan
TEST apps.bridge.mac_table
TEST apps.rate_limiter.rate_limiter
TEST apps.ipv6.nd_light
TEST apps.vhost.vhost_user
SKIPPED testlog/apps.vhost.vhost_user
TEST lib/watchdog/selftest.sh
TEST program/lwaftr/tests/end-to-end/selftest.sh
TEST program/packetblaster/selftest.sh
TEST program/snabbnfv/selftest.sh
TEST program/snabbnfv/neutron2snabb/selftest.sh
ERROR testlog/program.snabbnfv.neutron2snabb.selftest.sh
ERROR during tests:
src/testlog/apps.bridge.mac_table:
Oct 26 2016 23:37:18 mac_table: resizing from 512 to 4096 hash buckets, new target size 2048 (1295 MAC entries, old target size 256, size/bucket overflow: true/true)
src/testlog/apps.intel.intel1g:
selftest: Intel1g
PHY Loopback set
link report:
1,285,792 sent on nic.tx -> sink.rx (loss rate: 0%)
1,288,728 sent on source.tx -> nic.rx (loss rate: 70%)
apps report:
nic
Intel1g device 0000:01:00.0: pullTxLinkFull: 0 pull: 43558 pushTxRingFull: 9390 txBytes: 77264160 rxPackets: 1285792 pushRxLinkEmpty: 0 txPackets: 1287736 pullNoTxLink: 0 push: 43558 rxBytes: 77147520
selftest: ok
stop_receive(): lostSeq 0
Stats from NIC registers:
Rx Packets= 1287596 Octets= 82406117
Tx Packets= 1287596 Octets= 82406144
Rx Good Packets= 1287595
Rx No Buffers= 0
Rx Packets to Host=1287596
Stats from counters:
Intel1g device 0000:01:00.0: pullTxLinkFull: 0 pull: 43558 pushTxRingFull: 9390 txBytes: 77264160 rxPackets: 1285792 pushRxLinkEmpty: 0 txPackets: 1287736 pullNoTxLink: 0 push: 43558 rxBytes: 77147520
source: txpackets= 1288728 rxpackets= 1287736 txdrop= 3154188
input link: txpackets= 1288728 rxpackets= 1287736 txdrop= 3154188
output link: txpackets= 1285792 rxpackets= 1285792 txdrop= 0
sink: txpackets= 1285792 rxpackets= 1285792 txdrop= 0
Processed 1.3 M 60 Byte packets in 1.00 s (rate: 1.3 Mpps, 0.58 Gbit/s, 0.23 % packet loss).
src/testlog/apps.intel.intel_app:
selftest: intel_app
100 VF initializations:
Running iterated VMDq test...
test # 1: VMDq VLAN=101; 100ms burst. packet sent: 546,822
test # 2: VMDq VLAN=102; 100ms burst. packet sent: 1,255,926
test # 3: VMDq VLAN=103; 100ms burst. packet sent: 1,955,238
test # 4: VMDq VLAN=104; 100ms burst. packet sent: 2,658,426
test # 5: VMDq VLAN=105; 100ms burst. packet sent: 3,356,922
test # 6: VMDq VLAN=106; 100ms burst. packet sent: 4,051,032
test # 7: VMDq VLAN=107; 100ms burst. packet sent: 4,739,226
test # 8: VMDq VLAN=108; 100ms burst. packet sent: 5,430,480
test # 9: VMDq VLAN=109; 100ms burst. packet sent: 6,109,188
test # 10: VMDq VLAN=110; 100ms burst. packet sent: 6,785,448
test # 11: VMDq VLAN=111; 100ms burst. packet sent: 7,455,996
test # 12: VMDq VLAN=112; 100ms burst. packet sent: 8,120,934
test # 13: VMDq VLAN=113; 100ms burst. packet sent: 8,780,058
test # 14: VMDq VLAN=114; 100ms burst. packet sent: 9,437,652
test # 15: VMDq VLAN=115; 100ms burst. packet sent: 10,091,370
test # 16: VMDq VLAN=116; 100ms burst. packet sent: 10,751,820
test # 17: VMDq VLAN=117; 100ms burst. packet sent: 11,398,398
test # 18: VMDq VLAN=118; 100ms burst. packet sent: 12,039,978
test # 19: VMDq VLAN=119; 100ms burst. packet sent: 12,678,498
test # 20: VMDq VLAN=120; 100ms burst. packet sent: 13,313,448
test # 21: VMDq VLAN=121; 100ms burst. packet sent: 13,944,930
test # 22: VMDq VLAN=122; 100ms burst. packet sent: 14,572,128
test # 23: VMDq VLAN=123; 100ms burst. packet sent: 15,197,694
test # 24: VMDq VLAN=124; 100ms burst. packet sent: 15,819,282
test # 25: VMDq VLAN=125; 100ms burst. packet sent: 16,433,526
test # 26: VMDq VLAN=126; 100ms burst. packet sent: 17,043,078
test # 27: VMDq VLAN=127; 100ms burst. packet sent: 17,645,694
test # 28: VMDq VLAN=128; 100ms burst. packet sent: 18,240,660
test # 29: VMDq VLAN=129; 100ms burst. packet sent: 18,829,608
test # 30: VMDq VLAN=130; 100ms burst. packet sent: 19,417,230
test # 31: VMDq VLAN=131; 100ms burst. packet sent: 20,002,812
test # 32: VMDq VLAN=132; 100ms burst. packet sent: 20,597,574
test # 33: VMDq VLAN=133; 100ms burst. packet sent: 21,177,138
test # 34: VMDq VLAN=134; 100ms burst. packet sent: 21,759,252
test # 35: VMDq VLAN=135; 100ms burst. packet sent: 22,336,164
test # 36: VMDq VLAN=136; 100ms burst. packet sent: 22,912,770
test # 37: VMDq VLAN=137; 100ms burst. packet sent: 23,489,886
test # 38: VMDq VLAN=138; 100ms burst. packet sent: 24,081,180
test # 39: VMDq VLAN=139; 100ms burst. packet sent: 24,670,740
test # 40: VMDq VLAN=140; 100ms burst. packet sent: 25,259,790
test # 41: VMDq VLAN=141; 100ms burst. packet sent: 25,845,780
test # 42: VMDq VLAN=142; 100ms burst. packet sent: 26,432,790
test # 43: VMDq VLAN=143; 100ms burst. packet sent: 27,015,210
test # 44: VMDq VLAN=144; 100ms burst. packet sent: 27,596,100
test # 45: VMDq VLAN=145; 100ms burst. packet sent: 28,173,522
test # 46: VMDq VLAN=146; 100ms burst. packet sent: 28,748,088
test # 47: VMDq VLAN=147; 100ms burst. packet sent: 29,319,900
test # 48: VMDq VLAN=148; 100ms burst. packet sent: 29,892,834
test # 49: VMDq VLAN=149; 100ms burst. packet sent: 30,460,770
test # 50: VMDq VLAN=150; 100ms burst. packet sent: 31,027,890
test # 51: VMDq VLAN=151; 100ms burst. packet sent: 31,600,926
test # 52: VMDq VLAN=152; 100ms burst. packet sent: 32,164,884
test # 53: VMDq VLAN=153; 100ms burst. packet sent: 32,725,782
test # 54: VMDq VLAN=154; 100ms burst. packet sent: 33,286,884
test # 55: VMDq VLAN=155; 100ms burst. packet sent: 33,844,722
test # 56: VMDq VLAN=156; 100ms burst. packet sent: 34,401,030
test # 57: VMDq VLAN=157; 100ms burst. packet sent: 34,954,380
test # 58: VMDq VLAN=158; 100ms burst. packet sent: 35,506,608
test # 59: VMDq VLAN=159; 100ms burst. packet sent: 36,056,082
test # 60: VMDq VLAN=160; 100ms burst. packet sent: 36,605,964
test # 61: VMDq VLAN=161; 100ms burst. packet sent: 37,153,194
test # 62: VMDq VLAN=162; 100ms burst. packet sent: 37,698,894
test # 63: VMDq VLAN=163; 100ms burst. packet sent: 38,243,472
test # 64: VMDq VLAN=164; 100ms burst. packet sent: 38,788,764
test # 65: VMDq VLAN=165; 100ms burst. packet sent: 39,324,672
test # 66: VMDq VLAN=166; 100ms burst. packet sent: 39,855,276
test # 67: VMDq VLAN=167; 100ms burst. packet sent: 40,377,414
test # 68: VMDq VLAN=168; 100ms burst. packet sent: 40,904,244
test # 69: VMDq VLAN=169; 100ms burst. packet sent: 41,422,710
test # 70: VMDq VLAN=170; 100ms burst. packet sent: 41,938,422
test # 71: VMDq VLAN=171; 100ms burst. packet sent: 42,452,502
test # 72: VMDq VLAN=172; 100ms burst. packet sent: 42,966,072
test # 73: VMDq VLAN=173; 100ms burst. packet sent: 43,476,174
test # 74: VMDq VLAN=174; 100ms burst. packet sent: 43,986,888
test # 75: VMDq VLAN=175; 100ms burst. packet sent: 44,495,868
test # 76: VMDq VLAN=176; 100ms burst. packet sent: 45,002,502
test # 77: VMDq VLAN=177; 100ms burst. packet sent: 45,506,892
test # 78: VMDq VLAN=178; 100ms burst. packet sent: 46,011,384
test # 79: VMDq VLAN=179; 100ms burst. packet sent: 46,512,612
test # 80: VMDq VLAN=180; 100ms burst. packet sent: 47,017,308
test # 81: VMDq VLAN=181; 100ms burst. packet sent: 47,532,102
test # 82: VMDq VLAN=182; 100ms burst. packet sent: 48,045,162
test # 83: VMDq VLAN=183; 100ms burst. packet sent: 48,557,100
test # 84: VMDq VLAN=184; 100ms burst. packet sent: 49,066,488
test # 85: VMDq VLAN=185; 100ms burst. packet sent: 49,574,346
test # 86: VMDq VLAN=186; 100ms burst. packet sent: 50,079,654
test # 87: VMDq VLAN=187; 100ms burst. packet sent: 50,581,902
test # 88: VMDq VLAN=188; 100ms burst. packet sent: 51,083,742
test # 89: VMDq VLAN=189; 100ms burst. packet sent: 51,584,256
test # 90: VMDq VLAN=190; 100ms burst. packet sent: 52,083,750
test # 91: VMDq VLAN=191; 100ms burst. packet sent: 52,583,142
test # 92: VMDq VLAN=192; 100ms burst. packet sent: 53,077,842
test # 93: VMDq VLAN=193; 100ms burst. packet sent: 53,569,074
test # 94: VMDq VLAN=194; 100ms burst. packet sent: 54,061,122
test # 95: VMDq VLAN=195; 100ms burst. packet sent: 54,548,886
test # 96: VMDq VLAN=196; 100ms burst. packet sent: 55,038,180
test # 97: VMDq VLAN=197; 100ms burst. packet sent: 55,525,536
test # 98: VMDq VLAN=198; 100ms burst. packet sent: 56,010,648
test # 99: VMDq VLAN=199; 100ms burst. packet sent: 56,493,822
test #100: VMDq VLAN=200; 100ms burst. packet sent: 56,974,854
0000:03:00.0: avg wait_lu: 232, max redos: 0, avg: 0
100 PF full cycles
Running iterated VMDq test...
test # 1: VMDq VLAN=101; 100ms burst. packet sent: 707,064
test # 2: VMDq VLAN=102; 100ms burst. packet sent: 696,354
test # 3: VMDq VLAN=103; 100ms burst. packet sent: 681,258
test # 4: VMDq VLAN=104; 100ms burst. packet sent: 689,214
test # 5: VMDq VLAN=105; 100ms burst. packet sent: 686,664
test # 6: VMDq VLAN=106; 100ms burst. packet sent: 679,320
test # 7: VMDq VLAN=107; 100ms burst. packet sent: 392,802
test # 8: VMDq VLAN=108; 100ms burst. packet sent: 475,014
test # 9: VMDq VLAN=109; 100ms burst. packet sent: 489,498
test # 10: VMDq VLAN=110; 100ms burst. packet sent: 499,698
test # 11: VMDq VLAN=111; 100ms burst. packet sent: 505,614
test # 12: VMDq VLAN=112; 100ms burst. packet sent: 490,722
test # 13: VMDq VLAN=113; 100ms burst. packet sent: 480,216
test # 14: VMDq VLAN=114; 100ms burst. packet sent: 480,216
test # 15: VMDq VLAN=115; 100ms burst. packet sent: 496,230
test # 16: VMDq VLAN=116; 100ms burst. packet sent: 512,754
test # 17: VMDq VLAN=117; 100ms burst. packet sent: 491,946
test # 18: VMDq VLAN=118; 100ms burst. packet sent: 457,980
test # 19: VMDq VLAN=119; 100ms burst. packet sent: 455,838
test # 20: VMDq VLAN=120; 100ms burst. packet sent: 490,314
test # 21: VMDq VLAN=121; 100ms burst. packet sent: 495,210
test # 22: VMDq VLAN=122; 100ms burst. packet sent: 481,542
test # 23: VMDq VLAN=123; 100ms burst. packet sent: 491,844
test # 24: VMDq VLAN=124; 100ms burst. packet sent: 477,156
test # 25: VMDq VLAN=125; 100ms burst. packet sent: 482,358
test # 26: VMDq VLAN=126; 100ms burst. packet sent: 482,256
test # 27: VMDq VLAN=127; 100ms burst. packet sent: 510,204
test # 28: VMDq VLAN=128; 100ms burst. packet sent: 481,848
test # 29: VMDq VLAN=129; 100ms burst. packet sent: 474,198
test # 30: VMDq VLAN=130; 100ms burst. packet sent: 477,870
test # 31: VMDq VLAN=131; 100ms burst. packet sent: 489,906
test # 32: VMDq VLAN=132; 100ms burst. packet sent: 471,852
test # 33: VMDq VLAN=133; 100ms burst. packet sent: 459,612
test # 34: VMDq VLAN=134; 100ms burst. packet sent: 475,626
test # 35: VMDq VLAN=135; 100ms burst. packet sent: 487,866
test # 36: VMDq VLAN=136; 100ms burst. packet sent: 494,598
test # 37: VMDq VLAN=137; 100ms burst. packet sent: 484,398
test # 38: VMDq VLAN=138; 100ms burst. packet sent: 489,600
test # 39: VMDq VLAN=139; 100ms burst. packet sent: 484,398
test # 40: VMDq VLAN=140; 100ms burst. packet sent: 481,236
test # 41: VMDq VLAN=141; 100ms burst. packet sent: 485,316
test # 42: VMDq VLAN=142; 100ms burst. packet sent: 487,764
test # 43: VMDq VLAN=143; 100ms burst. packet sent: 486,336
test # 44: VMDq VLAN=144; 100ms burst. packet sent: 495,822
test # 45: VMDq VLAN=145; 100ms burst. packet sent: 488,682
test # 46: VMDq VLAN=146; 100ms burst. packet sent: 476,238
test # 47: VMDq VLAN=147; 100ms burst. packet sent: 437,172
test # 48: VMDq VLAN=148; 100ms burst. packet sent: 651,780
test # 49: VMDq VLAN=149; 100ms burst. packet sent: 698,700
test # 50: VMDq VLAN=150; 100ms burst. packet sent: 707,574
test # 51: VMDq VLAN=151; 100ms burst. packet sent: 477,054
test # 52: VMDq VLAN=152; 100ms burst. packet sent: 478,992
test # 53: VMDq VLAN=153; 100ms burst. packet sent: 478,380
test # 54: VMDq VLAN=154; 100ms burst. packet sent: 485,622
test # 55: VMDq VLAN=155; 100ms burst. packet sent: 475,320
test # 56: VMDq VLAN=156; 100ms burst. packet sent: 490,008
test # 57: VMDq VLAN=157; 100ms burst. packet sent: 480,624
test # 58: VMDq VLAN=158; 100ms burst. packet sent: 486,642
test # 59: VMDq VLAN=159; 100ms burst. packet sent: 486,540
test # 60: VMDq VLAN=160; 100ms burst. packet sent: 478,176
test # 61: VMDq VLAN=161; 100ms burst. packet sent: 451,452
test # 62: VMDq VLAN=162; 100ms burst. packet sent: 449,514
test # 63: VMDq VLAN=163; 100ms burst. packet sent: 480,522
test # 64: VMDq VLAN=164; 100ms burst. packet sent: 484,500
test # 65: VMDq VLAN=165; 100ms burst. packet sent: 476,952
test # 66: VMDq VLAN=166; 100ms burst. packet sent: 486,642
test # 67: VMDq VLAN=167; 100ms burst. packet sent: 481,440
test # 68: VMDq VLAN=168; 100ms burst. packet sent: 488,580
test # 69: VMDq VLAN=169; 100ms burst. packet sent: 478,788
test # 70: VMDq VLAN=170; 100ms burst. packet sent: 480,828
test # 71: VMDq VLAN=171; 100ms burst. packet sent: 479,604
test # 72: VMDq VLAN=172; 100ms burst. packet sent: 479,400
test # 73: VMDq VLAN=173; 100ms burst. packet sent: 478,380
test # 74: VMDq VLAN=174; 100ms burst. packet sent: 474,096
test # 75: VMDq VLAN=175; 100ms burst. packet sent: 455,226
test # 76: VMDq VLAN=176; 100ms burst. packet sent: 451,044
test # 77: VMDq VLAN=177; 100ms burst. packet sent: 486,132
test # 78: VMDq VLAN=178; 100ms burst. packet sent: 483,276
test # 79: VMDq VLAN=179; 100ms burst. packet sent: 491,334
test # 80: VMDq VLAN=180; 100ms burst. packet sent: 483,276
test # 81: VMDq VLAN=181; 100ms burst. packet sent: 483,174
test # 82: VMDq VLAN=182; 100ms burst. packet sent: 483,990
test # 83: VMDq VLAN=183; 100ms burst. packet sent: 654,126
test # 84: VMDq VLAN=184; 100ms burst. packet sent: 702,984
test # 85: VMDq VLAN=185; 100ms burst. packet sent: 477,870
test # 86: VMDq VLAN=186; 100ms burst. packet sent: 482,154
test # 87: VMDq VLAN=187; 100ms burst. packet sent: 473,076
test # 88: VMDq VLAN=188; 100ms burst. packet sent: 470,628
test # 89: VMDq VLAN=189; 100ms burst. packet sent: 460,020
test # 90: VMDq VLAN=190; 100ms burst. packet sent: 447,576
test # 91: VMDq VLAN=191; 100ms burst. packet sent: 474,402
test # 92: VMDq VLAN=192; 100ms burst. packet sent: 481,134
test # 93: VMDq VLAN=193; 100ms burst. packet sent: 472,464
test # 94: VMDq VLAN=194; 100ms burst. packet sent: 479,400
test # 95: VMDq VLAN=195; 100ms burst. packet sent: 480,012
test # 96: VMDq VLAN=196; 100ms burst. packet sent: 478,074
test # 97: VMDq VLAN=197; 100ms burst. packet sent: 477,156
test # 98: VMDq VLAN=198; 100ms burst. packet sent: 486,438
test # 99: VMDq VLAN=199; 100ms burst. packet sent: 480,318
test #100: VMDq VLAN=200; 100ms burst. packet sent: 478,176
0000:03:00.0: avg wait_lu: 160.6, max redos: 0, avg: 0
-------
Send a bunch of packets from Am0
half of them go to nicAm1 and half go nowhere
link report:
0 sent on nicAm0.tx -> sink_ms.in1 (loss rate: 0%)
2,448,867 sent on nicAm1.tx -> sink_ms.in2 (loss rate: 0%)
4,897,938 sent on repeater_ms.output -> nicAm0.rx (loss rate: 0%)
2 sent on source_ms.output -> repeater_ms.input (loss rate: 0%)
-------
Transmitting bidirectionally between nicA and nicB
link report:
1,442,381 sent on nicA.tx -> sink.in1 (loss rate: 0%)
1,442,279 sent on nicB.tx -> sink.in2 (loss rate: 0%)
1,442,586 sent on source1.out -> nicA.rx (loss rate: 0%)
1,442,586 sent on source2.out -> nicB.rx (loss rate: 0%)
-------
Send traffic from a nicA (SF) to nicB (two VFs)
The packets should arrive evenly split between the VFs
link report:
0 sent on nicAs.tx -> sink_ms.in1 (loss rate: 0%)
1,526,481 sent on nicBm0.tx -> sink_ms.in2 (loss rate: 0%)
1,526,532 sent on nicBm1.tx -> sink_ms.in3 (loss rate: 0%)
3,053,268 sent on repeater_ms.output -> nicAs.rx (loss rate: 0%)
2 sent on source_ms.output -> repeater_ms.input (loss rate: 0%)
selftest: ok
src/testlog/apps.ipv6.nd_light:
Oct 26 2016 23:37:23 nd_light: Sending neighbor solicitation for next-hop 2001:db8::1
Oct 26 2016 23:37:23 nd_light: Sending neighbor solicitation for next-hop 2001:db8::2
Oct 26 2016 23:37:23 nd_light: Resolved next-hop 2001:db8::1 to 00:00:00:00:00:01
Oct 26 2016 23:37:23 nd_light: Resolved next-hop 2001:db8::2 to 00:00:00:00:00:02
Oct 26 2016 23:37:24 nd_light: Sending neighbor solicitation for next-hop 2001:db8::1
Oct 26 2016 23:37:24 nd_light: Sending neighbor solicitation for next-hop 2001:db8::2
src/testlog/apps.keyed_ipv6_tunnel.tunnel:
Keyed IPv6 tunnel selftest
link report:
102 sent on comparator.output -> match.comparator (loss rate: 0%)
102 sent on source.output -> tunnel.decapsulated (loss rate: 0%)
102 sent on tunnel.decapsulated -> match.rx (loss rate: 0%)
102 sent on tunnel.encapsulated -> tunnel.encapsulated (loss rate: 0%)
apps report:
match
run simple one second benchmark ...
selftest passed
src/testlog/apps.lwaftr.binding_table:
selftest: binding_table
loading compiled binding table from /tmp/lua_oIO6Vj
loading compiled binding table from /tmp/lua_9rUt2y
ok
src/testlog/apps.lwaftr.channel:
selftest: channel
selftest: channel ok
src/testlog/apps.lwaftr.conf:
selftest: conf
ok
src/testlog/apps.lwaftr.fragmentv4_test:
test: lwaftr.fragmentv4.fragment_ipv4
test: payload=1200 mtu=1500
test: payload=1200 mtu=1000
test: payload=1200 mtu=400
test: packet with "don't fragment" flag
test: length=1046 mtu=520 + reassembly
test: vlan tagging
test: lwaftr.fragmentv4.reassemble_ipv4 (no vlan tag)
test: no reassembly needed (single packet)
test: two fragments (one missing)
test: three fragments (one/two missing)
test: payload=1200 mtu=1000
test: payload=1000 mtu=400
test: lwaftr.fragmentv4.reassemble_ipv4 (vlan id=42)
test: no reassembly needed (single packet)
test: two fragments (one missing)
test: three fragments (one/two missing)
test: payload=1200 mtu=1000
test: payload=1000 mtu=400
src/testlog/apps.lwaftr.lwdebug:
src/testlog/apps.lwaftr.ndp:
src/testlog/apps.lwaftr.podhashmap:
No PMU available: single core cpu affinity required
jenkins hash: 4.96 ns per iteration (result: 2964940424)
WARNING: perfmark failed: exceeded maximum ns 4
murmur hash (32 bit): 8.69 ns per iteration (result: 2347483648)
WARNING: perfmark failed: exceeded maximum ns 8
insertion (40% occupancy): 180.66 ns per iteration (result: nil)
WARNING: perfmark failed: exceeded maximum ns 100
max displacement: 8
selfcheck: pass
population check: pass
lookup (40% occupancy): 116.39 ns per iteration (result: 1975650)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=1: 171.19 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=2: 136.66 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=4: 144.98 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=8: 125.74 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=16: 123.68 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=32: 118.58 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=64: 112.61 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=128: 111.90 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
streaming lookup, stride=256: 113.33 ns per iteration (result: -2000001)
WARNING: perfmark failed: exceeded maximum ns 100
lookup (40% occupancy): 117.04 ns per iteration (result: 1975650)
WARNING: perfmark failed: exceeded maximum ns 100
src/testlog/apps.lwaftr.rangemap:
No PMU available: single core cpu affinity required
lookup: 11.26 ns per iteration (result: 0)
WARNING: perfmark failed: exceeded maximum ns 10
src/testlog/apps.packet_filter.pcap_filter:
selftest: pcap_filter
Run for 1 second (stateful = false)...
link report:
264,474 sent on pcap_filter.output -> sink.input (loss rate: 0%)
7,096,752 sent on repeater.output -> pcap_filter.input (loss rate: 0%)
161 sent on source.output -> repeater.input (loss rate: 0%)
ok: accepted 3.7267% of inputs (within tolerance)
Run for 1 second (stateful = true)...
link report:
291,551 sent on pcap_filter.output -> sink.input (loss rate: 0%)
3,911,700 sent on repeater.output -> pcap_filter.input (loss rate: 0%)
161 sent on source.output -> repeater.input (loss rate: 0%)
ok: accepted 7.4533% of inputs (within tolerance)
selftest: ok
src/testlog/apps.rate_limiter.rate_limiter:
Rate limiter selftest
test effective rate, non-busy loop
load: time: 1.00s fps: 1,324,064 fpGbps: 0.731 fpb: 102 bpp: 60 sleep: 0 us
load: time: 1.00s fps: 1,325,696 fpGbps: 0.732 fpb: 102 bpp: 60 sleep: 0 us
load: time: 1.00s fps: 1,324,061 fpGbps: 0.731 fpb: 102 bpp: 60 sleep: 0 us
load: time: 1.00s fps: 1,326,006 fpGbps: 0.732 fpb: 102 bpp: 60 sleep: 0 us
load: time: 0.00s fps: NaN fpGbps: nan fpb: NaN bpp: - sleep: 0 us
configured rate is 200000 bytes per second
effective rate is 209993 bytes per second
measure throughput on heavy load...
elapsed time 0.402117371 seconds
packets received 10200000 25 Mpps
configured rate is 1200000000 bytes per second
effective rate is 1497334321 bytes per second
throughput is 24 Mpps
selftest passed
src/testlog/apps.socket.raw:
link report:
1 sent on lo.tx -> match.rx (loss rate: 0%)
apps report:
match
selftest passed
src/testlog/apps.socket.unix:
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
client tx: 420 hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello hello
(18 identical "client tx" lines elided)
link report:
86 sent on client.tx -> print_client_tx.rx (loss rate: 0%)
7,113 sent on say_hello.tx -> client.rx (loss rate: 20%)
87 sent on server.tx -> server.rx (loss rate: 0%)
src/testlog/apps.tap.tap:
EXITCODE: 43
src/testlog/apps.test.match:
load: time: 0.00s fps: 69,841 fpGbps: 0.031 fpb: 205 bpp: 8 sleep: 0 us
load: time: 0.02s fps: 4,701 fpGbps: 0.002 fpb: 104 bpp: 8 sleep: 0 us
load: time: 0.01s fps: 60,588 fpGbps: 0.027 fpb: 308 bpp: 9 sleep: 0 us
src/testlog/apps.test.synth:
link report:
6 sent on reader.output -> match.comparator (loss rate: 0%)
306 sent on synth.output -> match.rx (loss rate: 0%)
apps report:
match
src/testlog/apps.vhost.vhost_user:
selftest: vhost_user
SNABB_TEST_VHOST_USER_SOCKET was not set
Test skipped
EXITCODE: 43
src/testlog/apps.virtio_net.virtio_net:
SNABB_TEST_VIRTIO_PCIDEV was not set
Test skipped
EXITCODE: 43
src/testlog/apps.vlan.vlan:
source sent: 13913004
sink received: 13913004
Successfully tagged/untagged all potential VLAN tags (0-4095)
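The sweep above tags and untags every possible 802.1Q VLAN ID. As a rough illustration of what that round trip involves, here is a Python sketch of the 4-byte 802.1Q tag format (an illustration only, not Snabb's Lua implementation):

```python
import struct

TPID_DOT1Q = 0x8100  # 802.1Q Tag Protocol Identifier

def add_vlan_tag(frame, vid, pcp=0, dei=0):
    """Insert a 4-byte 802.1Q tag after the destination and source MACs."""
    assert 0 <= vid <= 4095
    tci = (pcp << 13) | (dei << 12) | vid
    return frame[:12] + struct.pack("!HH", TPID_DOT1Q, tci) + frame[12:]

def strip_vlan_tag(frame):
    """Remove the tag and return (vid, untagged_frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID_DOT1Q
    return tci & 0x0FFF, frame[:12] + frame[16:]

# round-trip every possible VLAN ID, mirroring the 0-4095 sweep in the log
untagged = bytes(12) + b"\x08\x00" + bytes(46)  # dummy Ethernet II frame
for vid in range(4096):
    got_vid, got_frame = strip_vlan_tag(add_vlan_tag(untagged, vid))
    assert got_vid == vid and got_frame == untagged
```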
src/testlog/core.app:
Restarting app2 (died at 7812707.974833: core/app.lua:505: Push error.)
Restarting app1 (died at 7812707.974833: core/app.lua:499: Pull error.)
Restarting app2 (died at 7812709.974925: core/app.lua:505: Push error.)
Restarting app1 (died at 7812709.974925: core/app.lua:499: Pull error.)
Restarting app3 (died at 7812711.973902: core/app.lua:511: Report error.)
Restarting app2 (died at 7812711.975061: core/app.lua:505: Push error.)
Restarting app1 (died at 7812711.975061: core/app.lua:499: Pull error.)
selftest: app
empty -> c1
c1 -> c1
c1 -> c2
c2 -> c1
c1 -> empty
c_fail
apps report:
app3
app2 [dead: core/app.lua:505: Push error.]
app1 [dead: core/app.lua:499: Pull error.]
apps report:
app3
app2 [dead: core/app.lua:505: Push error.]
app1 [dead: core/app.lua:499: Pull error.]
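The core.app selftest above exercises the engine's restart-on-error policy: apps that raise from pull/push/report are listed as dead and later restarted. A minimal Python sketch of that supervision idea (purely illustrative, not the engine's actual code):

```python
import time

class CrashyApp:
    """Toy app whose push() always fails, mimicking 'Push error.' in the log."""
    def push(self):
        raise RuntimeError("Push error.")

def breathe(apps, dead):
    """One engine iteration: run live apps, record the ones that crash."""
    for name, app in apps.items():
        if name in dead:
            continue
        try:
            app.push()
        except Exception as exc:
            dead[name] = (time.monotonic(), exc)
            print("%s [dead: %s]" % (name, exc))

def restart_dead(apps, dead, grace=2.0):
    """Restart apps that have been dead for at least the grace period."""
    now = time.monotonic()
    for name, (died_at, exc) in list(dead.items()):
        if now - died_at >= grace:
            print("Restarting %s (died at %f: %s)" % (name, died_at, exc))
            apps[name] = type(apps[name])()   # fresh instance replaces the dead one
            del dead[name]

apps, dead = {"app2": CrashyApp()}, {}
breathe(apps, dead)                  # app2 [dead: Push error.]
restart_dead(apps, dead, grace=0.0)  # Restarting app2 (...)
```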
src/testlog/core.counter:
selftest: core.counter
selftest ok
src/testlog/core.histogram:
selftest: histogram
selftest ok
src/testlog/core.lib:
selftest: lib
Testing equal
Testing load_string
Testing load/store_conf
Testing csum
Testing hex(un)dump
Testing ntohl
Testing parse
src/testlog/core.link:
selftest: link
selftest OK
src/testlog/core.main:
selftest
src/testlog/core.memory:
selftest: memory
Kernel vm.nr_hugepages: 4096
Allocating a 2MB HugeTLB: Got 2MB
Physical address: 0x00076ee00000
Virtual address: 0x50076ee00000
Allocating a 2MB HugeTLB: Got 2MB
Physical address: 0x00076f000000
Virtual address: 0x50076f000000
Allocating a 2MB HugeTLB: Got 2MB
Physical address: 0x00076f200000
Virtual address: 0x50076f200000
Allocating a 2MB HugeTLB: Got 2MB
Physical address: 0x00076f400000
Virtual address: 0x50076f400000
Kernel vm.nr_hugepages: 4096
HugeTLB page allocation OK.
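core.memory allocates 2 MB huge pages and reports the kernel's vm.nr_hugepages setting before and after. The allocation itself goes through mmap/FFI inside Snabb and is not reproduced here; a small Python sketch for inspecting the same kernel hugepage provisioning on a Linux host:

```python
def read_sysctl(name):
    """Read a kernel sysctl such as vm.nr_hugepages via /proc/sys."""
    with open("/proc/sys/" + name.replace(".", "/")) as f:
        return int(f.read())

def hugepage_info():
    """Return (total pages, free pages, page size in kB) from /proc/meminfo."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])
    return fields["HugePages_Total"], fields["HugePages_Free"], fields["Hugepagesize"]

print("Kernel vm.nr_hugepages:", read_sysctl("vm.nr_hugepages"))
print("total/free/size(kB):", hugepage_info())
```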
src/testlog/core.shm:
selftest: shm
checking resolve..
checking shared memory..
create shm/selftest/obj
checking exists..
checking many objects..
10000 objects created
10000 objects unmapped
selftest ok
src/testlog/core.timer:
selftest: timer
ok (973,855 callbacks in 0.2888 seconds)
src/testlog/lib.checksum:
selftest: checksum
no avx2
sse2: 1000/1000
selftest: tcp/ipv4
selftest: ok
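lib.checksum verifies the SSE2 (and, when present, AVX2) checksum kernels against a reference implementation over 1000 random inputs. For comparison, a plain-Python reference of the RFC 1071 ones-complement Internet checksum (a sketch, not Snabb's SIMD code):

```python
def internet_checksum(data):
    """RFC 1071 ones-complement sum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"              # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total > 0xFFFF:            # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# a header whose checksum field holds this value sums to 0xFFFF on verification
assert internet_checksum(b"\x00" * 20) == 0xFFFF
```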
src/testlog/lib.ctable:
selftest: ctable
selftest: ok
src/testlog/lib.hardware.pci:
selftest: pci
pciaddress model interface status driver usable
01:00.0 Intel 350 - - apps.intel.intel1g yes
01:00.1 Intel 350 - - apps.intel.intel1g yes
03:00.0 Intel 82599 SFP - - apps.intel.intel_app yes
03:00.1 Intel 82599 SFP - - apps.intel.intel_app yes
05:00.0 Intel 350 - - apps.intel.intel1g yes
05:00.1 Intel 350 - - apps.intel.intel1g yes
0a:00.0 Intel 350 - - apps.intel.intel1g yes
0a:00.1 Intel 350 - - apps.intel.intel1g yes
81:00.0 Intel 82574L - - apps.intel.intel_app yes
82:00.0 Intel 82599 SFP - - apps.intel.intel_app yes
82:00.1 Intel 82599 SFP - - apps.intel.intel_app yes
84:00.0 Intel 82599 SFP - - apps.intel.intel_app yes
84:00.1 Intel 82599 SFP - - apps.intel.intel_app yes
86:00.0 Intel 82599 SFP - - apps.intel.intel_app yes
86:00.1 Intel 82599 SFP - - apps.intel.intel_app yes
88:00.0 SFN7122F - - apps.solarflare.solarflare yes
88:00.1 SFN7122F - - apps.solarflare.solarflare yes
89:00.0 Intel 82599 T3 - - apps.intel.intel_app yes
89:00.1 Intel 82599 T3 - - apps.intel.intel_app yes
src/testlog/lib.hash.murmur:
Selftest hash MurmurHash3_x86_32
Passed
Selftest hash MurmurHash3_x64_128
Passed
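lib.hash.murmur checks MurmurHash3_x86_32 and MurmurHash3_x64_128 against known answers. For cross-checking comparable digests outside Snabb, the third-party mmh3 Python package (an assumption here, it is not part of Snabb) exposes the same variants:

```python
import mmh3  # pip install mmh3 -- assumed available, not part of Snabb

key, seed = b"hello", 0
h32 = mmh3.hash(key, seed, signed=False)       # MurmurHash3_x86_32
h128 = mmh3.hash128(key, seed, x64arch=True)   # MurmurHash3_x64_128
print(hex(h32), hex(h128))
```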
src/testlog/lib.ipc.shmem.shmem:
ok
src/testlog/lib.ipsec.aes_128_gcm:
Test vector: 1
ctext+tag FE CF 53 7E 72 9D 5B 07 DC 30 DF 52 8D D2 2B 76 8D 1B 98 73 66 96 A6 FD 34 85 09 FA 13 CE AC 34 CF A2 43 6F 14 A3 F3 CF 65 92 5B F1 F4 A1 3C 5D 15 B2 1E 18 84 F5 FF 62 47 AE AB B7 86 B9 3B CE 61 BC 17 D7 68 FD 97 32 45 90 18 14 8F 6C BE 72 2F D0 47 96 56 2D FD B4
is FE CF 53 7E 72 9D 5B 07 DC 30 DF 52 8D D2 2B 76 8D 1B 98 73 66 96 A6 FD 34 85 09 FA 13 CE AC 34 CF A2 43 6F 14 A3 F3 CF 65 92 5B F1 F4 A1 3C 5D 15 B2 1E 18 84 F5 FF 62 47 AE AB B7 86 B9 3B CE 61 BC 17 D7 68 FD 97 32 45 90 18 14 8F 6C BE 72 2F D0 47 96 56 2D FD B4
plaintext 45 00 00 48 69 9A 00 00 80 11 4D B7 C0 A8 01 02 C0 A8 01 01 0A 9B F1 56 38 D3 01 00 00 01 00 00 00 00 00 00 04 5F 73 69 70 04 5F 75 64 70 03 73 69 70 09 63 79 62 65 72 63 69 74 79 02 64 6B 00 00 21 00 01 01 02 02 01
is 45 00 00 48 69 9A 00 00 80 11 4D B7 C0 A8 01 02 C0 A8 01 01 0A 9B F1 56 38 D3 01 00 00 01 00 00 00 00 00 00 04 5F 73 69 70 04 5F 75 64 70 03 73 69 70 09 63 79 62 65 72 63 69 74 79 02 64 6B 00 00 21 00 01 01 02 02 01
Test vector: 2
ctext+tag FB A2 CA A4 85 3C F9 F0 F2 2C B1 0D 86 DD 83 B0 FE C7 56 91 CF 1A 04 B0 0D 11 38 EC 9C 35 79 17 65 AC BD 87 01 AD 79 84 5B F9 FE 3F BA 48 7B C9 17 55 E6 66 2B 4C 8D 0D 1F 5E 22 73 95 30 32 0A E0 D7 31 CC 97 8E CA FA EA E8 8F 00 E8 0D 6E 48
is FB A2 CA A4 85 3C F9 F0 F2 2C B1 0D 86 DD 83 B0 FE C7 56 91 CF 1A 04 B0 0D 11 38 EC 9C 35 79 17 65 AC BD 87 01 AD 79 84 5B F9 FE 3F BA 48 7B C9 17 55 E6 66 2B 4C 8D 0D 1F 5E 22 73 95 30 32 0A E0 D7 31 CC 97 8E CA FA EA E8 8F 00 E8 0D 6E 48
plaintext 45 00 00 3C 99 C3 00 00 80 01 CB 7C 40 67 93 18 01 01 01 01 08 00 08 5C 02 00 43 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 61 62 63 64 65 66 67 68 69 01 02 02 01
is 45 00 00 3C 99 C3 00 00 80 01 CB 7C 40 67 93 18 01 01 01 01 08 00 08 5C 02 00 43 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 61 62 63 64 65 66 67 68 69 01 02 02 01
Test vector: 3
ctext+tag FB A2 CA 84 5E 5D F9 F0 F2 2C 3E 6E 86 DD 83 1E 1F C6 57 92 CD 1A F9 13 0E 13 79 ED 36 9F 07 1F 35 E0 34 BE 95 F1 12 E4 E7 D0 5D 35
is FB A2 CA 84 5E 5D F9 F0 F2 2C 3E 6E 86 DD 83 1E 1F C6 57 92 CD 1A F9 13 0E 13 79 ED 36 9F 07 1F 35 E0 34 BE 95 F1 12 E4 E7 D0 5D 35
plaintext 45 00 00 1C 42 A2 00 00 80 01 44 1F 40 67 93 B6 E0 00 00 02 0A 00 F5 FF 01 02 02 01
is 45 00 00 1C 42 A2 00 00 80 01 44 1F 40 67 93 B6 E0 00 00 02 0A 00 F5 FF 01 02 02 01
Test vector: 4
ctext+tag 18 A6 FD 42 F7 2C BF 4A B2 A2 EA 90 1F 73 D8 14 E3 E7 F2 43 D9 54 12 E1 C3 49 C1 D2 FB EC 16 8F 91 90 FE EB AF 2C B0 19 84 E6 58 63 96 5D 74 72 B7 9D A3 45 E0 E7 80 19 1F 0D 2F 0E 0F 49 6C 22 6F 21 27 B2 7D B3 57 24 E7 84 5D 68 65 1F 57 E6 5F 35 4F 75 FF 17 01 57 69 62 34 36
is 18 A6 FD 42 F7 2C BF 4A B2 A2 EA 90 1F 73 D8 14 E3 E7 F2 43 D9 54 12 E1 C3 49 C1 D2 FB EC 16 8F 91 90 FE EB AF 2C B0 19 84 E6 58 63 96 5D 74 72 B7 9D A3 45 E0 E7 80 19 1F 0D 2F 0E 0F 49 6C 22 6F 21 27 B2 7D B3 57 24 E7 84 5D 68 65 1F 57 E6 5F 35 4F 75 FF 17 01 57 69 62 34 36
plaintext 45 00 00 49 33 BA 00 00 7F 11 91 06 C3 FB 1D 10 C2 B1 D3 26 C0 28 31 CE 00 35 DD 7B 80 03 02 D5 00 00 4E 20 00 1E 8C 18 D7 5B 81 DC 91 BA A0 47 6B 91 B9 24 B2 80 38 9D 92 C9 63 BA C0 46 EC 95 9B 62 66 C0 47 22 B1 49 23 01 01 01
is 45 00 00 49 33 BA 00 00 7F 11 91 06 C3 FB 1D 10 C2 B1 D3 26 C0 28 31 CE 00 35 DD 7B 80 03 02 D5 00 00 4E 20 00 1E 8C 18 D7 5B 81 DC 91 BA A0 47 6B 91 B9 24 B2 80 38 9D 92 C9 63 BA C0 46 EC 95 9B 62 66 C0 47 22 B1 49 23 01 01 01
Test vector: 5
ctext+tag FB A2 CA D1 2F C1 F9 F0 0D 3C EB F3 05 41 0D B8 3D 77 84 B6 07 32 3D 22 0F 24 B0 A9 7D 54 18 28 00 CA DB 0F 68 D9 9E F0 E0 C0 C8 9A E9 BE A8 88 4E 52 D6 5B C1 AF D0 74 0F 74 24 44 74 7B 5B 39 AB 53 31 63 AA D4 55 0E E5 16 09 75 CD B6 08 C5 76 91 89 60 97 63 B8 E1 8C AA 81 E2
is FB A2 CA D1 2F C1 F9 F0 0D 3C EB F3 05 41 0D B8 3D 77 84 B6 07 32 3D 22 0F 24 B0 A9 7D 54 18 28 00 CA DB 0F 68 D9 9E F0 E0 C0 C8 9A E9 BE A8 88 4E 52 D6 5B C1 AF D0 74 0F 74 24 44 74 7B 5B 39 AB 53 31 63 AA D4 55 0E E5 16 09 75 CD B6 08 C5 76 91 89 60 97 63 B8 E1 8C AA 81 E2
plaintext 45 00 00 49 33 3E 00 00 7F 11 91 82 C3 FB 1D 10 C2 B1 D3 26 C0 28 31 CE 00 35 CB 45 80 03 02 5B 00 00 01 E0 00 1E 8C 18 D6 57 59 D5 22 84 A0 35 2C 71 47 5C 88 80 39 1C 76 4D 6E 5E E0 49 6B 32 5A E2 70 C0 38 99 49 39 15 01 01 01
is 45 00 00 49 33 3E 00 00 7F 11 91 82 C3 FB 1D 10 C2 B1 D3 26 C0 28 31 CE 00 35 CB 45 80 03 02 5B 00 00 01 E0 00 1E 8C 18 D6 57 59 D5 22 84 A0 35 2C 71 47 5C 88 80 39 1C 76 4D 6E 5E E0 49 6B 32 5A E2 70 C0 38 99 49 39 15 01 01 01
Test vector: 6
ctext+tag 29 C9 FC 69 A1 97 D0 38 CC DD 14 E2 DD FC AA 05 43 33 21 64 41 25 03 52 43 03 ED 3C 6C 5F 28 38 43 AF 8C 3E
is 29 C9 FC 69 A1 97 D0 38 CC DD 14 E2 DD FC AA 05 43 33 21 64 41 25 03 52 43 03 ED 3C 6C 5F 28 38 43 AF 8C 3E
plaintext 74 6F 01 62 65 01 6F 72 01 6E 6F 74 01 74 6F 01 62 65 00 01
is 74 6F 01 62 65 01 6F 72 01 6E 6F 74 01 74 6F 01 62 65 00 01
Test vector: 7
ctext+tag FB A2 CA A8 C6 C5 F9 F0 F2 2C A5 4A 06 12 10 AD 3F 6E 57 91 CF 1A CA 21 0D 11 7C EC 9C 35 79 17 65 AC BD 87 01 AD 79 84 5B F9 FE 3F BA 48 7B C9 63 21 93 06 84 EE CA DB 56 91 25 46 E7 A9 5C 97 40 D7 CB 05
is FB A2 CA A8 C6 C5 F9 F0 F2 2C A5 4A 06 12 10 AD 3F 6E 57 91 CF 1A CA 21 0D 11 7C EC 9C 35 79 17 65 AC BD 87 01 AD 79 84 5B F9 FE 3F BA 48 7B C9 63 21 93 06 84 EE CA DB 56 91 25 46 E7 A9 5C 97 40 D7 CB 05
plaintext 45 00 00 30 DA 3A 00 00 80 01 DF 3B C0 A8 00 05 C0 A8 00 01 08 00 C6 CD 02 00 07 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 01 02 02 01
is 45 00 00 30 DA 3A 00 00 80 01 DF 3B C0 A8 00 05 C0 A8 00 01 08 00 C6 CD 02 00 07 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 01 02 02 01
Test vector: 8
ctext+tag 74 75 2E 8A EB 5D 87 3C D7 C0 F4 AC C3 6C 4B FF 84 B7 D7 B9 8F 0C A8 B6 AC DA 68 94 BC 61 90 69 EF 9C BC 28 FE 1B 56 A7 C4 E0 D5 8C 86 CD 2B C0
is 74 75 2E 8A EB 5D 87 3C D7 C0 F4 AC C3 6C 4B FF 84 B7 D7 B9 8F 0C A8 B6 AC DA 68 94 BC 61 90 69 EF 9C BC 28 FE 1B 56 A7 C4 E0 D5 8C 86 CD 2B C0
plaintext 08 00 C6 CD 02 00 07 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 01 02 02 01
is 08 00 C6 CD 02 00 07 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 01 02 02 01
Encrypted 100000000 bytes in 0.18639659881592 seconds
Decrypted 100000000 bytes in 0.18623877502978 seconds
Block: 6bc1bee22e409f96e93d7e117393172a 3ad77bb40d7a3660a89ecaf32466ef97
Block: ae2d8a571e03ac9c9eb76fac45af8e51 f5d3d58503b9699de785895a96fdbaaf
Block: 30c81c46a35ce411e5fbc1191a0a52ef 43b1cd7f598ece23881b00e3ed030688
Block: f69f2445df4f9b17ad2b417be66c3710 7b0c785e27e8ad3f8223207104725dd4
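The aes_128_gcm selftest replays published AES-128-GCM test vectors (the ctext+tag and plaintext dumps above) and then benchmarks bulk encryption. A minimal sketch of checking such a vector with the pyca/cryptography AEAD API; the key, nonce and AAD below are placeholders, since the log prints only ciphertext and plaintext:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# hypothetical 128-bit key, 96-bit IV and AAD -- the real vector inputs are
# defined in the selftest source, not in this log
key   = bytes.fromhex("00112233445566778899aabbccddeeff")
nonce = bytes.fromhex("000000000000000000000001")
aad   = bytes.fromhex("00000000000000000000000000000001")
plaintext = bytes(range(32))

aesgcm = AESGCM(key)
ct_and_tag = aesgcm.encrypt(nonce, plaintext, aad)   # ciphertext || 16-byte tag
assert aesgcm.decrypt(nonce, ct_and_tag, aad) == plaintext
print("ctext+tag", ct_and_tag.hex().upper())
```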
src/testlog/lib.ipsec.esp:
original 00 00 00 00 00 00 00 00 00 00 00 00 86 DD 60 00 00 00 00 40 3B 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 78 79 7A 0A 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F 50 51 52 53 54 55 56 57 58 59 5A 0A 30 31 32 33 34 35 36 37 38 39
encrypted 00 00 00 00 00 00 00 00 00 00 00 00 86 DD 60 00 00 00 00 64 32 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 01 00 00 00 00 00 00 00 A0 56 61 AD BC EC E6 2D B4 19 A4 BB C3 C6 D6 70 0A D3 33 3D F9 09 D6 D1 3F 40 B4 28 8A 00 EE D9 68 74 F3 45 DA 5C A9 67 9F 7A 6C A4 A2 29 C6 04 B1 D1 28 61 F5 AD 68 FF 4E FC CE BC 1F 39 3C EA F6 CD 4D A3 61 14 B9 0E F1 32 45 98 11 2A 70 A3 63 23 B0 5E
decrypted 00 00 00 00 00 00 00 00 00 00 00 00 86 DD 60 00 00 00 00 40 3B 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 78 79 7A 0A 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F 50 51 52 53 54 55 56 57 58 59 5A 0A 30 31 32 33 34 35 36 37 38 39
original 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 00 00 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30 31 32 33
encrypted 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 00 24 32 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30 31 32 33 00 00 00 00 00 00 00 02 02 00 00 00 00 00 00 00 4E 94 30 C2 33 98 48 2A DC B2 70 1B 93 83 BF 69 44 D1 6A EF
decrypted 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 00 00 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30 31 32 33
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=1, flow_id=0, reason='integrity error')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=2, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=4, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=6, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=8, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=10, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=12, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=14, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=4294967290, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=4294967292, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=4294967294, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=0, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=2, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=4, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=6, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=8, flow_id=0, reason='replayed')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=43, flow_id=0, reason='integrity error')
Oct 26 2016 23:36:15 esp: Rejecting packet (SPI=0, src_addr='::', dst_addr='::', seq_low=43, flow_id=0, reason='integrity error')
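The esp selftest deliberately replays and corrupts packets and expects the 'replayed' and 'integrity error' rejections shown above. A small sliding-window anti-replay check in the spirit of RFC 4303, as an illustration only (Snabb's implementation also tracks 64-bit extended sequence numbers):

```python
class ReplayWindow:
    """Track the highest sequence number seen and a bitmap of recent ones."""
    def __init__(self, size=128):
        self.size = size
        self.highest = 0
        self.bitmap = 0          # bit i set => sequence number (highest - i) seen

    def check_and_update(self, seq):
        if seq == 0:
            return False                         # sequence number 0 is never used
        if seq > self.highest:                   # window slides forward
            shift = seq - self.highest
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.size:                  # too old, outside the window
            return False
        if self.bitmap & (1 << offset):          # already seen -> replayed
            return False
        self.bitmap |= 1 << offset
        return True

w = ReplayWindow()
assert w.check_and_update(1) and w.check_and_update(3)
assert not w.check_and_update(3)   # rejected, reason='replayed'
```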
src/testlog/lib.pmu:
selftest: pmu
PMU not available:
single core cpu affinity required
selftest skipped
EXITCODE: 43
src/testlog/lib.protocol.datagram:
src/testlog/lib.protocol.ipv4:
src/testlog/lib.protocol.ipv6:
src/testlog/lib.protocol.tcp:
src/testlog/lib.traceprof.traceprof:
traceprof report (recorded 1000/1282 samples):
43% TRACE 3:LOOP ->loop
39% TRACE 4 ->3
16% TRACE 3 ->loop
src/testlog/lib.watchdog.selftest.sh:
[testing alert]
Set timeout, now sleeping...
Resetting watchdog.
Exit normally.
[testing alert_stop]
Set timeout, now sleeping...
Stopping watchdog.
Exit normally.
[testing ualert]
Set timeout, now sleeping...
Resetting watchdog.
Exit normally.
[testing ualert_stop]
Set timeout, now sleeping...
Stopping watchdog.
Exit normally.
src/testlog/program.lwaftr.tests.end-to-end.selftest.sh:
Testing: from-internet IPv4 packet found in the binding table.
loading source binding table from ../data/binding-table.txt
done
Test passed
Testing: from-internet IPv4 packet found in the binding table with vlan tag.
done
Test passed
Testing: NDP: incoming NDP Neighbor Solicitation
done
Test passed
Testing: NDP: incoming NDP Neighbor Solicitation, secondary IP
done
Test passed
Testing: NDP: incoming NDP Neighbor Solicitation, non-lwAFTR IP
done
Test passed
Testing: NDP: IPv6 but not eth addr of next IPv6 hop set, do Neighbor Solicitation
done
Test passed
Testing: NDP: Without receiving NA, next_hop6_mac not set
done
Test passed
Testing: NDP: With receiving NA, next_hop6_mac not initially set
done
Test passed
Testing: IPv6 packet, next hop NA, packet, eth addr not set in configuration.
done
Test passed
Testing: from-internet IPv4 fragmented packets found in the binding table.
done
Test passed
Testing: traffic class mapping
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, original TTL=1.
done
Test passed
Testing: from-B4 IPv4 fragmentation (2)
done
Test passed
Testing: from-B4 IPv4 fragmentation (3)
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, needs IPv6 fragmentation (2).
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, needs IPv6 fragmentation (3).
done
Test passed
Testing: IPv6 reassembly (followed by decapsulation).
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, needs IPv6 fragmentation, DF set, ICMP-3,4.
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table, no ICMP.
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table (IPv4 matches, but port doesn't), no ICMP.
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table (ICMP-on-fail).
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table (IPv4 matches, but port doesn't) (ICMP-on-fail).
done
Test passed
Testing: from-to-b4 IPv6 packet NOT found in the binding table, no ICMP.
done
Test passed
Testing: from-b4 to-internet IPv6 packet found in the binding table.
done
Test passed
Testing: from-b4 to-internet IPv6 packet found in the binding table with vlan tag.
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table, no ICMP
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table, (IPv4 matches, but port doesn't), no ICMP
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table (ICMP-on-fail)
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table (IPv4 matches, but port doesn't) (ICMP-on-fail)
done
Test passed
Testing: from-to-b4 IPv6 packet, no hairpinning
done
Test passed
Testing: from-to-b4 IPv6 packet, with hairpinning
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping, with hairpinning
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping reply, with hairpinning
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping, with hairpinning, unbound
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping reply, with hairpinning, port 0 not bound
done
Test passed
Testing: from-to-b4 TCP packet, with hairpinning, TTL 1
done
Test passed
Testing: from-to-b4 IPv6 packet, with hairpinning, with vlan tag
done
Test passed
Testing: from-b4 IPv6 packet, with hairpinning, to B4 with custom lwAFTR address
done
Test passed
Testing: from-b4 IPv6 packet, with hairpinning, from B4 with custom lwAFTR address
done
Test passed
Testing: from-b4 IPv6 packet, with hairpinning, different non-default lwAFTR addresses
done
Test passed
Testing: from-internet bound IPv4 UDP packet
done
Test passed
Testing: unfragmented IPv4 UDP -> outgoing IPv6 UDP fragments
done
Test passed
Testing: IPv6 incoming UDP fragments -> unfragmented IPv4
done
Test passed
Testing: IPv6 incoming UDP fragments -> outgoing IPv4 UDP fragments
done
Test passed
Testing: IPv4 incoming UDP fragments -> outgoing IPv6 UDP fragments
done
Test passed
Testing: incoming ICMPv4 echo request, matches binding table
done
Test passed
Testing: incoming ICMPv4 echo request, matches binding table, bad checksum
done
Test passed
Testing: incoming ICMPv4 echo request, matches binding table, dropping ICMP
done
Test passed
Testing: incoming ICMPv4 echo request, doesn't match binding table
done
Test passed
Testing: incoming ICMPv4 echo reply, matches binding table
done
Test passed
Testing: incoming ICMPv4 3,4 'too big' notification, matches binding table
done
Test passed
Testing: incoming ICMPv6 1,3 destination/address unreachable, OPE from internet
done
Test passed
Testing: incoming ICMPv6 2,0 'too big' notification, OPE from internet
done
Test passed
Testing: incoming ICMPv6 3,0 hop limit exceeded, OPE from internet
done
Test passed
Testing: incoming ICMPv6 3,1 frag reassembly time exceeded, OPE from internet
done
Test passed
Testing: incoming ICMPv6 4,3 parameter problem, OPE from internet
done
Test passed
Testing: incoming ICMPv6 3,0 hop limit exceeded, OPE hairpinned
done
Test passed
Testing: ingress-filter: from-internet (IPv4) packet found in binding table (ACCEPT)
done
Test passed
Testing: ingress-filter: from-internet (IPv4) packet found in binding table (DROP)
done
Test passed
Testing: ingress-filter: from-b4 (IPv6) packet found in binding table (ACCEPT)
done
Test passed
Testing: ingress-filter: from-b4 (IPv6) packet found in binding table (DROP)
done
Test passed
Testing: egress-filter: to-internet (IPv4) (ACCEPT)
done
Test passed
Testing: egress-filter: to-internet (IPv4) (DROP)
done
Test passed
Testing: egress-filter: to-b4 (IPv4) (ACCEPT)
done
Test passed
Testing: egress-filter: to-b4 (IPv4) (DROP)
done
Test passed
Testing: ICMP Echo to AFTR (IPv4)
done
Test passed
Testing: ICMP Echo to AFTR (IPv4) + data
done
Test passed
Testing: ICMP Echo to AFTR (IPv6)
done
Test passed
Testing: ICMP Echo to AFTR (IPv6) + data
done
Test passed
All end-to-end lwAFTR tests passed.
Testing: from-internet IPv4 packet found in the binding table.
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: traffic class mapping
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: NDP: incoming NDP Neighbor Solicitation
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: NDP: incoming NDP Neighbor Solicitation, secondary IP
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: NDP: incoming NDP Neighbor Solicitation, non-lwAFTR IP
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: NDP: IPv6 but not eth addr of next IPv6 hop set, do Neighbor Solicitation
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: NDP: Without receiving NA, next_hop6_mac not set
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: NDP: With receiving NA, next_hop6_mac not initially set
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, original TTL=1.
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 fragmented packets found in the binding table
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-B4 IPv4 fragmentation (2)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-B4 IPv4 fragmentation (3)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, needs IPv6 fragmentation (2).
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, needs IPv6 fragmentation (3).
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: IPv6 reassembly (followed by decapsulation).
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet found in the binding table, needs IPv6 fragmentation, DF set, ICMP-3,4.
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table, no ICMP.
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table (matches IPv4, but not port), no ICMP.
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table (ICMP-on-fail).
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet IPv4 packet NOT found in the binding table (matches IPv4, but not port) (ICMP-on-fail).
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 IPv6 packet NOT found in the binding table, no ICMP.
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 to-internet IPv6 packet found in the binding table.
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table, no ICMP
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table (matches IPv4, but not port), no ICMP
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table (ICMP-on-fail)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 to-internet IPv6 packet NOT found in the binding table (matches IPv4, but not port) (ICMP-on-fail)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 IPv6 packet, no hairpinning
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 IPv6 packet, with hairpinning
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping, with hairpinning
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping reply, with hairpinning
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping, with hairpinning, unbound
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 tunneled ICMPv4 ping reply, with hairpinning, port 0 not bound
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-to-b4 TCP packet, with hairpinning, TTL 1
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 IPv6 packet, with hairpinning, to B4 with custom lwAFTR address
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 IPv6 packet, with hairpinning, from B4 with custom lwAFTR address
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-b4 IPv6 packet, with hairpinning, different non-default lwAFTR addresses
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: from-internet bound IPv4 UDP packet
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: unfragmented IPv4 UDP -> outgoing IPv6 UDP fragments
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: IPv6 incoming UDP fragments -> unfragmented IPv4
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: IPv6 incoming UDP fragments -> outgoing IPv4 UDP fragments
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: IPv4 incoming UDP fragments -> outgoing IPv6 UDP fragments
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv4 echo request, matches binding table
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv4 echo request, matches binding table
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv4 echo request, matches binding table, dropping ICMP
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv4 echo request, doesn't match binding table
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv4 echo reply, matches binding table
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv4 3,4 'too big' notification, matches binding table
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv6 1,3 destination/address unreachable, OPE from internet
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv6 2,0 'too big' notification, OPE from internet
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv6 3,0 hop limit exceeded, OPE from internet
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv6 3,1 frag reassembly time exceeded, OPE from internet
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv6 4,3 parameter problem, OPE from internet
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: incoming ICMPv6 3,0 hop limit exceeded, OPE hairpinned
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ingress-filter: from-internet (IPv4) packet found in binding table (ACCEPT)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ingress-filter: from-internet (IPv4) packet found in binding table (DROP)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ingress-filter: from-b4 (IPv6) packet found in binding table (ACCEPT)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ingress-filter: from-b4 (IPv6) packet found in binding table (DROP)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: egress-filter: to-internet (IPv4) (ACCEPT)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: egress-filter: to-internet (IPv4) (DROP)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: egress-filter: to-b4 (IPv4) (ACCEPT)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: egress-filter: to-b4 (IPv4) (DROP)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ICMP Echo to AFTR (IPv4)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ICMP Echo to AFTR (IPv4) + data
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ICMP Echo to AFTR (IPv6)
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
Testing: ICMP Echo to AFTR (IPv6) + data
loading compiled binding table from ../data/binding-table.o
compiled binding table ../data/binding-table.o is up to date.
done
Test passed
All end-to-end lwAFTR vlan tests passed.
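Each "Testing: ... / Test passed" entry above pushes an input capture through the lwAFTR and compares what comes out against a recorded expectation. The final comparison step reduces to a byte-for-byte file check; a sketch with hypothetical file names (the real fixtures and harness live under program/lwaftr/tests/end-to-end/):

```python
import filecmp

def captures_match(produced_pcap, expected_pcap):
    """Byte-for-byte comparison of the produced capture against the expectation."""
    return filecmp.cmp(produced_pcap, expected_pcap, shallow=False)

# hypothetical paths; the real fixtures sit in the Snabb source tree
# print("Test passed" if captures_match("/tmp/lwaftr-out.pcap", "expected.pcap")
#       else "Test failed")
```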
src/testlog/program.packetblaster.selftest.sh:
selftest: packetblaster
testing lwaftr pcap program/packetblaster/lwaftr/test_lwaftr_1.pcap ...
packetblaster lwaftr: Sending 1 clients at 1.000 MPPS to /tmp/lwaftr1209.pcap
IPv6: 2001:db8:: > 2001:db8:ffff::100: 10.0.0.0:1024 > 8.8.8.8:12345
source IPv6 and source IPv4/Port adjusted per client
IPv6 packet sizes: 104,104,104,104,104,104,104,634,634,634,1540
IPv4: 8.8.8.8:12345 > 10.0.0.0:1024
destination IPv4 and Port adjusted per client
IPv4 packet sizes: 64,64,64,64,64,64,64,594,594,594,1500
generated 22 packets
testing lwaftr pcap program/packetblaster/lwaftr/test_lwaftr_2.pcap ...
Warning: Increasing IPv4 packet size to 28
packetblaster lwaftr: Sending 2 clients at 1.000 MPPS to /tmp/lwaftr1209.pcap
IPv6: 2001:db8:: > 2001:db8:ffff::100: 10.0.0.0:1024 > 8.8.8.8:12345
source IPv6 and source IPv4/Port adjusted per client
IPv6 packet sizes: 68
IPv4: 8.8.8.8:12345 > 10.0.0.0:1024
destination IPv4 and Port adjusted per client
IPv4 packet sizes: 28
generated 4 packets
cmp: EOF on program/packetblaster/lwaftr/test_lwaftr_2.pcap
packetblaster lwaftr: Sending 1 clients at 1.000 MPPS to tap0
IPv6: 2001:db8:: > 2001:db8:ffff::100: 10.0.0.0:1024 > 8.8.8.8:12345
source IPv6 and source IPv4/Port adjusted per client
IPv6 packet sizes: 104,104,104,104,104,104,104,634,634,634,1540
IPv4: 8.8.8.8:12345 > 10.0.0.0:1024
destination IPv4 and Port adjusted per client
IPv4 packet sizes: 64,64,64,64,64,64,64,594,594,594,1500
tap0 Link encap:Ethernet HWaddr ba:e9:d2:09:77:42
inet6 addr: fe80::b8e9:d2ff:fe09:7742/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:708939 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:262387818 (262.3 MB) TX bytes:258 (258.0 B)
filename=program/snabbnfv/test_fixtures/pcap/64.pcap
packetblaster lwaftr: Sending 1 clients at 1.000 MPPS to 0000:03:00.0
IPv6: 2001:db8:: > 2001:db8:ffff::100: 10.0.0.0:1024 > 8.8.8.8:12345
source IPv6 and source IPv4/Port adjusted per client
IPv6 packet sizes: 104,104,104,104,104,104,104,634,634,634,1540
IPv4: 8.8.8.8:12345 > 10.0.0.0:1024
destination IPv4 and Port adjusted per client
IPv4 packet sizes: 64,64,64,64,64,64,64,594,594,594,1500
selftest: ok
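A quick sanity check on the packet sizes printed above: each IPv6 size is the corresponding IPv4 size plus the 40-byte IPv6 header added by lw4o6 encapsulation.

```python
# sizes copied from the packetblaster lwaftr log lines above
ipv4_sizes = [64, 64, 64, 64, 64, 64, 64, 594, 594, 594, 1500]
ipv6_sizes = [104, 104, 104, 104, 104, 104, 104, 634, 634, 634, 1540]
IPV6_HEADER_BYTES = 40  # added when an IPv4 packet is encapsulated for lw4o6
assert [s + IPV6_HEADER_BYTES for s in ipv4_sizes] == ipv6_sizes
```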
src/testlog/program.snabbnfv.neutron2snabb.neutron2snabb:
selftest: neutron2snabb
ok: {{direction='ingress', ethertype='IPv6'}}
=> ip6
ok: {{direction='ingress', ethertype='IPv4'}}
=> (arp or ip)
ok: {{direction='ingress', ethertype='IPv4', protocol='tcp'}}
=> (arp or (ip and tcp))
ok: {{direction='ingress', ethertype='IPv4', protocol='udp'}}
=> (arp or (ip and udp))
ok: {{direction='ingress', ethertype='IPv4', protocol='udp', port_range_min=1000}}
=> (arp or (ip and udp and dst portrange 1000-1000))
ok: {{direction='ingress', ethertype='IPv4', protocol='udp', port_range_max=2000}}
=> (arp or (ip and udp and dst portrange 2000-2000))
ok: {{direction='ingress', ethertype='IPv4', protocol='tcp', port_range_min=1000, port_range_max=2000}}
=> (arp or (ip and tcp and dst portrange 1000-2000))
ok: {{direction='ingress', ethertype='IPv6', protocol='tcp'}, {direction='ingress', ethertype='IPv4', protocol='udp', remote_ip_prefix='10.0.0.0/8'}}
=> ((ip6 and tcp) or (arp or (ip and udp and src net 10.0.0.0/8)))
selftest ok
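The pairs above show Neutron security-group rules being rendered into pcap filter expressions (with IPv4 rules also admitting ARP). A rough Python re-implementation of that mapping, for illustration only; the real translation lives in neutron2snabb's Lua source and covers more cases:

```python
def rule_to_pcap(rule):
    """Translate one ingress security-group rule into a pcap filter expression."""
    if rule["ethertype"] == "IPv6":
        expr, proto_base = "ip6", "ip6"
    else:
        expr, proto_base = "arp or ip", "ip"   # IPv4 traffic also needs ARP
    clauses = []
    if "protocol" in rule:
        clauses.append(rule["protocol"])
    lo = rule.get("port_range_min", rule.get("port_range_max"))
    hi = rule.get("port_range_max", rule.get("port_range_min"))
    if lo is not None:
        clauses.append(f"dst portrange {lo}-{hi}")
    if "remote_ip_prefix" in rule:
        clauses.append(f"src net {rule['remote_ip_prefix']}")
    if clauses:
        inner = " and ".join([proto_base] + clauses)
        return f"(arp or ({inner}))" if proto_base == "ip" else f"({inner})"
    return f"({expr})" if proto_base == "ip" else expr

def rules_to_pcap(rules):
    """Combine several rules into one expression, as in the last example above."""
    exprs = [rule_to_pcap(r) for r in rules]
    return exprs[0] if len(exprs) == 1 else "(" + " or ".join(exprs) + ")"

print(rules_to_pcap([{"direction": "ingress", "ethertype": "IPv4", "protocol": "tcp"}]))
# -> (arp or (ip and tcp))
```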
src/testlog/program.snabbnfv.neutron2snabb.neutron2snabb_schema:
selftest: neutron2snabb_schema
selftest: ok
src/testlog/program.snabbnfv.neutron2snabb.selftest.sh:
selftest: neutron2snabb/selftest.sh
Parsing neutron db tables
PortID: 9745ff46-986f-4f74-bc37-a35f481c0b9b
BindingID nil has vif_type vhostuser
vif_details has hostname cdn1 (we want cdn1 )
admin_state_ip is 1
Adding zone port ' port2 ' to list
PortID: 523276c7-73e3-4154-8b67-9c7199bdbb8c
BindingID nil has vif_type vhostuser
vif_details has hostname cdn1 (we want cdn1 )
admin_state_ip is 1
Adding zone port ' port0 ' to list
Created /tmp/snabbtest/port0
Created /tmp/snabbtest/port2
2a3
> rx_police = 3,
4d4
< tx_police = 1,
19c19
< rx_police = 3,
---
> tx_police = 1,
EXITCODE: 1
src/testlog/program.snabbnfv.nfvconfig:
selftest: lib.nfv.config
testing: program/snabbnfv/test_fixtures/nfvconfig/switch_nic/x
engine: start app id0_Virtio
engine: start app id0_NIC
testing: program/snabbnfv/test_fixtures/nfvconfig/switch_filter/x
engine: start app id0_Filter_in
load: time: 0.29s fps: 0 fpGbps: 0.000 fpb: 0 bpp: - sleep: 100 us
testing: program/snabbnfv/test_fixtures/nfvconfig/switch_qos/x
engine: stop app id0_Filter_in
load: time: 0.25s fps: 0 fpGbps: 0.000 fpb: 0 bpp: - sleep: 100 us
testing: program/snabbnfv/test_fixtures/nfvconfig/switch_tunnel/x
engine: start app id0_Tunnel
engine: start app id0_ND
Oct 26 2016 23:36:17 nd_light: Sending neighbor solicitation for next-hop 2::2
load: time: 0.26s fps: 3 fpGbps: 0.000 fpb: 0 bpp: 86 sleep: 100 us
testing: program/snabbnfv/test_fixtures/nfvconfig/scale_up/y
engine: stop app id0_Tunnel
engine: stop app id0_ND
engine: start app id1_NIC
engine: start app id1_Virtio
load: time: 0.26s fps: 7 fpGbps: 0.000 fpb: 0 bpp: 86 sleep: 100 us
testing: program/snabbnfv/test_fixtures/nfvconfig/scale_up/x
engine: start app id63_Virtio
engine: start app id40_Virtio
engine: start app id10_NIC
engine: start app id61_Virtio
engine: start app id6_NIC
engine: start app id45_NIC
engine: start app id34_NIC
engine: start app id35_NIC
engine: start app id60_Virtio
engine: start app id59_Virtio
engine: start app id58_Virtio
engine: start app id57_Virtio
engine: start app id3_Virtio
engine: start app id47_Virtio
engine: start app id54_Virtio
engine: start app id42_NIC
engine: start app id38_NIC
engine: start app id24_NIC
engine: start app id18_Virtio
engine: start app id52_Virtio
engine: start app id51_NIC
engine: start app id51_Virtio
engine: start app id50_Virtio
engine: start app id49_Virtio
engine: start app id48_Virtio
engine: start app id55_Virtio
engine: start app id45_Virtio
engine: start app id28_NIC
engine: start app id46_Virtio
engine: start app id15_Virtio
engine: start app id41_NIC
engine: start app id35_Virtio
engine: start app id25_NIC
engine: start app id44_Virtio
engine: start app id43_Virtio
engine: start app id42_Virtio
engine: start app id32_NIC
engine: start app id18_NIC
engine: start app id33_Virtio
engine: start app id62_Virtio
engine: start app id12_Virtio
engine: start app id39_Virtio
engine: start app id32_Virtio
engine: start app id22_Virtio
engine: start app id5_Virtio
engine: start app id53_NIC
engine: start app id60_NIC
engine: start app id8_NIC
engine: start app id31_NIC
engine: start app id46_NIC
engine: start app id37_Virtio
engine: start app id27_Virtio
engine: start app id17_Virtio
engine: start app id30_Virtio
engine: start app id50_NIC
engine: start app id10_Virtio
engine: start app id53_Virtio
engine: start app id56_NIC
engine: start app id12_NIC
engine: start app id43_NIC
engine: start app id13_Virtio
engine: start app id23_Virtio
engine: start app id14_Virtio
engine: start app id34_Virtio
engine: start app id7_Virtio
engine: start app id6_Virtio
engine: start app id40_NIC
engine: start app id63_NIC
engine: start app id21_Virtio
engine: start app id31_Virtio
engine: start app id62_NIC
engine: start app id11_Virtio
engine: start app id59_NIC
engine: start app id13_NIC
engine: start app id41_Virtio
engine: start app id57_NIC
engine: start app id29_Virtio
engine: start app id9_Virtio
engine: start app id8_Virtio
engine: start app id28_Virtio
engine: start app id26_Virtio
engine: start app id49_NIC
engine: start app id4_Virtio
engine: start app id4_NIC
engine: start app id2_Virtio
engine: start app id36_NIC
engine: start app id52_NIC
engine: start app id25_Virtio
engine: start app id47_NIC
engine: start app id58_NIC
engine: start app id24_Virtio
engine: start app id20_Virtio
engine: start app id19_Virtio
engine: start app id29_NIC
engine: start app id37_NIC
engine: start app id54_NIC
engine: start app id36_Virtio
engine: start app id26_NIC
engine: start app id16_Virtio
engine: start app id39_NIC
engine: start app id38_Virtio
engine: start app id48_NIC
engine: start app id56_Virtio
engine: start app id9_NIC
engine: start app id21_NIC
engine: start app id61_NIC
engine: start app id30_NIC
engine: start app id44_NIC
engine: start app id19_NIC
engine: start app id7_NIC
engine: start app id3_NIC
engine: start app id22_NIC
engine: start app id23_NIC
engine: start app id14_NIC
engine: start app id2_NIC
engine: start app id5_NIC
engine: start app id20_NIC
engine: start app id17_NIC
engine: start app id11_NIC
engine: start app id55_NIC
engine: start app id33_NIC
engine: start app id15_NIC
engine: start app id16_NIC
engine: start app id27_NIC
load: time: 0.08s fps: 0 fpGbps: 0.000 fpb: 0 bpp: - sleep: 100 us
testing: program/snabbnfv/test_fixtures/nfvconfig/scale_change/x
load: time: 0.77s fps: 0 fpGbps: 0.000 fpb: 0 bpp: - sleep: 100 us
testing: program/snabbnfv/test_fixtures/nfvconfig/scale_change/y
engine: reconfig app id0_NIC
engine: reconfig app id1_NIC
engine: reconfig app id3_NIC
engine: reconfig app id2_NIC
load: time: 0.26s fps: 0 fpGbps: 0.000 fpb: 0 bpp: - sleep: 100 us
src/testlog/program.snabbnfv.selftest.sh:
Defaulting to SNABB_TELNET0=5000
Defaulting to SNABB_TELNET1=5001
Defaulting to SNABB_IPERF_BENCH_CONF=program/snabbnfv/test_fixtures/nfvconfig/test_functions/same_vlan.ports
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/other_vlan.ports
Defaulting to MAC=52:54:00:00:00:
Defaulting to IP=fe80::5054:ff:fe00:
Defaulting to GUEST_MEM=512
Defaulting to HUGETLBFS=/hugetlbfs
Defaulting to QUEUES=1
Defaulting to QEMU=/root/.test_env/qemu/obj/x86_64-softmmu/qemu-system-x86_64
Waiting for VM listening on telnet port 5000 to get ready... [OK]
Waiting for VM listening on telnet port 5001 to get ready... [OK]
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/same_vlan.ports
1 packets transmitted, 1 received, 0% packet loss, time 0ms
PING succeeded.
[ 3] 0.0-10.0 sec 6.84 GBytes 5.87 Gbits/sec
IPERF succeeded.
1 packets transmitted, 1 received, 0% packet loss, time 0ms
JUMBOPING succeeded.
[ 3] 0.0-10.0 sec 10.3 GBytes 8.82 Gbits/sec
IPERF succeeded.
tx-checksumming: on
TX-CHECKSUMMING succeeded.
tx-checksumming: on
TX-CHECKSUMMING succeeded.
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/tx_rate_limit.ports
1 packets transmitted, 1 received, 0% packet loss, time 0ms
PING succeeded.
IPERF (RATE_LIMITED) succeeded.
IPERF rate is 813 Mbits/sec (900 Mbits/sec allowed).
RATE_LIMITED succeeded.
1 packets transmitted, 1 received, 0% packet loss, time 0ms
JUMBOPING succeeded.
IPERF (RATE_LIMITED) succeeded.
IPERF rate is 813 Mbits/sec (900 Mbits/sec allowed).
RATE_LIMITED succeeded.
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/rx_rate_limit.ports
1 packets transmitted, 1 received, 0% packet loss, time 0ms
PING succeeded.
IPERF (RATE_LIMITED) succeeded.
IPERF rate is 813 Mbits/sec (1200 Mbits/sec allowed).
RATE_LIMITED succeeded.
1 packets transmitted, 1 received, 0% packet loss, time 0ms
JUMBOPING succeeded.
IPERF (RATE_LIMITED) succeeded.
IPERF rate is 813 Mbits/sec (1200 Mbits/sec allowed).
RATE_LIMITED succeeded.
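The RATE_LIMITED checks above compare the measured iperf throughput against the configured cap (813 Mbit/s measured versus 900 and 1200 Mbit/s allowed). The acceptance criterion amounts to a comparison like this sketch:

```python
def rate_limited_ok(measured_mbps, allowed_mbps):
    """Pass when the measured iperf rate stays at or below the configured cap."""
    return measured_mbps <= allowed_mbps

assert rate_limited_ok(813, 900)    # tx_rate_limit.ports case above
assert rate_limited_ok(813, 1200)   # rx_rate_limit.ports case above
```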
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/tunnel.ports
Oct 26 2016 23:41:40 nd_light: Resolved next-hop fe80::5054:ff:fe00:0 to 52:54:00:00:00:00
Oct 26 2016 23:41:40 nd_light: Resolved next-hop fe80::5054:ff:fe00:1 to 52:54:00:00:00:01
ND succeeded.
1 packets transmitted, 1 received, 0% packet loss, time 0ms
PING succeeded.
[ 3] 0.0-10.0 sec 5.91 GBytes 5.08 Gbits/sec
IPERF succeeded.
1 packets transmitted, 1 received, 0% packet loss, time 0ms
JUMBOPING succeeded.
[ 3] 0.0-10.0 sec 6.19 GBytes 5.31 Gbits/sec
IPERF succeeded.
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/filter.ports
1 packets transmitted, 1 received, 0% packet loss, time 0ms
PING succeeded.
Connection to fe80::5054:ff:fe00:0001%eth0 12345 port [tcp/*] succeeded!
PORTPROBE succeeded.
Trying ::1...
Connected to localhost.
Escape character is '^]'.
nc -w 1 -q 1 -v fe80::5054:ff:fe00:0001%eth0 12346
nc: connect to fe80::5054:ff:fe00:0001%eth0 port 12346 (tcp) timed out: Operation now in progress
root@fe00:~#
root@fe00:~# Connection closed by foreign host.
FILTER succeeded.
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/stateful-filter.ports
1 packets transmitted, 1 received, 0% packet loss, time 0ms
PING succeeded.
Connection to fe80::5054:ff:fe00:0001%eth0 12345 port [tcp/*] succeeded!
PORTPROBE succeeded.
Trying ::1...
Connected to localhost.
Escape character is '^]'.
nc -w 1 -q 1 -v fe80::5054:ff:fe00:0001%eth0 12348
nc: connect to fe80::5054:ff:fe00:0001%eth0 port 12348 (tcp) timed out: Operation now in progress
root@fe00:~#
root@fe00:~# Connection closed by foreign host.
FILTER succeeded.
Trying ::1...
Connected to localhost.
Escape character is '^]'.
nc -w 1 -q 1 -v fe80::5054:ff:fe00:0000%eth0 12340
nc: connect to fe80::5054:ff:fe00:0000%eth0 port 12340 (tcp) timed out: Operation now in progress
root@fe00:~#
root@fe00:~# Connection closed by foreign host.
FILTER succeeded.
USING program/snabbnfv/test_fixtures/nfvconfig/test_functions/crypto.ports
1 packets transmitted, 1 received, 0% packet loss, time 0ms
PING succeeded.
[ 3] 0.0-10.0 sec 1.33 GBytes 1.14 Gbits/sec
IPERF succeeded.
1 packets transmitted, 1 received, 0% packet loss, time 0ms
JUMBOPING succeeded.
[ 3] 0.0-10.0 sec 1.77 GBytes 1.52 Gbits/sec
IPERF succeeded.