
Conversation

@SirTyson (Contributor) commented Aug 29, 2025

Overview

This PR introduces a network infrastructure benchmark that measures the network performance of supercluster without actually running stellar-core. It deploys an identical network topology, with the same peer-to-peer overlay, ingress, DNS, and artificial delay, but instead of running stellar-core containers it runs iperf3 containers for a network stress test. All nodes flood as much bandwidth as possible to all of their peers simultaneously and bidirectionally to simulate p2p networking. We then report bandwidth and latency as measured by iperf3. This test can be run standalone, or immediately before any ssc mission. We might want to add this to our MaxTPS flow, for example, to get a better idea of cluster variance from run to run.

Usage

The --benchmark-infra true flag enables the network test prior to the supercluster mission. If --benchmark-only is set, the run exits after just the network load test. --benchmark-duration-seconds determines how long to generate load for the stress test and defaults to 30 seconds.
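
For example, a standalone run of just the network benchmark might look like the following. The dotnet invocation and mission name are placeholders for however you normally launch supercluster, and passing an explicit true to --benchmark-only is an assumption; only the three benchmark flags themselves come from this PR.

  dotnet run --project src/App/App.fsproj -- \
    mission SimulatePubnet \
    --benchmark-infra true \
    --benchmark-only true \
    --benchmark-duration-seconds 60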

Example output:

Infrastructure Configuration:
- Nodes: 7
- Average peers per node: 6.0
- Total connections: 42
- Test duration: 30 seconds
- Network delays: enabled

Aggregate Performance Metrics:
- Connection failures: 0
- Average total throughput per node: 818.5 Mbps send, 818.5 Mbps recv
- Average per-peer throughput: 136.4 Mbps send, 136.4 Mbps recv

RTT Latency Statistics (across all nodes):
- Minimum: 12.34 ms
- Mean: 131.99 ms
- Maximum: 182.24 ms

Individual Node Results:
  ssc-2203z-138f5d-sts-lo-0: 101.5 Mbps send, 89.7 Mbps recv (16.9/15.0 Mbps per peer), 235.03 ms mean RTT, 2 retransmits
  ssc-2203z-138f5d-sts-node-0-0: 366.4 Mbps send, 365.3 Mbps recv (61.1/60.9 Mbps per peer), 164.57 ms mean RTT, 0 retransmits
  ssc-2203z-138f5d-sts-node-1-0: 659.5 Mbps send, 660.4 Mbps recv (109.9/110.1 Mbps per peer), 122.68 ms mean RTT, 37 retransmits
  ...

Design

Bidirectional Performance Testing via iperf3

  • Uses iperf3's --bidir flag to run simultaneous bidirectional tests between node pairs
  • All pods generate maximum load to all peers at the same time to stress the network (see the client sketch below)
  • Each connection has its own server/client pair
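
A minimal sketch of one such client-side invocation; the peer name, node index, and duration are placeholders rather than the actual script's variables:

  PEER=ssc-example-sts-node-1-0   # placeholder peer DNS name
  MY_INDEX=3                      # this node's unique index
  # One bidirectional test against one peer's server, with JSON output for the parser.
  iperf3 -c "$PEER" -p $((5201 + MY_INDEX)) --bidir -t 30 -J > "result-${PEER}.json"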

Topology Mirroring

  • Creates one benchmark pod for each stellar-core node
  • Maintains the same peer connections as defined in the stellar-core configuration
  • Applies the same network delays as the stellar-core pods
  • Uses the same ingress and DNS service as the stellar-core pods

Pod Structure

Each benchmark pod contains three containers:

  1. Server Container

    • Runs multiple iperf3 server instances (one per incoming peer connection)
    • Each server listens on port 5201 + source_node_index, where each node has a unique index
  2. Client Container

    • Runs one client per peer that acts as the server for the connection
    • Connects to each such peer on port 5201 + its own node index
  3. Network Delay Container (optional)

    • Applies geographic latency simulation via tc rules (a minimal tc sketch follows this list)
    • Mirrors the exact delay configuration from the stellar-core pods
    • Transforms DNS names from the stellar-core pattern to the benchmark pod pattern
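
As a rough illustration of what the delay container installs, the classic per-destination tc recipe looks like this; the device name, peer IP, and delay value are placeholders, and the real rules are generated from the stellar-core delay configuration:

  # prio qdisc with netem on one band, and a u32 filter steering a specific peer into that band
  tc qdisc add dev eth0 root handle 1: prio
  tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 80ms
  tc filter add dev eth0 protocol ip parent 1: prio 3 u32 \
    match ip dst 10.0.0.42/32 flowid 1:3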

Since running a server carries some overhead compared to running a client, we hash the pod names in a given connection to pseudo-randomly decide which side acts as the client and which as the server for that particular connection, as sketched below.
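
A sketch of that idea in bash; the variable names are hypothetical and md5sum is just one possible hash, so the PR's script may differ:

  # Hash both pod names; the side with the lexically smaller digest acts as the
  # iperf3 client for this connection, the other side as the server.
  my_hash=$(printf '%s' "$MY_POD" | md5sum | cut -d' ' -f1)
  peer_hash=$(printf '%s' "$PEER_POD" | md5sum | cut -d' ' -f1)
  if [[ "$my_hash" < "$peer_hash" ]]; then ROLE=client; else ROLE=server; fi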

Results Parsing

  • Python script (parse_benchmark_results.py) processes raw iperf3 JSON output (a jq sketch of the same extraction follows this list)
  • Aggregates metrics across all nodes and connections
  • Calculates statistics including:
    • Per-node and per-peer throughput (send/receive)
    • RTT latency (min/mean/max)
    • TCP retransmit counts
    • Connection failure detection
  • Outputs a human-readable summary and writes more detailed stats to JSON
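
For reference, the headline numbers for a single result can be pulled out with jq as shown below; the field names are those of plain iperf3 -J client output, and --bidir runs add reverse-direction sums on top of these:

  # send Mbps, recv Mbps, retransmits, mean RTT in ms
  jq -r '[.end.sum_sent.bits_per_second / 1e6,
          .end.sum_received.bits_per_second / 1e6,
          .end.sum_sent.retransmits,
          (.end.streams[0].sender.mean_rtt / 1000)] | @tsv' result.json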

Observations from the EKS cluster:

Here are a few samples I collected from a few different topologies, each with a 5-minute run:

Small, 7 node topology: theoretical-max-tps.json

Test ID: benchmark-20250829-220025
Timestamp: 2025-08-29 22:00:29

Infrastructure Configuration:
- Nodes: 7
- Average peers per node: 6.0
- Total connections: 42
- Test duration: 300 seconds
- Network delays: enabled

Aggregate Performance Metrics:
- Connection failures: 0
- Average total throughput per node: 1326.8 Mbps send, 1326.8 Mbps recv
- Average per-peer throughput: 221.1 Mbps send, 221.1 Mbps recv

RTT Latency Statistics (across all nodes):
- Minimum: 12.36 ms
- Mean: 127.90 ms
- Maximum: 189.08 ms

Individual Node Results:
  ssc-2155z-ca4b08-sts-lo-0: 195.4 Mbps send, 195.0 Mbps recv (32.6/32.5 Mbps per peer), 124.95 ms mean RTT, 0 retransmits
  ssc-2155z-ca4b08-sts-node-0-0: 2288.0 Mbps send, 2331.4 Mbps recv (381.3/388.6 Mbps per peer), 82.67 ms mean RTT, 0 retransmits
  ssc-2155z-ca4b08-sts-node-1-0: 150.2 Mbps send, 150.2 Mbps recv (25.0/25.0 Mbps per peer), 162.38 ms mean RTT, 0 retransmits
  ssc-2155z-ca4b08-sts-node-2-0: 133.0 Mbps send, 133.3 Mbps recv (22.2/22.2 Mbps per peer), 182.92 ms mean RTT, 0 retransmits
  ssc-2155z-ca4b08-sts-node-3-0: 3981.8 Mbps send, 3908.1 Mbps recv (663.6/651.3 Mbps per peer), 62.55 ms mean RTT, 116 retransmits
  ssc-2155z-ca4b08-sts-pn-0: 2389.4 Mbps send, 2419.9 Mbps recv (398.2/403.3 Mbps per peer), 117.20 ms mean RTT, 209 retransmits
  ssc-2155z-ca4b08-sts-sdf-0: 149.8 Mbps send, 149.9 Mbps recv (25.0/25.0 Mbps per peer), 162.62 ms mean RTT, 3 retransmits

Default, 22 node topology:

Test ID: benchmark-20250829-221505
Timestamp: 2025-08-29 22:15:17

Infrastructure Configuration:
- Nodes: 23
- Average peers per node: 22.0
- Total connections: 506
- Test duration: 300 seconds
- Network delays: enabled

Aggregate Performance Metrics:
- Connection failures: 0
- Average total throughput per node: 706.6 Mbps send, 706.6 Mbps recv
- Average per-peer throughput: 32.1 Mbps send, 32.1 Mbps recv

RTT Latency Statistics (across all nodes):
- Minimum: 6.55 ms
- Mean: 114.80 ms
- Maximum: 235.05 ms

Individual Node Results:
  ssc-2209z-ebe1ea-sts-bd-0: 3664.7 Mbps send, 2449.5 Mbps recv (166.6/111.3 Mbps per peer), 140.58 ms mean RTT, 76 retransmits
  ssc-2209z-ebe1ea-sts-bd-1: 6322.6 Mbps send, 4089.7 Mbps recv (287.4/185.9 Mbps per peer), 91.09 ms mean RTT, 45 retransmits
  ssc-2209z-ebe1ea-sts-bd-2: 234.9 Mbps send, 255.7 Mbps recv (10.7/11.6 Mbps per peer), 171.12 ms mean RTT, 42 retransmits
  ssc-2209z-ebe1ea-sts-cq-0: 543.7 Mbps send, 1032.1 Mbps recv (24.7/46.9 Mbps per peer), 15.90 ms mean RTT, 312 retransmits
  ssc-2209z-ebe1ea-sts-cq-1: 363.1 Mbps send, 592.3 Mbps recv (16.5/26.9 Mbps per peer), 36.14 ms mean RTT, 121 retransmits
  ssc-2209z-ebe1ea-sts-cq-2: 104.0 Mbps send, 123.2 Mbps recv (4.7/5.6 Mbps per peer), 189.21 ms mean RTT, 42 retransmits
  ssc-2209z-ebe1ea-sts-kb-0: 143.7 Mbps send, 174.3 Mbps recv (6.5/7.9 Mbps per peer), 128.64 ms mean RTT, 69 retransmits
  ssc-2209z-ebe1ea-sts-kb-1: 789.1 Mbps send, 1384.6 Mbps recv (35.9/62.9 Mbps per peer), 9.38 ms mean RTT, 597 retransmits
  ssc-2209z-ebe1ea-sts-kb-2: 113.9 Mbps send, 141.1 Mbps recv (5.2/6.4 Mbps per peer), 163.05 ms mean RTT, 53 retransmits
  ssc-2209z-ebe1ea-sts-lo-0: 175.7 Mbps send, 181.5 Mbps recv (8.0/8.2 Mbps per peer), 124.91 ms mean RTT, 44 retransmits
  ssc-2209z-ebe1ea-sts-lo-1: 116.8 Mbps send, 147.0 Mbps recv (5.3/6.7 Mbps per peer), 159.73 ms mean RTT, 59 retransmits
  ssc-2209z-ebe1ea-sts-lo-2: 555.0 Mbps send, 1300.8 Mbps recv (25.2/59.1 Mbps per peer), 16.00 ms mean RTT, 309 retransmits
  ssc-2209z-ebe1ea-sts-lo-3: 293.3 Mbps send, 614.1 Mbps recv (13.3/27.9 Mbps per peer), 38.65 ms mean RTT, 102 retransmits
  ssc-2209z-ebe1ea-sts-lo-4: 89.4 Mbps send, 108.0 Mbps recv (4.1/4.9 Mbps per peer), 218.20 ms mean RTT, 69 retransmits
  ssc-2209z-ebe1ea-sts-sdf-0: 140.8 Mbps send, 183.3 Mbps recv (6.4/8.3 Mbps per peer), 130.97 ms mean RTT, 73 retransmits
  ssc-2209z-ebe1ea-sts-sdf-1: 152.8 Mbps send, 177.1 Mbps recv (6.9/8.1 Mbps per peer), 132.72 ms mean RTT, 48 retransmits
  ssc-2209z-ebe1ea-sts-sdf-2: 158.3 Mbps send, 185.6 Mbps recv (7.2/8.4 Mbps per peer), 131.62 ms mean RTT, 64 retransmits
  ssc-2209z-ebe1ea-sts-sp-0: 133.1 Mbps send, 143.8 Mbps recv (6.0/6.5 Mbps per peer), 147.18 ms mean RTT, 31 retransmits
  ssc-2209z-ebe1ea-sts-sp-1: 786.0 Mbps send, 1219.1 Mbps recv (35.7/55.4 Mbps per peer), 9.28 ms mean RTT, 542 retransmits
  ssc-2209z-ebe1ea-sts-sp-2: 94.1 Mbps send, 106.9 Mbps recv (4.3/4.9 Mbps per peer), 215.48 ms mean RTT, 39 retransmits
  ssc-2209z-ebe1ea-sts-wx-0: 129.1 Mbps send, 149.3 Mbps recv (5.9/6.8 Mbps per peer), 145.90 ms mean RTT, 55 retransmits
  ssc-2209z-ebe1ea-sts-wx-1: 1058.4 Mbps send, 1389.4 Mbps recv (48.1/63.2 Mbps per peer), 10.14 ms mean RTT, 447 retransmits
  ssc-2209z-ebe1ea-sts-wx-2: 89.2 Mbps send, 103.4 Mbps recv (4.1/4.7 Mbps per peer), 214.52 ms mean RTT, 62 retransmits

Larger, 100 node topology generated-overlay-topology-2.json:

Test ID: benchmark-20250829-222154
Timestamp: 2025-08-29 22:22:47

Infrastructure Configuration:
- Nodes: 100
- Average peers per node: 9.0
- Total connections: 898
- Test duration: 300 seconds
- Network delays: enabled

Aggregate Performance Metrics:
- Connection failures: 0
- Average total throughput per node: 1323.8 Mbps send, 1323.8 Mbps recv
- Average per-peer throughput: 155.9 Mbps send, 155.8 Mbps recv

RTT Latency Statistics (across all nodes):
- Minimum: 0.02 ms
- Mean: 122.33 ms
- Maximum: 383.83 ms

Individual Node Results:
  ssc-2216z-f2eecd-sts-bd-0: 284.1 Mbps send, 287.8 Mbps recv (31.6/32.0 Mbps per peer), 178.45 ms mean RTT, 4 retransmits
  ssc-2216z-f2eecd-sts-bd-1: 2915.4 Mbps send, 2408.7 Mbps recv (971.8/802.9 Mbps per peer), 4.03 ms mean RTT, 606 retransmits
  ssc-2216z-f2eecd-sts-bd-2: 132.9 Mbps send, 161.8 Mbps recv (19.0/23.1 Mbps per peer), 113.19 ms mean RTT, 118 retransmits
  ssc-2216z-f2eecd-sts-cq-0: 125.4 Mbps send, 97.0 Mbps recv (12.5/9.7 Mbps per peer), 183.87 ms mean RTT, 88 retransmits
  ssc-2216z-f2eecd-sts-cq-1: 173.3 Mbps send, 192.0 Mbps recv (24.8/27.4 Mbps per peer), 113.52 ms mean RTT, 54 retransmits
  ssc-2216z-f2eecd-sts-cq-2: 214.3 Mbps send, 195.5 Mbps recv (35.7/32.6 Mbps per peer), 75.15 ms mean RTT, 106 retransmits
  ssc-2216z-f2eecd-sts-lo-0: 202.0 Mbps send, 202.2 Mbps recv (67.3/67.4 Mbps per peer), 91.90 ms mean RTT, 17 retransmits
  ssc-2216z-f2eecd-sts-lo-1: 1149.8 Mbps send, 1354.5 Mbps recv (115.0/135.5 Mbps per peer), 87.40 ms mean RTT, 71 retransmits
  ssc-2216z-f2eecd-sts-lo-2: 130.9 Mbps send, 123.9 Mbps recv (21.8/20.6 Mbps per peer), 119.96 ms mean RTT, 46 retransmits
  ssc-2216z-f2eecd-sts-lo-3: 169.2 Mbps send, 168.8 Mbps recv (33.8/33.8 Mbps per peer), 126.65 ms mean RTT, 8 retransmits
  ssc-2216z-f2eecd-sts-lo-4: 1547.8 Mbps send, 1705.3 Mbps recv (386.9/426.3 Mbps per peer), 35.91 ms mean RTT, 150 retransmits
  ssc-2216z-f2eecd-sts-node-0-0: 2897.9 Mbps send, 3385.5 Mbps recv (414.0/483.6 Mbps per peer), 80.58 ms mean RTT, 26 retransmits
  ssc-2216z-f2eecd-sts-node-1-0: 2587.1 Mbps send, 2599.7 Mbps recv (199.0/200.0 Mbps per peer), 112.59 ms mean RTT, 26 retransmits
  ssc-2216z-f2eecd-sts-node-10-0: 1241.1 Mbps send, 1231.5 Mbps recv (103.4/102.6 Mbps per peer), 119.64 ms mean RTT, 36 retransmits
  ssc-2216z-f2eecd-sts-node-11-0: 148.5 Mbps send, 130.9 Mbps recv (14.8/13.1 Mbps per peer), 114.16 ms mean RTT, 42 retransmits
  ssc-2216z-f2eecd-sts-node-12-0: 170.2 Mbps send, 177.5 Mbps recv (24.3/25.4 Mbps per peer), 76.75 ms mean RTT, 48 retransmits
  ssc-2216z-f2eecd-sts-node-13-0: 135.2 Mbps send, 122.2 Mbps recv (19.3/17.5 Mbps per peer), 93.27 ms mean RTT, 42 retransmits
  ssc-2216z-f2eecd-sts-node-14-0: 136.7 Mbps send, 134.2 Mbps recv (17.1/16.8 Mbps per peer), 158.04 ms mean RTT, 9 retransmits
  ssc-2216z-f2eecd-sts-node-15-0: 136.0 Mbps send, 126.6 Mbps recv (12.4/11.5 Mbps per peer), 105.56 ms mean RTT, 8 retransmits
  ssc-2216z-f2eecd-sts-node-16-0: 243.9 Mbps send, 254.3 Mbps recv (17.4/18.2 Mbps per peer), 168.79 ms mean RTT, 39 retransmits
  ssc-2216z-f2eecd-sts-node-17-0: 102.5 Mbps send, 95.4 Mbps recv (11.4/10.6 Mbps per peer), 149.20 ms mean RTT, 88 retransmits
  ssc-2216z-f2eecd-sts-node-18-0: 179.0 Mbps send, 187.2 Mbps recv (29.8/31.2 Mbps per peer), 67.64 ms mean RTT, 42 retransmits
  ssc-2216z-f2eecd-sts-node-19-0: 160.6 Mbps send, 169.4 Mbps recv (22.9/24.2 Mbps per peer), 107.70 ms mean RTT, 5 retransmits
  ssc-2216z-f2eecd-sts-node-2-0: 2521.3 Mbps send, 2097.7 Mbps recv (252.1/209.8 Mbps per peer), 65.67 ms mean RTT, 53 retransmits
  ssc-2216z-f2eecd-sts-node-20-0: 369.0 Mbps send, 391.3 Mbps recv (26.4/27.9 Mbps per peer), 80.67 ms mean RTT, 24 retransmits
  ssc-2216z-f2eecd-sts-node-21-0: 163.2 Mbps send, 161.2 Mbps recv (18.1/17.9 Mbps per peer), 78.92 ms mean RTT, 13 retransmits
  ssc-2216z-f2eecd-sts-node-22-0: 277.2 Mbps send, 277.1 Mbps recv (30.8/30.8 Mbps per peer), 191.86 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-23-0: 3822.5 Mbps send, 3807.0 Mbps recv (347.5/346.1 Mbps per peer), 89.98 ms mean RTT, 66 retransmits
  ssc-2216z-f2eecd-sts-node-24-0: 227.6 Mbps send, 229.9 Mbps recv (15.2/15.3 Mbps per peer), 172.15 ms mean RTT, 38 retransmits
  ssc-2216z-f2eecd-sts-node-25-0: 1862.9 Mbps send, 1860.0 Mbps recv (155.2/155.0 Mbps per peer), 6.88 ms mean RTT, 18 retransmits
  ssc-2216z-f2eecd-sts-node-26-0: 510.1 Mbps send, 501.3 Mbps recv (51.0/50.1 Mbps per peer), 120.42 ms mean RTT, 50 retransmits
  ssc-2216z-f2eecd-sts-node-27-0: 387.2 Mbps send, 466.9 Mbps recv (38.7/46.7 Mbps per peer), 179.54 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-28-0: 8096.7 Mbps send, 8370.3 Mbps recv (736.1/760.9 Mbps per peer), 5.28 ms mean RTT, 64 retransmits
  ssc-2216z-f2eecd-sts-node-29-0: 5195.8 Mbps send, 4904.8 Mbps recv (399.7/377.3 Mbps per peer), 41.35 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-3-0: 4674.5 Mbps send, 4965.5 Mbps recv (584.3/620.7 Mbps per peer), 2.58 ms mean RTT, 1189 retransmits
  ssc-2216z-f2eecd-sts-node-30-0: 254.2 Mbps send, 266.6 Mbps recv (12.7/13.3 Mbps per peer), 172.90 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-31-0: 177.0 Mbps send, 174.8 Mbps recv (19.7/19.4 Mbps per peer), 73.43 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-32-0: 96.2 Mbps send, 96.1 Mbps recv (8.0/8.0 Mbps per peer), 253.25 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-33-0: 102.6 Mbps send, 89.8 Mbps recv (11.4/10.0 Mbps per peer), 122.83 ms mean RTT, 3 retransmits
  ssc-2216z-f2eecd-sts-node-34-0: 5214.8 Mbps send, 4972.9 Mbps recv (869.1/828.8 Mbps per peer), 97.69 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-35-0: 55.3 Mbps send, 65.5 Mbps recv (4.3/5.0 Mbps per peer), 290.36 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-36-0: 6991.1 Mbps send, 7124.4 Mbps recv (582.6/593.7 Mbps per peer), 109.36 ms mean RTT, 89 retransmits
  ssc-2216z-f2eecd-sts-node-37-0: 1914.7 Mbps send, 1899.8 Mbps recv (273.5/271.4 Mbps per peer), 148.73 ms mean RTT, 7 retransmits
  ssc-2216z-f2eecd-sts-node-38-0: 8047.4 Mbps send, 6550.1 Mbps recv (731.6/595.5 Mbps per peer), 89.94 ms mean RTT, 68 retransmits
  ssc-2216z-f2eecd-sts-node-39-0: 1154.0 Mbps send, 1147.7 Mbps recv (144.2/143.5 Mbps per peer), 13.91 ms mean RTT, 210 retransmits
  ssc-2216z-f2eecd-sts-node-4-0: 287.4 Mbps send, 276.2 Mbps recv (57.5/55.2 Mbps per peer), 152.90 ms mean RTT, 15 retransmits
  ssc-2216z-f2eecd-sts-node-40-0: 459.9 Mbps send, 480.9 Mbps recv (30.7/32.1 Mbps per peer), 179.45 ms mean RTT, 7 retransmits
  ssc-2216z-f2eecd-sts-node-41-0: 1312.2 Mbps send, 1310.5 Mbps recv (164.0/163.8 Mbps per peer), 13.99 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-42-0: 168.5 Mbps send, 168.3 Mbps recv (21.1/21.0 Mbps per peer), 107.30 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-43-0: 2307.2 Mbps send, 2444.0 Mbps recv (329.6/349.1 Mbps per peer), 9.42 ms mean RTT, 943 retransmits
  ssc-2216z-f2eecd-sts-node-44-0: 2755.8 Mbps send, 2756.1 Mbps recv (306.2/306.2 Mbps per peer), 125.58 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-45-0: 267.7 Mbps send, 268.0 Mbps recv (44.6/44.7 Mbps per peer), 186.53 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-46-0: 605.1 Mbps send, 622.2 Mbps recv (67.2/69.1 Mbps per peer), 185.93 ms mean RTT, 2 retransmits
  ssc-2216z-f2eecd-sts-node-47-0: 2664.3 Mbps send, 2642.1 Mbps recv (266.4/264.2 Mbps per peer), 25.12 ms mean RTT, 124 retransmits
  ssc-2216z-f2eecd-sts-node-48-0: 818.5 Mbps send, 1067.5 Mbps recv (81.8/106.8 Mbps per peer), 96.08 ms mean RTT, 155 retransmits
  ssc-2216z-f2eecd-sts-node-49-0: 610.4 Mbps send, 383.6 Mbps recv (55.5/34.9 Mbps per peer), 167.48 ms mean RTT, 2 retransmits
  ssc-2216z-f2eecd-sts-node-5-0: 168.7 Mbps send, 171.9 Mbps recv (15.3/15.6 Mbps per peer), 106.84 ms mean RTT, 2 retransmits
  ssc-2216z-f2eecd-sts-node-50-0: 130.0 Mbps send, 141.2 Mbps recv (10.0/10.9 Mbps per peer), 171.86 ms mean RTT, 4 retransmits
  ssc-2216z-f2eecd-sts-node-51-0: 166.2 Mbps send, 166.2 Mbps recv (15.1/15.1 Mbps per peer), 293.88 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-52-0: 319.9 Mbps send, 324.2 Mbps recv (16.8/17.1 Mbps per peer), 57.04 ms mean RTT, 9 retransmits
  ssc-2216z-f2eecd-sts-node-53-0: 131.8 Mbps send, 138.3 Mbps recv (16.5/17.3 Mbps per peer), 171.81 ms mean RTT, 27 retransmits
  ssc-2216z-f2eecd-sts-node-54-0: 126.6 Mbps send, 119.3 Mbps recv (12.7/11.9 Mbps per peer), 190.76 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-55-0: 1454.8 Mbps send, 1461.5 Mbps recv (161.6/162.4 Mbps per peer), 16.49 ms mean RTT, 62 retransmits
  ssc-2216z-f2eecd-sts-node-56-0: 127.4 Mbps send, 127.5 Mbps recv (21.2/21.2 Mbps per peer), 191.05 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-57-0: 64.4 Mbps send, 64.4 Mbps recv (6.4/6.4 Mbps per peer), 377.31 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-58-0: 2057.4 Mbps send, 2060.3 Mbps recv (342.9/343.4 Mbps per peer), 166.74 ms mean RTT, 2 retransmits
  ssc-2216z-f2eecd-sts-node-59-0: 4184.6 Mbps send, 3924.9 Mbps recv (418.5/392.5 Mbps per peer), 12.27 ms mean RTT, 88 retransmits
  ssc-2216z-f2eecd-sts-node-6-0: 422.0 Mbps send, 329.7 Mbps recv (60.3/47.1 Mbps per peer), 116.04 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-60-0: 515.9 Mbps send, 516.2 Mbps recv (129.0/129.0 Mbps per peer), 152.99 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-61-0: 138.1 Mbps send, 138.2 Mbps recv (13.8/13.8 Mbps per peer), 176.59 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-62-0: 116.8 Mbps send, 132.5 Mbps recv (16.7/18.9 Mbps per peer), 184.27 ms mean RTT, 8 retransmits
  ssc-2216z-f2eecd-sts-node-63-0: 195.2 Mbps send, 195.2 Mbps recv (15.0/15.0 Mbps per peer), 125.00 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-64-0: 223.5 Mbps send, 219.6 Mbps recv (37.2/36.6 Mbps per peer), 224.49 ms mean RTT, 2 retransmits
  ssc-2216z-f2eecd-sts-node-65-0: 131.7 Mbps send, 141.7 Mbps recv (22.0/23.6 Mbps per peer), 171.57 ms mean RTT, 5 retransmits
  ssc-2216z-f2eecd-sts-node-66-0: 1415.5 Mbps send, 1611.8 Mbps recv (202.2/230.3 Mbps per peer), 80.18 ms mean RTT, 28 retransmits
  ssc-2216z-f2eecd-sts-node-67-0: 2332.4 Mbps send, 2334.4 Mbps recv (333.2/333.5 Mbps per peer), 10.30 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-68-0: 217.0 Mbps send, 199.7 Mbps recv (21.7/20.0 Mbps per peer), 111.75 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-node-69-0: 82.9 Mbps send, 89.3 Mbps recv (6.4/6.9 Mbps per peer), 271.34 ms mean RTT, 8 retransmits
  ssc-2216z-f2eecd-sts-node-7-0: 90.6 Mbps send, 90.7 Mbps recv (12.9/13.0 Mbps per peer), 268.63 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-70-0: 5459.4 Mbps send, 5909.8 Mbps recv (496.3/537.3 Mbps per peer), 58.96 ms mean RTT, 19 retransmits
  ssc-2216z-f2eecd-sts-node-71-0: 2443.9 Mbps send, 2428.7 Mbps recv (349.1/347.0 Mbps per peer), 158.19 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-72-0: 98.1 Mbps send, 102.6 Mbps recv (8.2/8.6 Mbps per peer), 237.64 ms mean RTT, 4 retransmits
  ssc-2216z-f2eecd-sts-node-73-0: 1285.0 Mbps send, 1115.1 Mbps recv (160.6/139.4 Mbps per peer), 17.99 ms mean RTT, 11 retransmits
  ssc-2216z-f2eecd-sts-node-74-0: 3493.9 Mbps send, 3379.9 Mbps recv (349.4/338.0 Mbps per peer), 112.78 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-75-0: 1615.3 Mbps send, 1838.6 Mbps recv (201.9/229.8 Mbps per peer), 99.13 ms mean RTT, 581 retransmits
  ssc-2216z-f2eecd-sts-node-76-0: 6095.6 Mbps send, 7419.9 Mbps recv (609.6/742.0 Mbps per peer), 31.48 ms mean RTT, 356 retransmits
  ssc-2216z-f2eecd-sts-node-8-0: 4813.2 Mbps send, 4986.7 Mbps recv (343.8/356.2 Mbps per peer), 88.17 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-node-9-0: 226.6 Mbps send, 226.0 Mbps recv (22.7/22.6 Mbps per peer), 107.81 ms mean RTT, 1 retransmits
  ssc-2216z-f2eecd-sts-pn-0: 124.4 Mbps send, 134.5 Mbps recv (20.7/22.4 Mbps per peer), 181.95 ms mean RTT, 6 retransmits
  ssc-2216z-f2eecd-sts-pn-1: 3443.2 Mbps send, 3428.9 Mbps recv (491.9/489.8 Mbps per peer), 7.77 ms mean RTT, 19 retransmits
  ssc-2216z-f2eecd-sts-pn-2: 109.0 Mbps send, 122.0 Mbps recv (15.6/17.4 Mbps per peer), 194.93 ms mean RTT, 26 retransmits
  ssc-2216z-f2eecd-sts-sdf-0: 88.5 Mbps send, 88.4 Mbps recv (14.8/14.7 Mbps per peer), 275.08 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-sdf-1: 1486.0 Mbps send, 2453.4 Mbps recv (297.2/490.7 Mbps per peer), 11.46 ms mean RTT, 588 retransmits
  ssc-2216z-f2eecd-sts-sdf-2: 788.6 Mbps send, 787.7 Mbps recv (98.6/98.5 Mbps per peer), 30.88 ms mean RTT, 5 retransmits
  ssc-2216z-f2eecd-sts-sp-0: 155.7 Mbps send, 155.7 Mbps recv (17.3/17.3 Mbps per peer), 156.63 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-sp-1: 221.1 Mbps send, 234.1 Mbps recv (55.3/58.5 Mbps per peer), 169.80 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-sp-2: 154.2 Mbps send, 136.2 Mbps recv (22.0/19.5 Mbps per peer), 157.02 ms mean RTT, 4 retransmits
  ssc-2216z-f2eecd-sts-wx-0: 222.0 Mbps send, 224.5 Mbps recv (37.0/37.4 Mbps per peer), 217.55 ms mean RTT, 0 retransmits
  ssc-2216z-f2eecd-sts-wx-1: 4803.4 Mbps send, 3285.8 Mbps recv (600.4/410.7 Mbps per peer), 4.53 ms mean RTT, 681 retransmits
  ssc-2216z-f2eecd-sts-wx-2: 884.5 Mbps send, 872.0 Mbps recv (126.4/124.6 Mbps per peer), 122.79 ms mean RTT, 0 retransmits

It seems like the total bandwidth of each node is pretty variable, and lower than I expected. There were several runs where even the small, 7 node max TPS topology had nodes with < 100 Mbps total bandwidth, which would likely hurt the MaxTPS test. I think we need to do a little more digging to figure out what's causing this variance. It might be related to the way we install network latency, as bandwidth is much higher when we don't install manual latency. It might also be an artifact of iperf3 in high-latency environments; I'm not sure. I suspect it has to do with TCP socket buffer sizes or something related. Here's the small 7 node topology with no latency:

Test ID: benchmark-20250829-224249
Timestamp: 2025-08-29 22:42:52

Infrastructure Configuration:
- Nodes: 7
- Average peers per node: 6.0
- Total connections: 42
- Test duration: 30 seconds
- Network delays: disabled

Aggregate Performance Metrics:
- Connection failures: 0
- Average total throughput per node: 2827.4 Mbps send, 2827.4 Mbps recv
- Average per-peer throughput: 471.2 Mbps send, 471.2 Mbps recv

RTT Latency Statistics (across all nodes):
- Minimum: 0.29 ms
- Mean: 5.79 ms
- Maximum: 13.73 ms

Individual Node Results:
  ssc-2242z-fb3eb9-sts-lo-0: 1062.1 Mbps send, 1076.9 Mbps recv (177.0/179.5 Mbps per peer), 4.99 ms mean RTT, 907 retransmits
  ssc-2242z-fb3eb9-sts-node-0-0: 1319.6 Mbps send, 1898.9 Mbps recv (219.9/316.5 Mbps per peer), 4.29 ms mean RTT, 891 retransmits
  ssc-2242z-fb3eb9-sts-node-1-0: 4238.9 Mbps send, 4467.2 Mbps recv (706.5/744.5 Mbps per peer), 5.05 ms mean RTT, 348 retransmits
  ssc-2242z-fb3eb9-sts-node-2-0: 4821.3 Mbps send, 4109.0 Mbps recv (803.6/684.8 Mbps per peer), 6.25 ms mean RTT, 594 retransmits
  ssc-2242z-fb3eb9-sts-node-3-0: 1630.0 Mbps send, 1407.9 Mbps recv (271.7/234.7 Mbps per peer), 8.59 ms mean RTT, 419 retransmits
  ssc-2242z-fb3eb9-sts-pn-0: 1442.0 Mbps send, 1330.9 Mbps recv (240.3/221.8 Mbps per peer), 4.65 ms mean RTT, 342 retransmits
  ssc-2242z-fb3eb9-sts-sdf-0: 5278.2 Mbps send, 5501.3 Mbps recv (879.7/916.9 Mbps per peer), 6.69 ms mean RTT, 265 retransmits

@bboston7 (Contributor) left a comment

Wow, the variance in individual node results is shocking. It seems like this would definitely impact our various performance tests.

Comment on lines +18 to 21
open BenchmarkDaemonSet
open ApiRateLimit
open System
Contributor

Some more unused imports: BenchmarkDaemonSet, ApiRateLimit, and System

Contributor Author

I think this is from an earlier commit, as they're all used now

Contributor

BenchmarkDaemonSet and ApiRateLimit are still showing up as unused because you're using them fully qualified in this file (for example, BenchmarkDaemonSet.createTcpTuningDaemonSet). With the import, you can just call createTcpTuningDaemonSet directly. If all usages are fully qualified, you don't need the import. Style-wise, I think either is fine, but doing both is unnecessary.

@SirTyson (Contributor Author) commented Sep 2, 2025

Wow, the variance in individual node results is shocking. It seems like this would definitely impact our various performance tests.

Ya, looking into this a bit more, I'm fairly confident it has to do with the default TCP buffer sizes being too small given the way in which we simulate latency. That being said, it looks like a lot of other blockchains recommend turning kernel network settings way up, which might be something we want to do as well. I'm currently working on a test to verify this behavior.
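
As a quick sanity check on that theory (the numbers here are illustrative, not measurements from this cluster): a single TCP connection can carry at most roughly window / RTT, so with a typical Linux default tcp_rmem maximum of about 6 MB and ~150 ms of injected delay, the per-connection ceiling is around 6 MB * 8 / 0.15 s ≈ 320 Mbps, regardless of the underlying link speed.

  # Inspect the current buffer ceilings on a node (read-only)
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max net.core.wmem_max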

@anupsdf anupsdf requested a review from Copilot September 2, 2025 20:41
Copilot AI left a comment

Pull Request Overview

This PR introduces a network infrastructure benchmarking feature that mirrors the stellar-core network topology to measure network performance characteristics. The benchmark uses iperf3 to simulate P2P traffic patterns and provides detailed performance metrics before or instead of running stellar-core tests.

  • Bidirectional performance testing using iperf3 with coordinated traffic generation
  • Infrastructure topology mirroring with identical peer connections and network delays
  • Comprehensive results parsing and reporting with per-node and aggregate metrics

Reviewed Changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 6 comments.

Summary per file:
  src/scripts/start-benchmark-client.sh: Client script for coordinated iperf3 bidirectional testing
  src/scripts/parse_benchmark_results.py: Python parser for aggregating and formatting benchmark results
  src/FSLibrary/StellarSupercluster.fs: Main entry point integration for benchmark execution
  src/FSLibrary/StellarStatefulSets.fs: Core benchmark orchestration and result collection
  src/FSLibrary/StellarMissionContext.fs: Mission context configuration fields
  src/FSLibrary/StellarDestination.fs: File path utility enhancement
  src/FSLibrary/StellarCoreCfg.fs: Stellar core configuration update
  src/FSLibrary/FSLibrary.fsproj: Project file dependency addition
  src/FSLibrary/BenchmarkDaemonSet.fs: Kubernetes resource creation for benchmark pods
  src/FSLibrary.Tests/Tests.fs: Test configuration updates
  src/App/Program.fs: Command line argument parsing for benchmark flags


Comment on lines +300 to +348
imbalance_percent = abs(total_network_send - total_network_recv) / total_network_send * 100
if imbalance_percent > 5.0: # Alert if more than 5% imbalance
Copilot AI commented Sep 2, 2025

Division by zero vulnerability when total_network_send is 0. This check should be moved inside the existing if total_network_send > 0: condition on line 299.

@SirTyson (Contributor Author) commented Sep 3, 2025

I've identified the issue with the bandwidth and added a fix in the latest commit. When installing latency, the default TCP buffer configs were far too small (the container fills the buffers, then artificially waits before servicing them to simulate latency). This was destroying node bandwidth.

I've added a flag, --enable-tcp-tuning. When set, I spin up a DaemonSet that runs a privileged pod on each node and sets the kernel networking settings such that we can still have artificial latency but maintain bandwidth. Here are the results from running without the flag:

Test ID: benchmark-20250903-200906
Timestamp: 2025-09-03 20:09:13

Infrastructure Configuration:
- Nodes: 7
- Average peers per node: 6.0
- Total connections: 42
- Test duration: 30 seconds
- Network delays: enabled

Aggregate Performance Metrics:
- Connection failures: 0
- Average total throughput per node: 2826.1 Mbps send, 2826.1 Mbps recv
- Average per-peer throughput: 471.0 Mbps send, 471.0 Mbps recv

RTT Latency Statistics (across all nodes):
- Minimum: 0.02 ms
- Mean: 85.89 ms
- Maximum: 239.93 ms

Individual Node Results:
  ssc-2008z-47d60a-sts-lo-0: 476.7 Mbps send, 2360.0 Mbps recv (79.5/393.3 Mbps per peer), 85.87 ms mean RTT, 38 retransmits
  ssc-2008z-47d60a-sts-node-0-0: 285.7 Mbps send, 1977.8 Mbps recv (47.6/329.6 Mbps per peer), 84.44 ms mean RTT, 39 retransmits
  ssc-2008z-47d60a-sts-node-1-0: 492.6 Mbps send, 2460.2 Mbps recv (82.1/410.0 Mbps per peer), 33.28 ms mean RTT, 46 retransmits
  ssc-2008z-47d60a-sts-node-2-0: 7665.0 Mbps send, 8907.7 Mbps recv (1277.5/1484.6 Mbps per peer), 94.57 ms mean RTT, 16 retransmits
  ssc-2008z-47d60a-sts-node-3-0: 2723.6 Mbps send, 755.0 Mbps recv (453.9/125.8 Mbps per peer), 130.81 ms mean RTT, 0 retransmits
  ssc-2008z-47d60a-sts-pn-0: 7718.8 Mbps send, 1307.5 Mbps recv (1286.5/217.9 Mbps per peer), 136.16 ms mean RTT, 0 retransmits
  ssc-2008z-47d60a-sts-sdf-0: 420.1 Mbps send, 2014.4 Mbps recv (70.0/335.7 Mbps per peer), 36.07 ms mean RTT, 59 retransmits

vs. with the flag:

Test ID: benchmark-20250903-201109
Timestamp: 2025-09-03 20:11:16

Infrastructure Configuration:
- Nodes: 7
- Average peers per node: 6.0
- Total connections: 42
- Test duration: 30 seconds
- Network delays: enabled

Aggregate Performance Metrics:
- Connection failures: 0
- Average total throughput per node: 12489.9 Mbps send, 12489.9 Mbps recv
- Average per-peer throughput: 2081.7 Mbps send, 2081.7 Mbps recv

RTT Latency Statistics (across all nodes):
- Minimum: 0.03 ms
- Mean: 89.02 ms
- Maximum: 279.92 ms

Individual Node Results:
  ssc-2010z-2c9d39-sts-lo-0: 17597.3 Mbps send, 17075.3 Mbps recv (2932.9/2845.9 Mbps per peer), 82.10 ms mean RTT, 0 retransmits
  ssc-2010z-2c9d39-sts-node-0-0: 23801.5 Mbps send, 21743.1 Mbps recv (3966.9/3623.8 Mbps per peer), 61.71 ms mean RTT, 81 retransmits
  ssc-2010z-2c9d39-sts-node-1-0: 6679.8 Mbps send, 8054.9 Mbps recv (1113.3/1342.5 Mbps per peer), 105.77 ms mean RTT, 46 retransmits
  ssc-2010z-2c9d39-sts-node-2-0: 4226.6 Mbps send, 1938.7 Mbps recv (704.4/323.1 Mbps per peer), 116.38 ms mean RTT, 0 retransmits
  ssc-2010z-2c9d39-sts-node-3-0: 2358.1 Mbps send, 356.5 Mbps recv (393.0/59.4 Mbps per peer), 119.72 ms mean RTT, 0 retransmits
  ssc-2010z-2c9d39-sts-pn-0: 18845.0 Mbps send, 23189.2 Mbps recv (3140.8/3864.9 Mbps per peer), 41.90 ms mean RTT, 7 retransmits
  ssc-2010z-2c9d39-sts-sdf-0: 13921.0 Mbps send, 15071.7 Mbps recv (2320.2/2512.0 Mbps per peer), 95.58 ms mean RTT, 7 retransmits

This was definitely a significant bottleneck. 23.0.1 MaxTPS went up by about 22% with this flag set, from 2187 to 2725. We've had many experiments recently that didn't show any real TPS increase, and this was most likely to blame.

To run performance tests, add the following flags to your command: --benchmark-infra true --enable-tcp-tuning. This will run a brief network performance benchmark with the TCP settings applied.

I'm not sure if this is the best way to do this from an implementation perspective, and it's kinda hacky. I couldn't find a way to change these kernel settings from inside the container, so I change them on the node itself. This means the setting changes persist run-to-run. To account for this, if the TCP optimization flag is not set, I still "upgrade" the TCP settings, but back to the non-optimal Linux defaults. This is purely for A-B testing. I think longer term this shouldn't be a flag and we should just unconditionally upgrade the TCP settings to the better variant. However, given that we have lots of perf testing in flight right now, I think we still need the A-B test.
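
For reference, a rough sketch of the flavor of host-level changes such a privileged DaemonSet script might apply; the specific keys and values below are illustrative assumptions, not the PR's actual settings:

  # Raise socket buffer ceilings so high-RTT paths can still fill their windows.
  sysctl -w net.core.rmem_max=134217728
  sysctl -w net.core.wmem_max=134217728
  sysctl -w net.ipv4.tcp_rmem="4096 131072 134217728"
  sysctl -w net.ipv4.tcp_wmem="4096 131072 134217728"

Because these are host-wide knobs, applying them from a privileged hostNetwork pod effectively changes the node itself, which is why the settings persist across runs.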

@SirTyson force-pushed the latency-benchmark branch 2 times, most recently from d9b5f42 to 9fb1d78 on September 4, 2025 00:05
@SirTyson force-pushed the latency-benchmark branch 2 times, most recently from 46f63f7 to 7e098f8 on October 8, 2025 22:43
@SirTyson (Contributor Author) commented Oct 8, 2025

I've addressed all the comments and rebased, so this should be ready for review.

I've finally got the network stable. There were two required changes. First, we needed to modify the TCP settings of the workers themselves to provide suitable bandwidth when artificial delay was enabled. Second, we needed to add fq for peer fairness. Without fq, given the way we install latency, whichever node has the lowest latency dominates our bandwidth. With fq, peers are served more fairly regardless of their latency (a minimal sketch of the fq change follows).
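
For illustration, enabling fq in isolation looks like the snippet below; the interface name is a placeholder, and combining fq with the netem delay rules needs a classful layout rather than simply replacing the root qdisc:

  # Make fq the default qdisc for new interfaces, or install it explicitly on one.
  sysctl -w net.core.default_qdisc=fq
  tc qdisc replace dev eth0 root fq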

To install the cluster-level TCP improvements, run with --enable-tcp-tuning. These changes are persistent, so they only need to be applied once. For this reason, the flag is not set by default, nor do I recommend setting it. However, I've updated the reproducible max TPS guide to invoke this flag.

Because the TCP fix is cluster-wide and persistent, there's really no good way of A-B testing older images without interfering with other runs on the cluster. I did some benchmarking against the 23.0.1 release (docker-registry.services.stellar-ops.com/dev/stellar-core:23.0.1-2668.050eacf11.focal-perftests, for reference). Before these changes, I saw around 2250 TPS; after these changes, I see around 2650 TPS. Given that we can't really back-test, I'd recommend re-running any experiments with the new cluster settings to establish a new baseline for perf work.

@bboston7 (Contributor) left a comment

I think this is mostly looking good. It's a little hard to review the correctness of the new options, but they seem properly contained behind flags and nothing jumped out to me as incorrect.

Feel free to ignore the stylistic nitpicks if you disagree with them, but I do think the comments around the TCP tuning flow are worth addressing.

Comment on lines +273 to +325
V1DaemonSet(
metadata = V1ObjectMeta(name = name, labels = labels, namespaceProperty = nCfg.NamespaceProperty),
spec =
V1DaemonSetSpec(
selector = V1LabelSelector(matchLabels = dict [ ("app", name) ]),
template =
V1PodTemplateSpec(
metadata = V1ObjectMeta(labels = dict [ ("app", name) ]),
spec =
V1PodSpec(
hostNetwork = System.Nullable<bool>(true),
hostPID = System.Nullable<bool>(true),
containers =
[| V1Container(
name = name,
image = "busybox:latest",
command = [| "/bin/sh" |],
args = [| sprintf "/scripts/%s" scriptName; "--daemon" |],
volumeMounts =
[| V1VolumeMount(
name = "tcp-tuning-script",
mountPath = "/scripts",
readOnlyProperty = System.Nullable<bool>(true)
) |],
securityContext =
V1SecurityContext(
privileged = System.Nullable<bool>(true),
capabilities = V1Capabilities(add = [| "SYS_ADMIN"; "NET_ADMIN" |])
),
resources =
V1ResourceRequirements(
requests =
dict [ ("memory", ResourceQuantity("50Mi"))
("cpu", ResourceQuantity("10m")) ],
limits =
dict [ ("memory", ResourceQuantity("100Mi"))
("cpu", ResourceQuantity("50m")) ]
)
) |],
volumes =
[| V1Volume(
name = "tcp-tuning-script",
configMap =
V1ConfigMapVolumeSource(
name = "tcp-tuning-script",
defaultMode = System.Nullable<int>(0o755)
)
) |],
tolerations = [| V1Toleration(operatorProperty = "Exists") |]
)
)
)
)
Contributor

Nit: The nesting gets pretty deep here. This might be more readable by building up some of these inner records as local variables

"Failed to retrieve results from pod %s (kubectl exec failed): %s"
pod.Metadata.Name
stderr
else if String.IsNullOrWhiteSpace(output) then
Contributor

Probably not worth fixing throughout this PR, but as a general style nit for the future: The parentheses after this function call are unnecessary. This can be String.IsNullOrWhiteSpace output, which is the idiomatic form for ML-like languages. Parentheses are only needed around more complex expressions, not variable lookups or literals.

# Keep running for a bit to ensure settings propagate
sleep 10
echo "TCP settings verification complete on node $(hostname)"
sleep infinity
Contributor

Why do we need this pseudo-daemon mode? Does supercluster just not like it if a pod exits on its own?

namespaceContent = NamespaceContent(self, nCfg.missionContext.apiRateLimit, nCfg.NamespaceProperty)
)
// services, ingresses or anything. Optionally sets up TCP tuning daemonsets.
member self.MakeEmptyFormation(nCfg: NetworkCfg, ?skipTcpConfig: bool) : StellarFormation =
Contributor

Why do we need the skipTcpConfig argument? Can we use nCfg.missionContext.enableTcpTuning instead?

Comment on lines +444 to +445
member self.DeployTcpTuningDaemonSet() : unit =
if self.NetworkCfg.missionContext.enableTcpTuning then
Contributor

I think that this function should either be renamed to something like MaybeDeployTcpTuningDaemonSet, or the conditional on enableTcpTuning should be removed. As it is, the call sites for DeployTcpTuningDaemonSet look a little strange.
