Update network-considerations.md #49


Merged
merged 3 commits into from
Apr 22, 2025
17 changes: 7 additions & 10 deletions docs/deployments/deployment-planning/network-considerations.md
@@ -10,23 +10,23 @@ performance and reliability of its virtual block storage devices (logical volume

Protocol-wise, simplyblock implements
[NVMe over Fabrics (NVMe-oF)](../../important-notes/terminology.md#nvme-of-nvme-over-fabrics), meaning that simplyblock
does not require any specific network infrastructure such as Fibre Channel or Infiniband, but works over commodity
Ethernet interconnects.
does not require any specific network infrastructure such as Fibre Channel or InfiniBand, but works over any
Ethernet interconnect.

For data transmission, simplyblock provides
[NVMe over TCP (NVMe/TCP)](../../important-notes/terminology.md#nvmetcp-nvme-over-tcp) and
[NVMe over RDMA over Converged Ethernet (NVMe/RoCE)](../../important-notes/terminology.md#nvmeroce-nvme-over-rdma-over-converged-ethernet).
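Because logical volumes are exported over standard NVMe-oF transports, they can be attached from any Linux initiator with the stock `nvme-cli` tool. The following is a hypothetical sketch for the NVMe/TCP case; the address, port, and NQN are placeholders, not simplyblock defaults.

```shell
# Discover the subsystems exported by a hypothetical NVMe/TCP target.
nvme discover --transport=tcp --traddr=192.168.10.5 --trsvcid=4420

# Connect to one subsystem; the NQN below is a placeholder.
nvme connect --transport=tcp --traddr=192.168.10.5 --trsvcid=4420 \
    --nqn=nqn.2016-06.io.example:lvol1

# The attached logical volume then shows up as a regular /dev/nvmeXnY device.
nvme list
```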

## Network Infrastructure

In terms of bandwidth, simplyblock recommends at least 40GBit/s per second interconnects, but higher is better.
In terms of bandwidth, simplyblock recommends interconnects of at least 10 Gbit/s, but higher is better.
Especially with a high number of cluster nodes and logical volumes, simplyblock can easily saturate interconnects
of 200 Gbit/s and more.

!!! recommendation
Simplyblock recommends NVIDIA Mellanox network adapters. However, every network adapter, including virtual
ones, will work. If using virtual machines, the physical network adapter should be made available to the VM
using PCI-e passthrough (IOMMU).
using PCI-e passthrough (IOMMU).

Additionally, simplyblock recommends using a physically separated storage network or a VLAN to create a virtually
separated network. This can improve performance and minimize network contention.
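Where a dedicated physical network is not available, a VLAN sub-interface can provide the virtual separation. A minimal sketch with iproute2 follows; the parent interface name, VLAN ID, and subnet are placeholder assumptions.

```shell
# Create a VLAN sub-interface (placeholder: parent eth1, VLAN ID 100)
# to carry storage traffic separately from other workloads.
ip link add link eth1 name eth1.100 type vlan id 100

# Assign a storage-network address (placeholder subnet) and bring it up.
ip addr add 10.100.0.11/24 dev eth1.100
ip link set dev eth1.100 up
```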
@@ -48,10 +48,7 @@ script to pre-test the most important requirements to ensure a smooth installation

Additionally, simplyblock strongly recommends designing any network interconnect as a fully redundant connection. All
commonly found solutions to achieve that are supported, including but not limited to LACP and static LAG configurations,
stacked switches, bonded NICs.

!!! danger
Simplyblock, internally, always assumes the interconnect to be reliable, failing to provide such an interconnect
may lead to data loss in failure situations.

stacked switches, and bonded NICs. Depending on the erasure coding scheme chosen and the number of nodes in a
cluster, simplyblock tolerates either a single node outage or a concurrent dual node outage, including network
outages. If the network fails for more nodes than the cluster tolerates, this will cause a cluster-down and an
I/O suspension event.
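One common way to achieve the recommended redundancy is an LACP (802.3ad) bond across two NICs. The sketch below uses iproute2 with placeholder interface names; the switch ports must be configured for LACP as well.

```shell
# Create an LACP bond (802.3ad) with link monitoring every 100 ms.
ip link add bond0 type bond mode 802.3ad miimon 100

# Enslave two NICs (placeholder names); slaves must be down first.
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0

# Bring the bond up; either physical link can now fail without
# interrupting the interconnect.
ip link set bond0 up
```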