Most data center fiber specs were designed for web traffic. AI training and inference are a fundamentally different problem — one that demands ultra-low latency, massive east-west throughput, and a network architecture built from the ground up for GPU cluster workloads at scale.
FS-1's network topology is designed from the GPU outward — spine-leaf fabric for uniform east-west latency, redundant border routers, and physically diverse fiber entry from multiple carriers. Every design decision optimizes for the collective communication patterns that dominate AI training workloads.
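To make that concrete, the sketch below sizes a non-blocking two-tier spine-leaf fabric. Every figure in it (1,024 GPUs, one 400G NIC per GPU, 64-port leaf switches) is an illustrative assumption for the example, not an FS-1 specification.

```python
import math

# Back-of-the-envelope sizing for a non-blocking two-tier spine-leaf fabric.
# All inputs are hypothetical, chosen only to illustrate the arithmetic.
GPUS = 1024                # target cluster size
NICS_PER_GPU = 1           # one 400G NIC per GPU
LEAF_PORTS = 64            # 64 x 400G ports per leaf switch

# Non-blocking Clos: split each leaf's ports evenly between hosts and spines.
host_ports_per_leaf = LEAF_PORTS // 2
uplinks_per_leaf = LEAF_PORTS - host_ports_per_leaf

leaves = math.ceil(GPUS * NICS_PER_GPU / host_ports_per_leaf)
spines = uplinks_per_leaf  # one link from every leaf to every spine
oversubscription = host_ports_per_leaf / uplinks_per_leaf  # 1.0 = non-blocking

print(f"{leaves} leaves, {spines} spines, {oversubscription:.1f}:1 oversubscription")
# Any two GPUs on different leaves sit exactly leaf -> spine -> leaf apart,
# which is what makes east-west latency uniform across the cluster.
```

Holding oversubscription at 1:1 is what keeps all-reduce and all-to-all collectives from bottlenecking at the spine layer.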
FS-1's connectivity infrastructure is specified for AI workloads at scale — not adapted from enterprise assumptions. Every parameter was selected to support GPU cluster operations at the density and throughput those workloads demand.
| Parameter | Specification |
| --- | --- |
| Fiber Entry | Multi-carrier diverse — physically separate conduits, separate routes to campus |
| Backbone Capacity | 400G-ready · 800G infrastructure path on the roadmap |
| Fiber Type | Single-mode OS2 (9/125μm) — campus backbone and long-haul runs |
| OM5 Multimode | Available for high-density short-reach runs supporting 400G SWDM4 |
| MMR | Carrier-neutral Meet-Me Room — multiple carrier termination points |
| Path Redundancy | N+1 minimum on all critical fiber paths · diverse physical routing |
| Switching Fabric | Spine-leaf architecture · uniform east-west latency across all GPU pairs |
| Protocol Support | RoCEv2 lossless fabric · ECN and PFC enabled · DCQCN congestion control (sketch below) |
| Latency Target | <1μs intra-rack · <5μs rack-to-rack (worked example below) |
| Transceiver Standards | QSFP-DD 400G · OSFP 800G-ready ports on border layer |
| DCI | Data Center Interconnect planned for campus expansion — dark fiber and coherent DWDM (capacity math below) |
| Peering | Carrier-neutral — Tier 1 and regional fiber provider access |
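On the Protocol Support row: the sketch below shows the shape of DCQCN-style sender rate control, heavily simplified from the published algorithm (Zhu et al., SIGCOMM 2015). The class name, gain, and rate constants are illustrative, not FS-1 tuning values.

```python
class DcqcnSender:
    """Minimal DCQCN-style rate control, simplified for illustration."""

    def __init__(self, line_rate_gbps: float = 400.0):
        self.rc = line_rate_gbps   # current sending rate
        self.rt = line_rate_gbps   # target rate to recover toward
        self.alpha = 1.0           # running estimate of congestion severity
        self.g = 1.0 / 16          # gain for the alpha moving average

    def on_cnp(self) -> None:
        # The receiver saw ECN-marked packets and returned a Congestion
        # Notification Packet: remember the current rate, raise the
        # congestion estimate, and cut the rate multiplicatively.
        self.rt = self.rc
        self.alpha = (1 - self.g) * self.alpha + self.g
        self.rc *= 1 - self.alpha / 2

    def on_recovery_timer(self) -> None:
        # No CNP in the last period: decay the congestion estimate and
        # recover halfway toward the target rate (fast recovery; the
        # additive-increase stages of full DCQCN are omitted here).
        self.alpha *= 1 - self.g
        self.rc = (self.rt + self.rc) / 2

sender = DcqcnSender()
sender.on_cnp()        # first CNP: rate drops from 400 to 200 Gb/s
```

PFC is the lossless backstop in this design: ECN marking on the switches triggers CNPs early, so senders back off before pause frames ever need to fire.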
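The Latency Target row is easy to sanity-check with simple arithmetic. The constants below (cut-through hop latency, path length, packet size) are generic ballpark values, not measured FS-1 numbers.

```python
# Rough latency budget for the rack-to-rack target.
PACKET_BYTES = 4096
LINK_GBPS = 400            # 400 Gb/s = 400 bits per nanosecond
SWITCH_HOP_NS = 500        # assumed cut-through ASIC latency per hop
FIBER_NS_PER_M = 5         # ~5 ns per meter of silica fiber
PATH_M = 100               # assumed total fiber run, NIC to NIC
HOPS = 3                   # leaf -> spine -> leaf

# Serialization is paid once at the sending NIC; cut-through switches
# forward the frame before it has fully arrived.
serialization_ns = PACKET_BYTES * 8 / LINK_GBPS
total_ns = HOPS * SWITCH_HOP_NS + serialization_ns + PATH_M * FIBER_NS_PER_M

print(f"~{total_ns / 1000:.1f} us end to end")  # ~2.1 us, inside the <5 us target
```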
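And on the DCI row, a rough sense of what dark fiber plus coherent DWDM buys, assuming a 400ZR-style channel plan on a standard 75 GHz grid (an assumption for the example, not the FS-1 channel plan):

```python
# Ballpark capacity for one dark-fiber pair lit with coherent DWDM.
C_BAND_GHZ = 4800          # usable C-band spectrum, ~4.8 THz
CHANNEL_GHZ = 75           # one 400G coherent channel per 75 GHz grid slot
GBPS_PER_CHANNEL = 400

channels = C_BAND_GHZ // CHANNEL_GHZ
tbps = channels * GBPS_PER_CHANNEL / 1000
print(f"{channels} channels x {GBPS_PER_CHANNEL}G = {tbps:.1f} Tb/s per fiber pair")
# -> 64 channels, 25.6 Tb/s, before extended bands or higher-order formats
```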
400G is where serious AI infrastructure operates today. 800G is where it's heading. Photonic switching, co-packaged optics, and in-network computing are the technologies that will define the next generation of AI cluster performance. FS-1's infrastructure is designed with that trajectory in mind — not locked into today's assumptions.
The physical infrastructure — conduit, cable pathways, power distribution to network hardware, and the switching room footprint — is sized to support the upgrade to 800G and beyond without a forklift replacement of the building fabric.
Full connectivity documentation — fiber routing diagrams, switching architecture specs, carrier agreements, and redundancy validation — is available under NDA for qualified prospective customers.