AI-Grade Connectivity · FS-1 · Eastern US

The network
AI clusters demand.

Most data center fiber specs were designed for web traffic. AI training and inference pose a fundamentally different problem, requiring ultra-low latency, massive east-west throughput, and a network architecture built from the ground up for GPU cluster workloads at scale.

Discuss Connectivity → See FS-1 Campus →
FS-1 · Connectivity Snapshot
Entry: Multi-Carrier Diverse
Backbone: 400G-Ready
Path Redundancy: Physically Separate Routes
Fiber Type: Single-Mode OS2
MMR: Carrier-Neutral
Roadmap: 800G-Ready Infrastructure
Why AI Connectivity Is Different

Not legacy telco.
AI-native.

Legacy Data Center Thinking
North-south dominant traffic — clients to servers, content delivery, web requests
Latency tolerance in the tens of milliseconds — acceptable for traditional enterprise workloads
10G/25G per server sufficient for most workloads
Packet loss handled gracefully by TCP retransmission
Single-carrier uplinks acceptable — redundancy is nice-to-have
Standard Ethernet fabric, commodity switching
AI Cluster Requirements
East-west dominant — GPU-to-GPU collective operations, all-reduce, all-gather across thousands of nodes simultaneously (a quick traffic sketch follows this list)
Microsecond latency — distributed training stalls at every synchronization barrier; network latency directly impacts training throughput
400G per GPU server becoming baseline — H100/H200 NVLink clusters require massive intra-cluster bandwidth
Zero packet loss critical — RDMA over Converged Ethernet (RoCE) requires lossless fabric; retransmission destroys GPU utilization
Multi-carrier diverse entry mandatory — a single fiber cut during a training run can mean hours of lost compute at $100K+/hour cost
Purpose-built switching fabric — spine-leaf architecture optimized for uniform low-latency paths across every GPU pair
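
To put numbers behind the bandwidth and latency items above, here is a minimal back-of-envelope model of a ring all-reduce. The model, the 7B-parameter example, the link speeds, and the per-step latencies are illustrative assumptions rather than FS-1 measurements.

    # Back-of-envelope ring all-reduce model (illustrative assumptions, not FS-1 data).
    def ring_all_reduce_time(num_gpus: int,
                             payload_bytes: float,
                             link_gbps: float,
                             step_latency_s: float) -> float:
        """Estimate one all-reduce over `payload_bytes` of gradients per GPU.

        Ring algorithm: 2*(N-1) communication steps; each step moves a
        1/N-sized chunk over every GPU's link, so both link bandwidth and
        per-step network latency land directly on training step time.
        """
        steps = 2 * (num_gpus - 1)
        chunk_bytes = payload_bytes / num_gpus
        bytes_per_sec = link_gbps * 1e9 / 8
        return steps * (chunk_bytes / bytes_per_sec + step_latency_s)

    # Example: ~7B-parameter model, fp16 gradients (~14 GB), 1,024 GPUs.
    payload = 14e9
    for gbps, latency in [(100, 10e-6), (400, 2e-6)]:
        t = ring_all_reduce_time(1024, payload, gbps, latency)
        print(f"{gbps}G links, {latency*1e6:.0f} us/step -> {t:.2f} s per all-reduce")

Because the cluster synchronizes at every such barrier, the difference between these two lines is paid on every training step, which is why per-GPU bandwidth and fabric latency dominate the requirements list above.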
FS-1 Network Architecture

Built for
cluster scale.

FS-1's network topology is designed from the GPU outward — spine-leaf fabric for uniform east-west latency, redundant border routers, and physically diverse fiber entry from multiple carriers. Every design decision optimizes for the collective communication patterns that dominate AI training workloads.
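
As a rough sketch of why a two-tier spine-leaf fabric yields uniform east-west latency, the sizing below splits each leaf's ports evenly between GPU nodes and spine uplinks. The port counts and switch radix are hypothetical round numbers, not FS-1's bill of materials.

    # Minimal two-tier spine-leaf sizing sketch (hypothetical port counts).
    def size_fabric(gpu_nodes: int, leaf_ports: int, port_gbps: int):
        """Split each leaf's ports 50/50 between GPU nodes (down) and spines (up)
        so uplink capacity matches downlink capacity (1:1, non-blocking)."""
        down_ports = leaf_ports // 2          # ports facing GPU nodes
        up_ports = leaf_ports - down_ports    # ports facing spine switches
        leaves = -(-gpu_nodes // down_ports)  # ceiling division
        spines = up_ports                     # one uplink from every leaf to every spine
        oversub = (down_ports * port_gbps) / (up_ports * port_gbps)
        return leaves, spines, oversub

    leaves, spines, oversub = size_fabric(gpu_nodes=1024, leaf_ports=64, port_gbps=400)
    print(f"{leaves} leaves, {spines} spines, {oversub:.1f}:1 oversubscription")
    # Every GPU pair crosses at most leaf -> spine -> leaf: the same hop count,
    # and therefore roughly the same latency, wherever the two GPUs sit.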

FS-1 · Network Topology — AI Cluster Architecture (400G-Ready)
[Diagram: Diverse Fiber Entry → Carrier-Neutral MMR → Border / Core Layer → Spine / Leaf Fabric → GPU Compute Nodes]
Technical Specification

Every spec
by design.

FS-1's connectivity infrastructure is specified for AI workloads at scale — not adapted from enterprise assumptions. Every parameter was selected to support GPU cluster operations at the density and throughput that serious AI workloads demand.

Request Full Spec Sheet →
Fiber Entry: Multi-carrier diverse — physically separate conduits, separate routes to campus
Backbone Capacity: 400G-ready · 800G infrastructure path in roadmap
Fiber Type: Single-mode OS2 (9/125 μm) — campus backbone and long-haul runs
OM5 Available: High-density short-reach runs supporting 400G SWDM4
MMR: Carrier-neutral Meet-Me Room — multiple carrier termination points
Path Redundancy: N+1 minimum on all critical fiber paths · diverse physical routing
Switching Fabric: Spine-leaf architecture · uniform east-west latency across all GPU pairs
Protocol Support: RoCEv2 lossless fabric · ECN and PFC enabled · DCQCN congestion control
Latency Target: <1 μs intra-cluster · <5 μs rack-to-rack
Transceiver Standards: QSFP-DD 400G · OSFP 800G-ready ports on border layer
DCI: Data Center Interconnect planned for campus expansion — dark fiber and coherent DWDM
Peering: Carrier-neutral — Tier 1 and regional fiber provider access
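
As a rough plausibility check on the latency targets in the table, the model below adds serialization, per-hop switching, and fiber propagation delay for a small RDMA message. The per-hop switch latency, hop counts, and fiber lengths are assumed round figures, not measured values.

    # Back-of-envelope one-way latency for a small RDMA message across the fabric.
    # All component latencies are illustrative assumptions, not measured FS-1 figures.
    LINK_GBPS = 400
    SWITCH_HOP_S = 600e-9     # assumed per-switch forwarding latency
    FIBER_NS_PER_M = 5        # ~5 ns per metre of fiber (speed of light in glass)

    def one_way_latency(msg_bytes: int, hops: int, fiber_m: float) -> float:
        serialization = msg_bytes * 8 / (LINK_GBPS * 1e9)
        propagation = fiber_m * FIBER_NS_PER_M * 1e-9
        return serialization + hops * SWITCH_HOP_S + propagation

    print(f"intra-rack   (1 hop,  ~5 m):  {one_way_latency(4096, 1, 5)*1e6:.2f} us")
    print(f"rack-to-rack (3 hops, ~60 m): {one_way_latency(4096, 3, 60)*1e6:.2f} us")

Under these assumed figures, a 4 KB message lands around 0.7 μs within a rack and roughly 2.2 μs rack-to-rack, inside the targets quoted above.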
Redundancy Architecture

A fiber cut
is not a disaster.

01
Diverse Physical Entry
Two or more physically separate fiber routes enter the campus on independent conduits via independent rights-of-way. A single cut — backhoe, rodent, or contractor — cannot take down both paths. This is a baseline requirement for any infrastructure supporting sustained AI training runs.
02
Carrier-Neutral MMR
The Meet-Me Room at FS-1 is carrier-neutral — multiple service providers can terminate on-campus. Customers are not locked into a single carrier relationship. This enables competitive pricing, diverse IP transit, and the ability to bring specific carriers required by enterprise security or compliance policies.
03
Automatic Failover
Border routers are configured for sub-second BGP failover across diverse uplinks. For a distributed training run consuming thousands of GPUs, sub-second failover is the difference between a logged event and a training run that needs to be restarted. The network is engineered for continuity, not just connectivity.
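
A minimal sketch of the arithmetic behind that claim, comparing BFD-style sub-second failure detection (per RFC 5880) with a rollback to the last checkpoint, using the $100K+/hour figure cited earlier. The timer values and checkpoint cadence are illustrative assumptions.

    # Sketch: why sub-second failover matters for a long training run.
    # BFD detection math follows RFC 5880; intervals, checkpoint cadence,
    # and the $/hour figure are illustrative assumptions.
    def bfd_detect_time(tx_interval_ms: float, detect_multiplier: int) -> float:
        """Worst-case time to declare a path down, in seconds."""
        return tx_interval_ms * detect_multiplier / 1000

    def lost_compute_cost(downtime_s: float, cluster_cost_per_hour: float) -> float:
        return downtime_s / 3600 * cluster_cost_per_hour

    COST_PER_HOUR = 100_000   # cluster-cost figure cited earlier on this page

    failover = bfd_detect_time(tx_interval_ms=100, detect_multiplier=3)  # ~0.3 s to detect
    restart = 45 * 60                                                    # assumed 45-min-old checkpoint
    print(f"sub-second reroute: ~${lost_compute_cost(failover, COST_PER_HOUR):,.0f} of lost compute")
    print(f"checkpoint restart: ~${lost_compute_cost(restart, COST_PER_HOUR):,.0f} of lost compute")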
Network Roadmap

Built for
where AI is
heading.

400G is where serious AI infrastructure operates today. 800G is where it's heading. Photonic switching, co-packaged optics, and in-network computing are the technologies that will define the next generation of AI cluster performance. FS-1's infrastructure is designed with that trajectory in mind — not locked into today's assumptions.

The physical infrastructure — conduit, cable pathways, power distribution to network hardware, and the switching room footprint — is sized to support the upgrade to 800G and beyond without a forklift replacement of the building fabric.

400G Live
400G QSFP-DD Backbone · RoCEv2 Lossless Fabric
Current standard for H100/H200 GPU cluster deployments. Spine-leaf architecture with uniform east-west latency. ECN and PFC enabled for lossless RDMA transport.
800G Roadmap
OSFP 800G Border Ports · Coherent DWDM DCI
Border and spine layer sized for 800G optics. Coherent DWDM data center interconnect for campus expansion. Supports next-generation GPU platforms (Blackwell Ultra, future NVLink generations).
Beyond Roadmap
Co-Packaged Optics · Photonic Switching · In-Network Compute
Co-packaged optics eliminate SerDes power and latency at 1.6T+ speeds. Photonic switching enables nanosecond reconfiguration. In-network compute offloads collective operations from GPU cycles. Infrastructure pathways are preserved for these transitions.
Talk To Our Team

Network specs
on request.

Full connectivity documentation — fiber routing diagrams, switching architecture specs, carrier agreements, and redundancy validation — is available under NDA for qualified prospective customers.