The partnership between Meta, Oracle, and NVIDIA is reshaping the landscape of data centers dedicated to AI. The adoption of *Spectrum-X*, NVIDIA's purpose-built Ethernet networking platform, answers the rapid rise of large-scale artificial intelligence systems. Both companies aim to transform their infrastructures into true *“giga-scale AI factories.”* The efficiency of model training and the speed of deployment within massive clusters have become decisive issues for these tech giants.
Meta and Oracle adopt NVIDIA Spectrum-X to modernize their AI infrastructures
Meta and Oracle are entering into a strategic collaboration with NVIDIA, integrating the Spectrum-X Ethernet networking platform into their data centers dedicated to artificial intelligence. This initiative aims to meet the growing demands of large-scale AI systems, transforming data centers into true “giga-scale AI factories.” Jensen Huang, CEO of NVIDIA, describes Spectrum-X as a “nervous system” that connects millions of GPUs, facilitating the training of the largest artificial intelligence models.
Optimizing AI training efficiency
Oracle plans to use Spectrum-X in its Vera Rubin architecture, enabling efficient interconnection of millions of GPUs. Mahesh Thiagarajan, Vice President of Oracle Cloud Infrastructure, states that this configuration will enhance efficiency, thereby accelerating the deployment of new AI models. Meta, for its part, is integrating these Ethernet switches into its internal platform, FBOSS, to manage its network at scale. Gaya Nagarajan, Vice President of Network Engineering at Meta, emphasizes the importance of an open and efficient network to support increasingly large AI models and provide services to billions of users.
Flexibility and interoperability at the heart of design
Flexibility is presented as a central element in the development of data centers, according to Joe DeLaere, Head of NVIDIA’s Accelerated Computing Solutions portfolio. NVIDIA’s MGX system, with its modular architecture, allows partners to combine different processing units, storage, and network components according to their needs. This approach promotes interoperability, offering a consistent framework across multiple generations of hardware.
Energy efficiency and power challenges
As AI models grow, energy efficiency emerges as a predominant challenge for data centers. NVIDIA is committed to a holistic approach to improve energy utilization and scalability. The shift to 800-volt DC power distribution, for example, reduces resistive losses and enhances efficiency. This new power-management approach also smooths spikes on the electrical grid, cutting peak power requirements by up to 30% and freeing that headroom for additional computational capacity.
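The efficiency gain from higher-voltage distribution follows directly from basic circuit physics: for the same delivered power, a higher voltage means less current, and conductor losses scale with the square of current. The sketch below illustrates this with invented rack-power and resistance values; these are not NVIDIA specifications.

```python
# Illustrative only: why higher distribution voltage cuts resistive (I^2 * R)
# losses. The rack power and busbar resistance are made-up example values,
# not NVIDIA figures.

def resistive_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Current drawn at a given voltage, then loss = I^2 * R in the conductor."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

RACK_POWER_W = 100_000      # hypothetical 100 kW AI rack
BUSBAR_RESISTANCE = 0.002   # hypothetical 2-milliohm distribution path

for volts in (54, 415, 800):
    loss = resistive_loss_watts(RACK_POWER_W, volts, BUSBAR_RESISTANCE)
    print(f"{volts:>4} V DC -> {loss:10.2f} W lost in distribution")
```

Doubling the voltage roughly quarters the conduction loss, which is why moving from today's lower-voltage rack distribution to 800 V DC pays off at gigawatt scale.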
Scalability and interconnection between data centers
The MGX system also facilitates the expansion of data centers, supporting NVLink connectivity for vertical scaling and Spectrum-X Ethernet for horizontal growth. Gilad Shainer, Senior Vice President of Network Engineering at NVIDIA, indicates that MGX can link multiple data centers into a single integrated system. This meets the needs of companies like Meta, which require support for massively distributed AI training operations.
Partnerships and expanding the AI ecosystem
NVIDIA views Spectrum-X as a solution to make AI infrastructure more accessible and efficient at different scales. This Ethernet system, designed specifically for AI workloads such as training and inference, offers up to 95% effective bandwidth, far surpassing traditional Ethernet. Through collaborations with companies such as Cisco, Meta, and Oracle Cloud Infrastructure, Spectrum-X is expanding to a variety of environments, ranging from hyperscalers to enterprises.
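To get a feel for what that 95% figure means in practice, the back-of-envelope calculation below compares transfer times at 95% versus roughly 60% effective bandwidth (the figure the FAQ below cites for traditional Ethernet). The link speed and payload size are hypothetical values chosen for illustration.

```python
# Back-of-envelope comparison of effective bandwidth. The 95% and ~60%
# efficiency figures come from the article; the link speed and payload
# size are hypothetical.

LINK_GBPS = 400        # hypothetical per-GPU link speed (gigabits/second)
PAYLOAD_GBITS = 1000   # hypothetical gradient traffic per training step

def transfer_seconds(payload_gbits: float, link_gbps: float, efficiency: float) -> float:
    """Time to move a payload at a given fraction of the nominal link rate."""
    return payload_gbits / (link_gbps * efficiency)

t_spectrum = transfer_seconds(PAYLOAD_GBITS, LINK_GBPS, 0.95)
t_standard = transfer_seconds(PAYLOAD_GBITS, LINK_GBPS, 0.60)
print(f"95% efficiency: {t_spectrum:.2f} s, 60% efficiency: {t_standard:.2f} s")
print(f"speedup: {t_standard / t_spectrum:.2f}x")  # ~1.58x faster communication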
Sustainability and future readiness
NVIDIA’s next Vera Rubin architecture is expected to be commercially available in the second half of 2026. Associated products, such as the Rubin CPX, will work in tandem with Spectrum-X and MGX to support the next generation of AI factories. The Spectrum-X and XGS technologies share a similar hardware architecture but apply different algorithms for varying distances, thereby optimizing communications between data centers.
Collaboration on the energy transition
NVIDIA is collaborating with partners across the power chain, from semiconductor components to power-delivery systems, to support the transition to 800-volt DC. This collaborative approach includes partners such as Onsemi, Infineon, Delta, and Schneider Electric, ensuring seamless harmonization between all systems in high-density AI environments.
Performance for hyperscalers
The Spectrum-X technology has been specifically designed for distributed computing and AI workloads. It incorporates adaptive routing as well as congestion control based on telemetry, eliminating network hotspots and ensuring stable performance. These attributes allow for increased training and inference speeds. The scalability offered by Spectrum-X enables organizations to optimize their GPU investments while responding to the growing demands associated with AI training, which is vital for companies like Meta.
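The core idea behind telemetry-driven adaptive routing can be sketched in a few lines: instead of hashing every flow onto a fixed path (which can pile several heavy flows onto one link), the switch steers each new flow onto the currently least-loaded path. This is a toy illustration of the general principle, not NVIDIA's actual Spectrum-X algorithm.

```python
# Toy sketch of telemetry-driven adaptive routing: each flow is steered to
# the least-loaded of several equal-cost paths rather than a static hash.
# Illustrative only; not NVIDIA's actual implementation.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    load_gbps: float = 0.0  # "telemetry": current offered load on this path

def adaptive_route(paths: list[Path], flow_gbps: float) -> Path:
    """Place a new flow on the currently least-loaded path and update its load."""
    best = min(paths, key=lambda p: p.load_gbps)
    best.load_gbps += flow_gbps
    return best

paths = [Path("spine-1"), Path("spine-2"), Path("spine-3")]
for _ in range(6):
    adaptive_route(paths, flow_gbps=100.0)

# Load spreads evenly instead of hot-spotting one link.
print([(p.name, p.load_gbps) for p in paths])  # each path carries 200.0
```

Real switches refine this with in-band telemetry, per-packet spraying, and reordering support, but the effect is the same: no single link becomes the hotspot that stalls a synchronized training step.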
Hardware and software synergy
NVIDIA places particular emphasis on software optimization and considers this approach essential. The company continues to improve efficiency by aligning hardware development with software development. Investments in frameworks such as Dynamo and TensorRT-LLM, and in algorithms like speculative decoding, aim to enhance the throughput and performance of AI models.
AI and scalable infrastructures
The Spectrum-X platform, encompassing Ethernet switches and SuperNICs, forms the first Ethernet system specifically designed for AI workloads. It ensures efficient connections between millions of GPUs while maintaining predictable performance across AI data centers. With congestion control technologies achieving up to 95% data throughput, Spectrum-X marks a significant advancement over traditional Ethernet.
Frequently asked questions about the partnership of Meta, Oracle, and NVIDIA Spectrum-X for AI data centers
What is Spectrum-X and how does it improve the performance of AI dedicated data centers?
Spectrum-X is an Ethernet switching technology developed by NVIDIA, designed to meet the growing demands of large-scale AI systems. It improves AI training efficiency by providing fast connectivity and congestion control, allowing data centers to process massive parameter models without slowdowns.
How does Oracle integrate Spectrum-X into its Vera Rubin architecture?
Oracle uses Spectrum-X Ethernet to build large-scale AI factories. This integration will enable more efficient connections of millions of GPUs, thereby facilitating the rapid training and deployment of new AI models for its clients.
What is the importance of openness and interoperability of systems in Meta’s AI infrastructure?
By integrating Spectrum-X into its open switching system (FBOSS), Meta ensures that its network is flexible and interoperable. This allows the network to adapt to evolving AI needs and provide services to billions of users efficiently.
What advantages do NVIDIA’s modular systems, such as the MGX system, offer to technology partners?
The MGX system is modular, allowing partners to mix and match different processing units, storage, and switching according to their needs. This flexibility helps optimize time to market and ensures that infrastructures are prepared for the future.
How is NVIDIA addressing energy efficiency challenges in data centers?
NVIDIA is working on improving energy efficiency by transitioning to 800-volt DC power and incorporating power smoothing technologies to reduce electrical demand spikes. This contributes to optimizing performance per watt in data centers while enabling greater computational capacity.
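Power smoothing can be pictured as peak shaving: a local energy buffer supplies power during synchronized GPU bursts and recharges when demand dips, so the grid sees a capped draw. The sketch below uses invented numbers and a deliberately simplified buffer model; the up-to-30% peak reduction cited earlier is NVIDIA's own claim, not reproduced here.

```python
# Toy illustration of peak shaving with a local energy buffer. All numbers
# are invented; this is not NVIDIA's power-management design.

def shave_peaks(demand_kw: list[float], cap_kw: float) -> list[float]:
    """Clip grid draw at cap_kw; a buffer supplies the excess and is
    recharged (up to the cap) whenever demand dips below it."""
    grid = []
    debt = 0.0  # energy borrowed from the buffer (kW per time step), repaid later
    for d in demand_kw:
        if d > cap_kw:
            debt += d - cap_kw   # buffer covers the overshoot
            grid.append(cap_kw)
        else:
            repay = min(cap_kw - d, debt)  # recharge buffer with spare headroom
            debt -= repay
            grid.append(d + repay)
    return grid

bursty = [70, 100, 130, 100, 70, 60]    # synchronized GPU bursts (kW)
print(shave_peaks(bursty, cap_kw=100))  # peak drops from 130 kW to 100 kW
```

Total energy drawn from the grid is unchanged; only the peak is lower, which is what determines how much grid capacity a data center must provision.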
Why is the collaboration between NVIDIA, Meta, and Oracle crucial for the future of AI data centers?
This collaboration aims to make AI infrastructure more efficient and accessible at different scales. By joining forces, these companies can develop solutions specifically designed for AI workloads, optimizing performance and reducing operational costs.
What are the advantages of Spectrum-X compared to traditional Ethernet for AI workloads?
Spectrum-X offers up to 95% effective bandwidth, far exceeding the performance of traditional Ethernet, which typically reaches only about 60%. This is particularly crucial for training and inference tasks in AI, where every millisecond counts.
How does NVIDIA plan to integrate Spectrum-X with its upcoming Vera Rubin architecture?
NVIDIA plans for the Vera Rubin architecture, expected to be commercially available in the second half of 2026, to work in conjunction with Spectrum-X Ethernet and MGX systems, enhancing connectivity between data centers and supporting the next generation of AI factories.