
Data Center World
April 20-23, 2026
Walter E. Washington Convention Center, Washington, D.C.

Keynotes

2026 Featured Keynote Speakers

Scott Armul

Chief Product & Technology Officer,
Vertiv

Shinichiro Gomi

Senior General Manager, Mitsubishi Heavy Industries, Ltd.

Sean James

Distinguished Engineer – Energy Systems, NVIDIA

Ram Nagappan

Vice President, AI Infrastructure,
Oracle Cloud Infrastructure

Varun Sakalkar

Distinguished Engineer, Google Datacenters

Jim Simonelli

CTO and SVP, Secure Power Division, Schneider Electric

Innovation at Hyperscale: Building the AI Factories Powering the Next Decade of Digital Infrastructure

Featured Keynote

When: Wednesday, April 22, 2026 | 8:30 am to 9:45 am
Location: Ballrooms A & B, 3rd Level

Artificial intelligence is rapidly transforming data centers from traditional IT environments into highly specialized AI factories built for extreme compute density, massive energy throughput, and global scale. As AI workloads accelerate, operators must rethink how infrastructure is designed, powered, cooled, and deployed.

In this keynote, engineering leaders from Oracle, NVIDIA, and Google will share firsthand insights into how hyperscale organizations are building and operating the next generation of AI infrastructure. The discussion will explore the real-world engineering and business decisions behind modern AI data center design.

From securing power in grid-constrained markets to supporting GPU-dense environments and deploying infrastructure faster than ever before, this session will provide a practical look at how leading organizations are scaling the facilities that power the AI economy.

Attendees will learn:

  • How AI is redefining the data center and transforming facilities into high-performance compute factories.

  • Strategies for securing and managing power as AI infrastructure pushes energy demand to new levels.

  • How operators are supporting extreme rack densities and preparing for next-generation GPU environments.

  • Where liquid and hybrid cooling strategies fit in the future of AI infrastructure.

  • How hyperscalers are accelerating deployment timelines through modular design and supply chain innovation.

  • What infrastructure leaders must do today to prepare for the next decade of AI-driven demand.

Keynote Panel

Ram Nagappan

Vice President, AI Infrastructure,
Oracle Cloud Infrastructure

Ram Nagappan is Vice President in AI Infrastructure at Oracle Cloud Infrastructure (OCI) with 20+ years of “chips-to-grid” experience spanning exascale supercomputers and gigawatt-scale AI data centers. He leads AI infrastructure architecture for large-scale AI and HPC platforms, including OpenAI’s Stargate program, combining executive leadership with hands-on involvement in site diligence, grid interconnection, power-plant architecture, data hall layout, and rack-level design for high-density GPU clusters. 

Read Full Bio

Sean James

Distinguished Engineer – Energy Systems,
NVIDIA

Sean James is a Navy veteran and Distinguished Engineer – Energy Systems at NVIDIA, where he is pioneering sustainable data centers and energy solutions. He previously worked in energy and data center research at Microsoft, bringing more than two decades of experience advancing cloud infrastructure. He led research on data center architecture, power systems, and scaling innovations, including groundbreaking projects such as Project Natick (underwater data centers) and integrations of fuel cells, microgrids, and high-density computing.

Read Full Bio

Varun Sakalkar

Distinguished Engineer,
Google Datacenters

Varun Sakalkar is a Distinguished Engineer in the Datacenter Technology and Systems group for Google Datacenters. As the uber TL for the team, he owns the technology direction for datacenter systems, driving the lowest TCO and best-in-class deployment, reliability, sustainability, and efficiency for ML and Cloud systems. Varun has been at Google for 15+ years. Prior to that, he earned a PhD in computational engineering and a master's degree in applied mathematics.

Read Full Bio

AI Factories: The Physical Engines Driving AI

Diamond Sponsor Keynote - Vertiv

When: Tuesday, April 21, 2026 | 8:30 am to 9:45 am
Location: Ballrooms A & B, 3rd Level

AI facilities are no longer scaling like conventional data centers. As rack densities move from tens of kilowatts toward hundreds, operators are managing a new class of infrastructure challenge: power, thermal, controls, white space, and services now behave as one interdependent system. In this session, Scott Armul will present a practical operator framework for planning and scaling AI infrastructure as a physical AI engine, where the facility itself becomes part of the compute equation.  

The session will address four realities shaping AI deployments today: extreme densification, compressed time-to-capacity (time to token), rapid campus-scale expansion, and the operational risk created by disparate systems. Scott will outline how operators can reduce these risks by shifting from component-by-component decisions to a converged infrastructure approach that integrates the power train, thermal chain, and controls/services layer as a coordinated system.  

Attendees will also see how repeatable AI building blocks—such as 12.5 MW units scaled into larger campus architectures—can improve deployment predictability, reduce onsite complexity, and preserve flexibility across future compute generations. The emphasis is on operational outcomes: faster deployment, better utilization, and lower integration friction at scale.  

Attendees will leave with three takeaways:     

1. A clear framework for treating AI facilities as integrated physical systems, not isolated infrastructure domains.           

2. A practical scaling model for using repeatable building blocks to accelerate deployment while maintaining flexibility.

3. An operator-focused approach to converged power, thermal, and controls integration that improves reliability, efficiency, and time-to-capacity.

Scott Armul

Chief Product & Technology Officer, Vertiv

Scott Armul was named Chief Product and Technology Officer on January 1, 2026, leading Vertiv’s Technology Office, engineering research and development, and the business units comprising Vertiv's portfolio of solutions, including thermal management, power management, IT systems, infrastructure solutions, and global services. He served as Vertiv’s Executive Vice President, Global Portfolio and Business Units from January 1, 2025, to December 31, 2025. 

Scott began his career with Emerson Network Power (now Vertiv) in 2009 as an MBA Intern in Business Planning and Development and then transitioned into a permanent position as a Strategic Planner in 2010. He progressed through various leadership roles in the company, including Strategic Planning from June 2010 - June 2012, Senior Marketing Manager for Emerson Energy Systems from June 2012 - August 2015, Director of AC Power Product Management from August 2015 – February 2017, Vice President and General Manager of DC Power and Outside Plant for the Americas from February 2017- January 2018, and Vice President and General Manager of Global DC Power and Outside Plant Solutions from January 2018 - July 2022.

Read Full Bio

Sponsored By:

Powering the Megawatt Era: Why 800VDC Is the Future of Data Center Energy

Diamond Sponsor Keynote - Schneider Electric

When: Wednesday, April 22, 2026 | 8:30 am to 9:45 am
Location: Ballrooms A & B, 3rd Level

As data centers and high-performance computing environments scale to unprecedented levels, traditional power distribution architectures face critical limitations. This presentation explores the next frontier in energy delivery: the 1 MW rack and the evolution beyond sidecar designs through advanced 800 VDC strategies. We will examine how 800 VDC optimizes efficiency, reduces losses, and supports the growing demands of AI workloads and hyperscale infrastructure.

Attendees will gain insights into design considerations, safety standards, and integration approaches that enable sustainable, high-density power distribution for future-ready facilities. Join us to understand why 800VDC is not just an alternative—it’s the foundation for powering the next generation of compute.

Key Takeaways:

  • High-voltage direct current (HVDC) architectures are essential for meeting the massive power demands of AI workloads and hyperscale data centers. Moving beyond traditional sidecar designs unlocks the ability to support 1 MW racks efficiently.

  • HVDC reduces conversion losses, improves energy efficiency, and minimizes infrastructure footprint—critical for sustainable, future-ready facilities.

  • Learn how optimized HVDC strategies lower operational costs while supporting green initiatives. Implementing HVDC at scale requires careful attention to design considerations, safety standards, and integration approaches.

These best practices ensure reliable, secure, and scalable power distribution for next-generation compute environments.

Jim Simonelli

CTO and SVP, Secure Power Division, Schneider Electric

Jim Simonelli, senior vice president and chief technology officer of the Secure Power and Data Center Business at Schneider Electric, leads R&D deployment, common platforms development, and forward-looking activities. With prior experience in leadership positions at Schneider Electric and a venture-backed startup, he brings extensive expertise in technology and innovation. 

Read Full Bio

Sponsored By:

From Commercial to Industrial: Scaling AI Data Centers with Integrated Energy and Cooling Systems

Diamond Sponsor Keynote - Mitsubishi Heavy Industries, Ltd.

When: Thursday, April 23, 2026 | 8:45 am to 10:00 am
Location: Ballrooms A & B, 3rd Level

As AI data centers outgrow conventional commercial design, they must evolve into true industrial infrastructure. This session introduces MHI's system-level approach, applying proven industrial energy, power, cooling, and control technologies to AI-scale facilities.

We illustrate the transition from grid-dependent power to hybrid on-site generation, from low-voltage to medium-voltage and HVDC distribution, and from air-based to highly efficient water-based cooling. We also show how plant-grade monitoring and control enable global optimization across efficiency, reliability, and deployment speed.

Attendees will gain a practical framework for building decarbonized AI infrastructure that scales responsibly and resiliently.

3 Takeaways:

1. Why AI data centers must evolve from commercial facilities to industrial infrastructure, using hybrid grid and on-site generation to overcome energy, reliability, and carbon constraints.

2. How integrated industrial-grade optimization—across power generation, HVDC distribution, cooling, heat recovery, and plant-level control—outperforms siloed design approaches.

3. How modular systems accelerate megaproject deployment while improving efficiency, scalability, and overall performance.

Shinichiro Gomi

Senior General Manager, Mitsubishi Heavy Industries, Ltd.

Shin Gomi was named Senior General Manager, Data Center & Energy Management, Growth Strategy Office for Mitsubishi Heavy Industries, Ltd. (MHI) on April 1, 2024, and oversees all strategies and business development, especially for “Data Center and Energy Management Strategy” for MHI Business Worldwide.

Read Full Bio

Sponsored By:
Mitsubishi Heavy Industries, Ltd.