NVIDIA today announced the expansion of NVIDIA DGX Cloud Lepton™ — an AI platform featuring a global compute marketplace that connects developers building agentic and physical AI applications — with GPUs now available from a growing network of cloud providers.

Mistral AI, Nebius, Nscale, Firebird, Fluidstack, Hydra Host, Scaleway and Together AI are now contributing NVIDIA Blackwell and other NVIDIA architecture GPUs to the marketplace, expanding regional access to high-performance compute. AWS and Microsoft Azure will be the first large-scale cloud providers to participate in DGX Cloud Lepton. These companies join CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda and Yotta Data Services in the marketplace.

To make accelerated computing more accessible to the global AI community, Hugging Face is introducing Training Cluster as a Service. This new offering integrates with DGX Cloud Lepton to seamlessly connect AI researchers and developers building foundation models with the NVIDIA compute ecosystem.
NVIDIA is also working with leading European venture capital firms Accel, Elaia, Partech and Sofinnova Partners to offer DGX Cloud Lepton marketplace credits to portfolio companies, enabling startups to access accelerated computing resources and scale regional development.

“DGX Cloud Lepton is connecting Europe’s developers to a global AI infrastructure,” said Jensen Huang, founder and CEO of NVIDIA. “With partners across the region, we’re building a network of AI factories that developers, researchers and enterprises can harness to scale local breakthroughs into global innovation.”

DGX Cloud Lepton simplifies the process of accessing reliable, high-performance GPU resources within specific regions by unifying cloud AI services and GPU capacity from across the NVIDIA compute ecosystem onto a single platform. This enables developers to keep their data local, supporting data governance and sovereign AI requirements.
In addition, by integrating with the NVIDIA software suite — including NVIDIA NIM™ and NeMo™ microservices and NVIDIA Cloud Functions — DGX Cloud Lepton streamlines and accelerates every stage of AI application development and deployment at any scale. The marketplace works with a new NIM microservice container that supports a broad range of large language models, including the most popular open LLM architectures as well as more than a million models hosted publicly and privately on Hugging Face.
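NIM containers expose an OpenAI-compatible HTTP API, so a deployed model can be queried with standard client libraries. The sketch below is illustrative only: the endpoint address, placeholder API key and model identifier are assumptions about a generic locally running NIM container, not details of the DGX Cloud Lepton marketplace itself.

```python
# Minimal sketch: querying a running NIM LLM container through its
# OpenAI-compatible API. The base URL, API key and model id below are
# illustrative assumptions; substitute the values for the container you deploy.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of a local NIM container
    api_key="not-needed-locally",         # placeholder; a local container may ignore it
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model id; use your container's model
    messages=[{"role": "user", "content": "Summarize what a compute marketplace does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```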
For cloud providers, DGX Cloud Lepton includes management software that continuously monitors GPU health in real time and automates root-cause analysis, minimizing manual intervention and reducing downtime. This streamlines operations for providers and ensures more reliable access to high-performance computing for customers.
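The Lepton management software itself is not public, but the kind of per-GPU health telemetry such systems poll can be sampled through NVML. The following is a minimal generic sketch using the NVML Python bindings (nvidia-ml-py), reading temperature, utilization and uncorrected ECC error counts; it is an assumption-labeled illustration, not NVIDIA's monitoring implementation.

```python
# Generic sketch of per-GPU health sampling via NVML (pip install nvidia-ml-py).
# This illustrates the signals a health monitor typically polls; it is not the
# DGX Cloud Lepton management software, whose internals are not public.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
        try:
            # Uncorrected ECC errors since the last driver reload; a rising
            # count is a common early signal of failing GPU memory.
            ecc = pynvml.nvmlDeviceGetTotalEccErrors(
                handle,
                pynvml.NVML_MEMORY_ERROR_TYPE_UNCORRECTED,
                pynvml.NVML_VOLATILE_ECC,
            )
        except pynvml.NVMLError:
            ecc = "n/a"  # ECC counters not supported on this GPU
        print(f"GPU {i} ({name}): {temp} C, {util}% util, uncorrected ECC: {ecc}")
finally:
    pynvml.nvmlShutdown()
```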
NVIDIA DGX Cloud Lepton Speeds Training and Deployment
Early-access DGX Cloud Lepton customers using the platform to accelerate their strategic AI initiatives include:
- Basecamp Research, which is speeding the discovery and design of new biological solutions for pharmaceuticals, food, and industrial and environmental biotechnology by harnessing its 9.8 billion-protein database to pretrain and deploy large biological foundation models.
- EY, which is standardizing multi-cloud access across the global organization to accelerate the development of AI agents for domain- and sector-specific solutions.
- Outerbounds, which enables customers to build differentiated, production-grade AI products powered by the proven reliability of open-source Metaflow.
- Prima Mente, which is advancing neurodegenerative disease research at scale by pretraining large brain foundation models to uncover new disease mechanisms and tools to stratify patient outcomes in clinical settings.
- Reflection, which is building superintelligent autonomous coding systems that handle the most complex enterprise engineering tasks.