NVIDIA and Oracle to Accelerate Enterprise AI and Data Processing

Oracle and NVIDIA Expand Strategic Collaboration to Power the Next Generation of Enterprise AI

AI is rapidly reshaping how enterprises build, deploy, and scale intelligent applications. As organizations demand faster, more secure, and more scalable AI, they are turning to platforms that can process data efficiently and deliver intelligence across every layer of business operations.

At Oracle AI World, Oracle announced the OCI Zettascale10 computing cluster, a breakthrough in high-performance AI infrastructure powered by NVIDIA GPUs. Designed for large-scale AI inference and training, Zettascale10 delivers up to 16 zettaflops of peak performance and integrates NVIDIA Spectrum-X Ethernet, the first Ethernet platform built specifically for AI. This combination allows hyperscalers to interconnect millions of GPUs with unprecedented speed and efficiency.

Oracle also unveiled several integrations with NVIDIA technologies, including:

  • Support for NVIDIA NIM microservices in Oracle Database 26ai
  • NVIDIA accelerated computing within the new Oracle AI Data Platform
  • Native availability of NVIDIA AI Enterprise software directly in the OCI Console

“The AI market is being defined by partnerships like Oracle and NVIDIA,” said Mahesh Thiagarajan, EVP of Oracle Cloud Infrastructure. “OCI Zettascale10 delivers multi-gigawatt capacity for the most demanding AI workloads, while the native integration of NVIDIA AI Enterprise ensures customers can easily innovate across OCI’s 200+ cloud services.”

“Together, Oracle and NVIDIA are breaking new ground in accelerated computing,” added Ian Buck, VP of Hyperscale and HPC at NVIDIA. “We’re streamlining database AI pipelines, speeding data processing, and making inference easier to deploy and scale on OCI.”

Accelerating AI Database Workloads

Oracle Database 26ai, the company’s next-generation database, now features enhanced support for AI vector workloads. New APIs integrate with NVIDIA NeMo Retriever, enabling developers to build retrieval-augmented generation (RAG) pipelines using NVIDIA NIM microservices (see the sketch after the list below).

The NeMo Retriever suite includes:

  • Extraction models for large-scale multimodal data ingestion
  • Embedding models for vector representation
  • Reranking models for improved accuracy
  • LLMs for generating contextually accurate responses
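
To make the retrieval step concrete, here is a minimal Python sketch. It assumes an embedding NIM is running locally behind its OpenAI-compatible endpoint and that document chunks are stored in an Oracle table with a VECTOR column (as in Oracle AI Vector Search); the endpoint URL, model name, credentials, and table schema are illustrative, not part of the announcement.

```python
# Minimal RAG retrieval sketch: embed a query with a NeMo Retriever embedding NIM
# (OpenAI-compatible endpoint) and run a similarity search in Oracle Database.
# Endpoint URL, model name, credentials, and table schema are illustrative.
import array

import oracledb
from openai import OpenAI

# NIM microservices expose an OpenAI-compatible API; assume one runs locally.
nim = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

query = "How do I rotate database credentials?"
resp = nim.embeddings.create(
    model="nvidia/nv-embedqa-e5-v5",         # illustrative embedding NIM model
    input=[query],
    extra_body={"input_type": "query"},      # retrieval NIMs distinguish query vs. passage
)
query_vec = array.array("f", resp.data[0].embedding)  # float32 vector for python-oracledb

# Assume a DOCS table with columns (id, chunk, embedding VECTOR).
with oracledb.connect(user="demo", password="demo", dsn="localhost/FREEPDB1") as conn:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT chunk
            FROM docs
            ORDER BY VECTOR_DISTANCE(embedding, :qv, COSINE)
            FETCH FIRST 5 ROWS ONLY
            """,
            qv=query_vec,
        )
        context = [row[0] for row in cur.fetchall()]

print("\n---\n".join(context))  # pass these chunks to an LLM NIM for generation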

Oracle also introduced the Oracle Private AI Services Container, a flexible deployment solution for AI services across cloud and on-premises environments. Initially optimized for CPUs, the service will soon support NVIDIA GPUs using the NVIDIA cuVS open-source library — greatly accelerating vector embedding and index generation tasks.
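
For a sense of what cuVS-backed index generation looks like once GPU support arrives, here is a rough Python sketch using the cuvs package’s CAGRA approximate nearest-neighbor index. The dataset is random placeholder data, and the API calls follow the cuvs Python documentation rather than anything Oracle has announced.

```python
# Sketch: building and querying a GPU-accelerated ANN index with NVIDIA cuVS (CAGRA).
# Placeholder data; API names per the cuvs Python package documentation.
import cupy as cp
from cuvs.neighbors import cagra

# Placeholder corpus: 100,000 embedding vectors of dimension 768.
dataset = cp.random.random((100_000, 768), dtype=cp.float32)
queries = cp.random.random((10, 768), dtype=cp.float32)

# Build the CAGRA graph index on the GPU.
index = cagra.build(cagra.IndexParams(metric="sqeuclidean"), dataset)

# Search for the 5 nearest neighbors of each query vector.
distances, neighbors = cagra.search(cagra.SearchParams(), index, queries, 5)
print(cp.asarray(neighbors))
```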

Oracle AI Data Platform with NVIDIA RAPIDS Acceleration

The Oracle AI Data Platform now integrates NVIDIA accelerated computing to unify enterprise data, AI models, and governance under one ecosystem. The platform includes a built-in GPU option and an NVIDIA RAPIDS Accelerator for Apache Spark plugin, enabling GPU-powered analytics, ETL, and machine learning pipelines — all without requiring code changes.

The RAPIDS Accelerator leverages the NVIDIA cuDF library and Spark’s distributed framework to drastically improve performance for data processing and AI workloads.
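
Enabling the accelerator is a configuration change rather than a code rewrite. Below is a minimal PySpark sketch, assuming the rapids-4-spark jar is already on the cluster classpath and GPUs are exposed to Spark; the paths and resource amounts are illustrative.

```python
# Sketch: enabling the RAPIDS Accelerator for Apache Spark via configuration only.
# Assumes the rapids-4-spark jar is on the classpath and GPUs are visible to Spark.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-accelerated-etl")
    # Load the RAPIDS SQL plugin; existing DataFrame/SQL code is unchanged.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # GPU resource settings vary by cluster manager; these values are illustrative.
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.25")
    .getOrCreate()
)

# The same Spark ETL code now runs on GPUs wherever operators are supported.
df = spark.read.parquet("s3a://example-bucket/events/")        # illustrative path
daily = df.groupBy("event_date").count().orderBy("event_date")
daily.explain()  # the plan shows Gpu* operators where the accelerator takes over
daily.write.mode("overwrite").parquet("s3a://example-bucket/daily_counts/")
```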

Powering Advanced Enterprise AI Applications

Oracle Media and Entertainment is using NVIDIA NeMo Curator and Nemotron Vision Language Models (VLMs) to enhance video understanding and automate preprocessing steps like decoding, segmentation, and transcoding. This pipeline enables large-scale video annotation and captioning, improving the quality of training data for downstream models.

Additionally, NVIDIA NeMo Retriever Parse, a transformer-based vision-encoder-decoder model, strengthens Oracle Fusion Document Intelligence by enabling deep document understanding — extracting metadata and preserving complex document structures for use in multimodal and agentic RAG applications.
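
As a rough illustration of how such a parsing model might be called when served as a NIM, the sketch below posts a page image to an OpenAI-compatible chat endpoint and asks for structured text back. The endpoint, model name, file, and message format are assumptions for illustration, not a documented Oracle Fusion integration.

```python
# Sketch: sending a scanned page to a vision model served behind an
# OpenAI-compatible NIM endpoint and requesting structured text.
# Endpoint URL, model name, file, and prompt format are illustrative assumptions.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

with open("invoice_page_1.png", "rb") as f:                     # illustrative file
    page_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = client.chat.completions.create(
    model="nvidia/nemoretriever-parse",                         # illustrative model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Extract the text, tables, and metadata from this page."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{page_b64}"}},
        ],
    }],
)

print(resp.choices[0].message.content)  # structured output ready for indexing in a RAG store
```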

These innovations are unified within the Oracle AI Hub, providing a single interface for building, deploying, and managing custom AI solutions. Through the hub, customers can easily deploy NVIDIA NIM microservices and Nemotron LLMs or VLMs via a no-code interface, accelerating time-to-value for enterprise AI.

NVIDIA AI Enterprise Now Natively Available on OCI

Enterprises can now access NVIDIA AI Enterprise directly within the OCI Console, simplifying provisioning and deployment of NVIDIA’s AI software suite. This integration spans OCI’s distributed cloud — including public, sovereign, and dedicated regions — ensuring compliance, scalability, and security for enterprise workloads.

The native integration streamlines access to NVIDIA’s frameworks, libraries, and enterprise-grade support without requiring separate marketplace purchases, helping customers scale AI development with flexible pricing and continuous updates.

NVIDIA was also honored with a 2025 Oracle Partner Award at Oracle AI World, recognizing the companies’ joint efforts to redefine enterprise AI innovation and performance across the OCI ecosystem.
