AI in Your Own Words: NVIDIA Debuts NeMo Retriever Microservices for Multilingual Generative AI Fueled by Data

In enterprise AI, understanding and working across multiple languages is no longer optional — it’s essential for meeting the needs of employees, customers and users worldwide.

Multilingual information retrieval — the ability to search, process and retrieve knowledge across languages — plays a key role in enabling AI to deliver more accurate and globally relevant outputs.

Enterprises can expand their generative AI efforts into accurate, multilingual systems using NVIDIA NeMo Retriever embedding and reranking NVIDIA NIM microservices, which are now available on the NVIDIA API catalog. These models can understand information across a wide range of languages and formats, such as documents, to deliver accurate, context-aware results at massive scale.
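As a rough sketch of how such a microservice is typically called: NIM microservices expose OpenAI-compatible HTTP endpoints, so an embedding request is just a small JSON payload. The endpoint URL, model identifier and `input_type` parameter below are illustrative assumptions, not confirmed details of the new microservices.

```python
# Sketch: building a request for an OpenAI-compatible embeddings endpoint,
# the interface style NIM microservices expose. Endpoint and model name
# are assumptions for illustration only.
import json

NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/embeddings"  # assumed
MODEL = "nvidia/llama-3.2-nv-embedqa-1b-v2"                      # hypothetical id

def build_embedding_request(texts, input_type="query"):
    """Build the JSON payload for an embeddings call.

    Retrieval embedders often distinguish "query" from "passage" inputs;
    this flag is an assumed extension, not a confirmed parameter.
    """
    return {
        "model": MODEL,
        "input": texts,
        "input_type": input_type,
    }

payload = build_embedding_request(["¿Dónde está la política de vacaciones?"])
print(json.dumps(payload, ensure_ascii=False))
```

A POST of this payload to the endpoint would return one embedding vector per input string, ready to store in a vector database.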

With NeMo Retriever, businesses can now:

  • Extract knowledge from large and diverse datasets for additional context to deliver more accurate responses.
  • Seamlessly connect generative AI to enterprise data in most major global languages to expand user audiences.
  • Deliver actionable intelligence at greater scale, with 35x greater data storage efficiency through new techniques such as long-context support and dynamic embedding sizing.

By reducing storage volume needs 35x, the new NeMo Retriever microservices let enterprises process more information at once and fit large knowledge bases on a single server. This makes AI solutions more accessible, cost-effective and easier to scale across organizations.
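One common technique behind dynamic embedding sizing is Matryoshka-style truncation: keep only a prefix of each embedding and renormalize it, trading a small amount of accuracy for much smaller storage. The sketch below illustrates the idea with toy dimensions; it is not the microservices' actual implementation.

```python
# Sketch of dynamic embedding sizing: truncate each vector to a prefix
# and renormalize to unit length (Matryoshka-style). Dimensions are toy
# values for illustration.
import math

def truncate_embedding(vec, target_dim):
    """Keep the first target_dim components and renormalize to unit norm."""
    prefix = vec[:target_dim]
    norm = math.sqrt(sum(x * x for x in prefix)) or 1.0
    return [x / norm for x in prefix]

full = [0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]   # toy 8-dim embedding
small = truncate_embedding(full, 4)                # stored at half the size
print(len(small), round(sum(x * x for x in small), 6))  # → 4 1.0
```

Halving the stored dimensions halves vector storage; models trained for this keep most of their retrieval accuracy at the smaller sizes.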

Leading NVIDIA partners like DataStax, Cohesity, Cloudera, Nutanix, SAP, VAST Data and WEKA are already adopting these microservices to help organizations across industries securely connect custom models to diverse and large data sources. By using retrieval-augmented generation (RAG) techniques, NeMo Retriever enables AI systems to access richer, more relevant information and effectively bridge linguistic and contextual divides.

Wikidata Speeds Data Processing From 30 Days to Under Three Days 

In partnership with DataStax, Wikimedia has implemented NeMo Retriever to vector-embed the content of Wikipedia, serving billions of users. Vector embedding — or “vectorizing” — is a process that transforms data into a format that AI can process and understand to extract insights and drive intelligent decision-making.

Wikimedia used the NeMo Retriever embedding and reranking NIM microservices to vectorize over 10 million Wikidata entries into AI-ready formats in under three days, a process that used to take 30 days. That 10x speedup enables scalable, multilingual access to one of the world’s largest open-source knowledge graphs.

This groundbreaking project ensures real-time updates for hundreds of thousands of entries that are being edited daily by thousands of contributors, enhancing global accessibility for developers and users alike. With Astra DB’s serverless model and NVIDIA AI technologies, the DataStax offering delivers near-zero latency and exceptional scalability to support the dynamic demands of the Wikimedia community.

DataStax is using NVIDIA AI Blueprints and integrating the NVIDIA NeMo Customizer, Curator, Evaluator and Guardrails microservices into the LangFlow AI code builder to enable the developer ecosystem to optimize AI models and pipelines for their unique use cases and help enterprises scale their AI applications.

Language-Inclusive AI Drives Global Business Impact

NeMo Retriever helps global enterprises overcome linguistic and contextual barriers and unlock the potential of their data. By deploying robust AI solutions, businesses can achieve accurate, scalable and high-impact results.

NVIDIA’s platform and consulting partners play a critical role in ensuring enterprises can efficiently adopt and integrate generative AI capabilities, such as the new multilingual NeMo Retriever microservices. These partners help align AI solutions to an organization’s unique needs and resources, making generative AI more accessible and effective. They include:

  • Cloudera plans to expand the integration of NVIDIA AI in the Cloudera AI Inference Service. Currently embedded with NVIDIA NIM, Cloudera AI Inference will include NVIDIA NeMo Retriever to improve the speed and quality of insights for multilingual use cases.
  • Cohesity introduced the industry’s first generative AI-powered conversational search assistant that uses backup data to deliver insightful responses. It uses the NVIDIA NeMo Retriever reranking microservice to improve retrieval accuracy and significantly enhance the speed and quality of insights for various applications.
  • SAP is using the grounding capabilities of NeMo Retriever to add context to its Joule copilot Q&A feature, drawing on information retrieved from custom documents.
  • VAST Data is deploying NeMo Retriever microservices on the VAST Data InsightEngine with NVIDIA to make new data instantly available for analysis. This accelerates the identification of business insights by capturing and organizing real-time information for AI-powered decisions.
  • WEKA is integrating its WEKA AI RAG Reference Platform (WARRP) architecture with NVIDIA NIM and NeMo Retriever into its low-latency data platform to deliver scalable, multimodal AI solutions, processing hundreds of thousands of tokens per second.

Breaking Language Barriers With Multilingual Information Retrieval

Multilingual information retrieval is vital for enterprise AI to meet real-world demands. NeMo Retriever supports efficient and accurate text retrieval across multiple languages and cross-lingual datasets. It’s designed for enterprise use cases such as search, question-answering, summarization and recommendation systems.

Additionally, it addresses a significant challenge in enterprise AI — handling large volumes of lengthy documents. With long-context support, the new microservices can process lengthy contracts or detailed medical records while maintaining accuracy and consistency over extended interactions.
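To see why long-context support matters, consider the usual workaround: documents longer than the embedder's context window get split into overlapping chunks, each embedded separately. The sketch below shows that pattern with illustrative sizes; a longer context window means fewer, larger chunks, or none at all.

```python
# Sketch: overlapping-window chunking, the common workaround when a
# document exceeds an embedder's context length. Window and overlap
# sizes are illustrative, not the microservices' actual limits.
def chunk(tokens, window=8192, overlap=512):
    """Split a token list into overlapping windows of at most `window` tokens."""
    step = window - overlap
    return [tokens[i:i + window] for i in range(0, max(len(tokens) - overlap, 1), step)]

doc = list(range(20000))   # stand-in for a tokenized long contract
pieces = chunk(doc)
print(len(pieces), len(pieces[0]))  # → 3 8192
```

The overlap preserves context across chunk boundaries, but every split risks separating a clause from the passage that qualifies it — which is why processing a lengthy contract in fewer, longer pieces improves consistency.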

These capabilities help enterprises use their data more effectively, providing precise, reliable results for employees, customers and users while optimizing resources for scalability. Advanced multilingual retrieval tools like NeMo Retriever can make AI systems more adaptable, accessible and impactful in a globalized world.

Availability

Developers can access the multilingual NeMo Retriever microservices, and other NIM microservices for information retrieval, through the NVIDIA API catalog or with a no-cost, 90-day NVIDIA AI Enterprise developer license.

Learn more about the new NeMo Retriever microservices and how to use them to build efficient information retrieval systems.
