Explore How RTX AI PCs and Workstations Supercharge AI Development at NVIDIA GTC 2025

Generative AI is redefining computing, unlocking new ways to build, train and optimize AI models on PCs and workstations. From content creation and software development to large and small language models, AI-powered PCs and workstations are transforming workflows and enhancing productivity.

At GTC 2025, running March 17–21 at the San Jose Convention Center, experts from across the AI ecosystem will share insights on deploying AI locally, optimizing models and harnessing cutting-edge hardware and software to enhance AI workloads — highlighting key advancements in RTX AI PCs and workstations.

Develop and Deploy on RTX

RTX GPUs are built with specialized AI hardware called Tensor Cores that provide the compute performance needed to run the latest and most demanding AI models. These high-performance GPUs can help build digital humans, chatbots, AI-generated podcasts and more.

With more than 100 million GeForce RTX and NVIDIA RTX™ GPU users, developers have a large audience to target when new AI apps and features are deployed. In the session “Build Digital Humans, Chatbots, and AI-Generated Podcasts for RTX PCs and Workstations,” Annamalai Chockalingam, senior product manager at NVIDIA, will showcase the end-to-end suite of tools developers can use to streamline development and deploy incredibly fast AI-enabled applications.

Model Behavior

Large language models (LLMs) can be used for an abundance of use cases — and scale to tackle complex tasks like writing code or translating Japanese into Greek. But since they’re typically trained with a wide spectrum of knowledge for broad applications, they may not be the right fit for specific tasks, like nonplayer character dialog generation in a video game. In contrast, small language models trade that breadth for reduced size, maintaining accuracy on targeted tasks while running locally on more devices.

In the session “Watch Your Language: Create Small Language Models That Run On-Device,” Oluwatobi Olabiyi, senior engineering manager at NVIDIA, will present tools and techniques that developers and enthusiasts can use to generate, curate and distill a dataset — then train a small language model that can perform the tasks it’s designed for.

Maximizing AI Performance on Windows Workstations

Optimizing AI inference and model execution on Windows-based workstations requires strategic software and hardware tuning due to diverse hardware configurations and software environments. The session “Optimizing AI Workloads on Windows Workstations: Strategies and Best Practices” will explore best practices for AI optimization, including model quantization, inference pipeline enhancements and hardware-aware tuning.

A team of NVIDIA software engineers will also cover hardware-aware optimizations for ONNX Runtime, NVIDIA TensorRT and llama.cpp, helping developers maximize AI efficiency across GPUs, CPUs and NPUs.
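Model quantization, one of the techniques named above, shrinks a model by storing weights at lower precision. As a rough illustration of the core idea — not the implementation used by ONNX Runtime, TensorRT or llama.cpp, each of which applies far more sophisticated schemes — here is a minimal sketch of symmetric int8 weight quantization with illustrative values:

```python
# Minimal sketch of symmetric int8 quantization: map float weights to
# 8-bit integers with a shared scale, then recover approximate floats.
# Values are illustrative, not taken from any real model.

def quantize_int8(weights):
    """Quantize floats to int8 using a single symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.13, -0.82, 0.41, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Round-to-nearest bounds the per-weight error by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, scale, max_err)
```

The payoff is 4x smaller weight storage versus float32 at the cost of a small, bounded rounding error — the same trade the session’s production-grade techniques manage at scale.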

Advancing Local AI Development

Building, testing and deploying AI models on local infrastructure ensures security and performance even without a connection to cloud-based services. Accelerated with NVIDIA RTX GPUs, Z by HP’s AI solutions provide the tools needed to develop AI on premises while maintaining control over data and IP.

Learn more by attending the following sessions:

  • Dell Pro Max and NVIDIA: Unleashing the Future of AI Development: This session introduces Dell Pro Max PCs, performance laptops and desktops for professionals, powered by NVIDIA RTX GPUs. Discover how this powerful duo can help jumpstart AI initiatives and transform the way AI developers, data scientists, creators and power users innovate.
  • Develop and Observe Gen AI On-Prem With Z by HP GenAI Lab and AI Studio: This session demonstrates how Z by HP solutions simplify local model training and deployment, harnessing models in the NVIDIA NGC catalog and Galileo evaluation technology to refine generative AI projects securely and efficiently.
  • Supercharge Gen AI Development With Z by HP GenAI Lab and AI Studio: This session explores how Z by HP’s GenAI Lab and AI Studio enable on-premises LLM development while maintaining complete data security and control. Learn how these tools streamline the entire AI lifecycle, from experimentation to deployment, while integrating models available in the NVIDIA NGC catalog for collaboration and workflow efficiency.

Developers and enthusiasts can get started with AI development on RTX AI PCs and workstations using NVIDIA NIM microservices. Rolling out today, the initial public beta release includes the Llama 3.1 LLM, NVIDIA Riva Parakeet for automatic speech recognition (ASR), and YOLOX for computer vision.

NIM microservices are optimized, prepackaged models for generative AI. They span the modalities most important for PC development and are easy to download and connect to via industry-standard application programming interfaces (APIs).
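In practice, a NIM microservice running locally exposes an OpenAI-compatible chat endpoint, so any standard client can talk to it. The sketch below only builds the request payload; the endpoint URL, port and model identifier are assumptions that depend on the specific NIM container you deploy:

```python
import json

# Hypothetical local endpoint — the actual host, port and model name
# depend on which NIM microservice container you run.
NIM_URL = "http://localhost:8000/v1/chat/completions"

# An OpenAI-style chat-completions payload, the industry-standard
# request shape mentioned above.
payload = {
    "model": "meta/llama-3.1-8b-instruct",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Summarize what a NIM microservice is."}
    ],
    "max_tokens": 128,
}

# Any HTTP client can POST this, e.g. with requests:
#   requests.post(NIM_URL, json=payload).json()
print(json.dumps(payload, indent=2))
```

Because the request shape matches the OpenAI API, existing tooling built against that interface can usually point at the local endpoint with only a URL change.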

Attend GTC 2025

From the keynote by NVIDIA founder and CEO Jensen Huang to over 1,000 inspiring sessions, 300+ exhibits, technical hands-on training and tons of unique networking events — GTC is set to put a spotlight on AI and all its benefits.

Follow NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
