Three Ways to Ride the Flywheel of Cybersecurity AI

The business transformations that generative AI brings come with risks that AI itself can help secure in a kind of flywheel of progress.

Companies who were quick to embrace the open internet more than 20 years ago were among the first to reap its benefits and become proficient in modern network security.

Enterprise AI is following a similar pattern today. Organizations pursuing its advances — especially with powerful generative AI capabilities — are applying those learnings to enhance their security.

For those just getting started on this journey, here are ways to address with AI three of the top security threats industry experts have identified for large language models (LLMs).

AI Guardrails Prevent Prompt Injections

Generative AI services are subject to attacks from malicious prompts designed to disrupt the LLM behind it or gain access to its data. As the report cited above notes, “Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.”

The best antidote for prompt injections are AI guardrails, built into or placed around LLMs. Like the metal safety barriers and concrete curbs on the road, AI guardrails keep LLM applications on track and on topic.

The industry has delivered and continues to work on solutions in this area. For example, NVIDIA NeMo Guardrails software lets developers protect the trustworthiness, safety and security of generative AI services.

AI Detects and Protects Sensitive Data

The responses LLMs give to prompts can on occasion reveal sensitive information. With multifactor authentication and other best practices, credentials are becoming increasingly complex, widening the scope of what’s considered sensitive data.

To guard against disclosures, all sensitive information should be carefully removed or obscured from AI training data. Given the size of datasets used in training, it’s hard for humans — but easy for AI models — to ensure a data sanitation process is effective.

An AI model trained to detect and obfuscate sensitive information can help safeguard against revealing anything confidential that was inadvertently left in an LLM’s training data.

Using NVIDIA Morpheus, an AI framework for building cybersecurity applications, enterprises can create AI models and accelerated pipelines that find and protect sensitive information on their networks. Morpheus lets AI do what no human using traditional rule-based analytics can: track and analyze the massive data flows on an entire corporate network.

AI Can Help Reinforce Access Control

Finally, hackers may try to use LLMs to get access control over an organization’s assets. So, businesses need to prevent their generative AI services from exceeding their level of authority.

The best defense against this risk is using the best practices of security-by-design. Specifically, grant an LLM the least privileges and continuously evaluate those permissions, so it can only access the tools and data it needs to perform its intended functions. This simple, standard approach is probably all most users need in this case.

However, AI can also assist in providing access controls for LLMs. A separate inline model can be trained to detect privilege escalation by evaluating an LLM’s outputs.

Start the Journey to Cybersecurity AI

No one technique is a silver bullet; security continues to be about evolving measures and countermeasures. Those who do best on that journey make use of the latest tools and technologies.

To secure AI, organizations need to be familiar with it, and the best way to do that is by deploying it in meaningful use cases. NVIDIA and its partners can help with full-stack solutions in AI, cybersecurity and cybersecurity AI.

Looking ahead, AI and cybersecurity will be tightly linked in a kind of virtuous cycle, a flywheel of progress where each makes the other better. Ultimately, users will come to trust it as just another form of automation.

Learn more about NVIDIA’s cybersecurity AI platform and how it’s being put to use. And listen to cybersecurity talks from experts at the NVIDIA AI Summit in October.


Blog Article: Here

  • Related Posts

    LearnLM outperformed other AI models in a recent technical study.

    LearnLM is our set of AI models and capabilities that infuses learning science into Gemini and the products it powers, including Search, YouTube and Classroom.In a recen…

    AI’s in Style: Ulta Beauty Helps Shoppers Virtually Try New Hairstyles

    Shoppers pondering a new hairstyle can now try styles before committing to curls or a new color. An AI app by Ulta Beauty, the largest specialty beauty retailer in the U.S., uses selfies to show near-instant, highly realistic previews of desired hairstyles. GLAMlab Hair Try On is a digital experience that lets users take a
    Read Article

    Leave a Reply

    Your email address will not be published. Required fields are marked *

    You Missed

    Announcing CodeQL Community Packs

    60 of our biggest AI announcements in 2024

    60 of our biggest AI announcements in 2024

    Our remedies proposal in DOJ’s search distribution case

    Our remedies proposal in DOJ’s search distribution case

    How Chrome’s Autofill can drive more conversions at checkout

    How Chrome’s Autofill can drive more conversions at checkout

    The latest AI news we announced in December

    The latest AI news we announced in December

    OpenAI’s latest o1 model now available in GitHub Copilot and GitHub Models

    OpenAI’s latest o1 model now available in GitHub Copilot and GitHub Models