Navigating Agentic AI’s Future: Balancing Innovation and Compliance

Concern over regulatory compliance jumped eight percentage points in a recent year-over-year study of Salesforce customers. And for good reason: the global regulatory landscape is evolving rapidly as autonomous AI becomes more ubiquitous.

Agentforce, the agentic layer of the deeply unified Salesforce Platform, empowers companies to innovate with AI while adhering to responsible and compliant practices. This complete AI system features embedded guardrails, transparency and controls, responsible development, and flexibility to ensure that customers can stay ahead of changes as they roll in.


Here, Aliki Foinikopoulou, Senior Director of Global Public Policy for Salesforce, discusses the complex regulatory landscape surrounding AI, including how companies can balance AI innovation with responsible development, how they can ensure trust and safety in AI applications, and the role of government regulation in shaping the future of AI.

Q. What global regulations or policies are you expecting in the year ahead?

AI regulation fundamentals will remain the same, even as agentic AI becomes ubiquitous. Regulations are meant to adapt to technological advancements, since innovation moves faster than lawmaking. Many jurisdictions are working on policies and regulations to ensure AI tools as a whole are safe and secure for their constituents.

Privacy remains a key focus, though regulatory activity in this space has slowed. Policymakers recognize that effective AI requires strong data governance, and we continue to push for robust policies that align AI regulation with data best practices.

Cybersecurity and digital trade are also top of mind. With a new Administration in the United States, a new European Commission, and elections in several countries including Germany, France, and Canada, we anticipate continued regulatory shifts in the year ahead.


At Salesforce, we’re deeply engaged in these conversations happening across the globe, and we advocate for responsible AI and innovation. For example, we worked with EU policymakers in support of the EU AI Act, and in 2024, we signed on to Canada’s voluntary AI Code of Conduct.

Q. With AI regulations evolving rapidly across different regions, how should organizations approach navigating this complex landscape?

The fundamental principles remain the same, even as AI regulations evolve. As with any new technology, companies deploying AI agents must first understand their intended use case, context, and potential risks, ensuring they mitigate harm and comply with relevant sector-specific regulations, such as those in healthcare or finance.

Companies should start with strong guardrails: Are they sourcing technology from trusted providers? Do these solutions meet safety and certification standards? At its core, compliance aligns with broader consumer protection principles — ensuring AI is fair, unbiased, and does not pose harm. 

We mustn’t forget: AI regulation is ultimately about protecting end users, whether through data privacy measures, liability considerations, or intellectual property rights. While these issues are still being debated by governments, it’s in everyone’s best interests to ensure customers and constituents are being protected.

Q. What policy considerations should companies be aware of when deploying agentic AI?

Liability should be a primary consideration. The increasing use of autonomous agents prompts questions about who’s responsible if something goes wrong, how damages will be compensated, and where individuals can find answers.

Another interesting point to consider is agents interacting with other agents. We need to ask ourselves what rules, protocols, and standards must be in place for agents to talk to one another safely. For example, if an agent in the U.S. talks to an agent in Japan and the regulatory regimes of the two countries are completely different, whose law governs the interaction?

Another key policy consideration is defining the role of human oversight, especially in high-risk scenarios. Should there be mandatory handoffs to humans? At what point? This ties into a risk-based approach — different levels of AI autonomy require different levels of human involvement. Regulators will likely examine where and how humans fit into the AI decision-making process to ensure accountability and safety.

Q. What role should companies play in shaping policy around agentic AI? 

The average government official or regulator hasn’t yet interacted with a fully autonomous agent. They likely know what a chatbot is, and there are regulations requiring transparency with chatbots, such as making clear to customers that they’re talking to an AI tool, not a human. But agents that can take independent action are new territory, so education is crucial.

At Salesforce, we’re trying to show how these tools can have simple day-to-day applications that make everyone’s life easier with minimal or no risk of harm. The more we educate and build trust, the more comfortable regulators become with AI technology.


We’ve seen a similar evolution with privacy. As data collection online increased, so did concerns about personal data, eventually leading to regulations like the General Data Protection Regulation (GDPR). With AI, however, regulators are moving faster. While it took years for privacy laws to catch up to technology, AI policy conversations are evolving in real time. In both cases, companies like ours are leading the way by establishing safeguards and building trust before formal regulations take shape.

Q. How can companies balance the need for innovation with the responsibility of developing AI ethically?

Trust is essential for widespread technology adoption. Sure, you’ve heard that before. But it’s true: Users who trust technology are more likely to embrace it and use it responsibly. So companies must be responsible, transparent, and ethical as they innovate with AI to earn their users’ trust and ensure their technology can stand the test of time.

Look at the risk, and make sure that every player in the value chain, from the developer of a model to the user of the agent, has appropriate obligations. Because some companies develop their own models while others use third-party models, it’s important to define who’s responsible for what.

Regulators, especially in sectors like finance and healthcare, will demand responsible AI use. This incentivizes providers to prioritize privacy and security, aligning with business interests and consumer protection. 

Salesforce considers the regulatory landscape for both itself and its customers, ensuring data protection and user safety are top of mind at every juncture. We must ask ourselves and our customers questions such as “How are you using this technology? How are you leading with trust?” Those questions help guide all of us on this journey into the agentic AI future.
