Achieving a Trusted AI Ecosystem: Salesforce Report Offers a Roadmap

Agentic AI is capturing headlines, but most are missing the key word – trust.

It’s true that integrating the latest AI advancements is crucial for building a limitless digital workforce. In fact, over 90% of enterprise IT leaders have implemented or plan to implement AI agents in the next two years. But as businesses race to bring agentic AI to market, evaluating the technology’s risks and opportunities must happen in tandem with innovation for those efforts to succeed.

Trust is Salesforce’s #1 value, and through eras of predictive, generative, and now agentic AI, it has stayed at the forefront of our work.

Paula Goldman, Chief Ethical and Humane Use Officer, Salesforce

That’s why, as AI agents become embedded in the enterprise, Salesforce’s Office of Ethical and Humane Use is providing a comprehensive overview of the company’s responsible AI efforts and learnings in our first-ever Trusted AI Impact Report.

A look into the report

Enterprise AI agents have the potential to transform entire industries and everyday life. By openly sharing our approach, Salesforce aims to raise the bar for responsible AI and empower others to advance the technology in a way that is ethical, transparent, secure, and accountable – ultimately creating a more trustworthy digital ecosystem. 

Salesforce’s Trusted AI Impact Report describes our approach to designing and deploying AI products, from initial conception and implementation to ongoing assessment and improvement. We hope it will guide organizations on their own processes while keeping trust at the center. 

The process starts with putting the proper governance in place.

Trusted AI principles and decision-making structures: We develop responsible principles and decision-making structures from the outset.

  • Salesforce developed its first set of trusted AI principles in 2018. As we entered the era of generative AI in early 2023, we augmented those principles with five guiding principles for developing responsible generative AI, making Salesforce one of the first enterprise companies to publish guidelines in this emerging space. These principles still hold true for the current, third era of AI — the era of AI agents.
  • In addition to principles and policies, we put strong decision-making structures in place to ensure the responsible development and deployment of AI. Some of these structures include reporting to a committee of Salesforce’s Board of Directors, reviews and consultations with our AI and Ethical Use Advisory Councils, and senior leadership engagement.

Ethical use policies: Our implementation of new technologies like agentic AI is thoughtful, and we provide rules on how customers can and cannot use our products.

  • We developed an Acceptable Use Policy (AUP) and AI Acceptable Use Policy (AI AUP) to ensure customers can use Salesforce products with confidence, knowing they and their end users are receiving an ethical AI experience from product development to deployment.

Product design and development: We operationalize trusted AI into product design and development.

  • Agentforce is built on the Einstein Trust Layer, a comprehensive framework within Salesforce’s AI ecosystem designed to uphold data privacy, security, and ethical standards while enhancing the effectiveness of AI applications.
  • To build trustworthy agents, Salesforce implements guardrails across our AI products called trust patterns that are designed to improve safety, accuracy, and trust while empowering human users. 
  • We also ensure accessibility considerations are integrated into product design and development from the outset through our Self-Service for Engineering initiative. This resource provides foundational guidance and real-time support.
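To make the idea of a trust pattern more concrete, here is a minimal sketch of one common guardrail: masking personally identifiable information before a prompt reaches a model. The patterns, labels, and function names below are hypothetical illustrations, not Salesforce’s actual Einstein Trust Layer implementation.

```python
import re

# Hypothetical PII patterns for illustration only; a production data-masking
# layer would use far more robust detection than these two regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before the text reaches a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567."
masked = mask_pii(prompt)
# masked == "Contact Jane at [EMAIL] or [PHONE]."
```

The key design choice in patterns like this is that masking happens at the boundary, so downstream components never see the raw sensitive values.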

Testing, evaluation, and assessment: We implement testing mechanisms to continually evaluate and improve the safety of AI products.

  • Model benchmarking, the act of evaluating AI models against trust and safety metrics (e.g., bias, privacy, truthfulness, robustness), enables Salesforce to ensure that its products perform at the highest level. When a model scores below a certain threshold on one of our metrics, we use adversarial testing to evaluate the system by feeding it deceptive inputs to uncover weaknesses and improve robustness.
  • Ethical red teaming is another tactic we use to improve the safety of AI products. Based on the results of our benchmarking, we perform robust AI red teaming for toxicity, bias, and security to make sure our systems remain safe in the face of malicious use or benign misuse.
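The benchmark-then-red-team loop described above can be sketched in a few lines. The metric names, thresholds, and scores here are made up for illustration; real trust and safety benchmarking relies on much richer evaluation suites.

```python
# Hypothetical per-metric safety thresholds (scores range from 0.0 to 1.0).
SAFETY_THRESHOLDS = {
    "bias": 0.85,
    "privacy": 0.90,
    "truthfulness": 0.80,
    "robustness": 0.75,
}

def flag_for_adversarial_testing(scores: dict) -> list:
    """Return the metrics on which a model scored below its safety threshold."""
    return [metric for metric, threshold in SAFETY_THRESHOLDS.items()
            if scores.get(metric, 0.0) < threshold]

# Example benchmark run with made-up scores:
scores = {"bias": 0.91, "privacy": 0.88, "truthfulness": 0.82, "robustness": 0.79}
needs_testing = flag_for_adversarial_testing(scores)
# "privacy" fell below its 0.90 threshold, so that area would be
# queued for adversarial (red-team) testing.
```

The point of the sketch is the control flow: benchmarking produces per-metric scores, and any score under its threshold triggers targeted adversarial evaluation rather than a blanket pass/fail.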

Building a safe digital ecosystem

As agentic AI progresses, businesses need to adapt. The continuous sharing and implementation of ethical frameworks will be critical requirements for digital ecosystems to innovate in safe and trusted ways. With trust as its core value, Salesforce is dedicated to driving positive change through AI, and will continue leading the way to help businesses across the industry embrace a similar commitment.

Dive deeper

Blog Article: Here
