Achieving a Trusted AI Ecosystem: Salesforce Report Offers a Roadmap

Agentic AI is capturing headlines, but most are missing the key word – trust.

It’s true that integrating the latest AI advancements is crucial for building a limitless digital workforce. In fact, over 90% of enterprise IT leaders have implemented or plan to implement AI agents in the next two years. But as businesses race to bring agentic AI to market, evaluating the risks and opportunities of the technology must happen in tandem with innovation in order to be successful.

Trust is Salesforce’s #1 value, and through eras of predictive, generative, and now agentic AI, it has stayed at the forefront of our work.

Paula Goldman, Chief Ethical and Humane Use Officer, Salesforce

Trust is Salesforce’s #1 value, and through eras of predictive, generative, and now agentic AI, it has stayed at the forefront of our work. That’s why, as AI agents become embedded in the enterprise, Salesforce’s Office of Ethical and Humane Use is providing a comprehensive overview of the company’s responsible AI efforts and learnings in our first-ever Trusted AI Impact Report.

A look into the report

Enterprise AI agents have the potential to transform entire industries and everyday life. By openly sharing our approach, Salesforce aims to raise the bar for responsible AI and empower others to advance the technology in a way that is ethical, transparent, secure, and accountable – ultimately creating a more trustworthy digital ecosystem. 

Salesforce’s Trusted AI Impact Report describes our approach to designing and deploying AI products, from initial conception and implementation to ongoing assessment and improvement. We hope it will guide organizations on their own processes while keeping trust at the center. 

The process starts with putting the proper governance in place.

Trusted AI principles and decision-making structures: We develop responsible principles and decision-making structures from the onset.

  • Salesforce developed its first set of trusted AI principles in 2018. As we entered into the era of generative AI in early 2023, we augmented our trusted AI principles with a set of five guiding principles for developing responsible generative AI, as one of the first enterprise companies to publish guidelines in this emerging space. These principles still hold true for the current, third era of AI — the era of AI agents.
  • In addition to principles and policies, we put strong decision-making structures in place to ensure the responsible development and deployment of AI. Some of these structures include reporting to a committee of Salesforce’s Board of Directors, reviews and consultations with our AI and Ethical Use Advisory Councils, and senior leadership engagement.

Ethical use policies: Our implementation of new technologies like agentic AI is thoughtful, and we provide rules on how customers can and cannot use our products.

  • We developed an Acceptable Use Policy (AUP) and AI Acceptable Use Policy (AI AUP) to ensure customers can use Salesforce products with confidence, knowing they and their end users are receiving an ethical AI experience from product development to deployment.

Product design and development: We operationalize trusted AI into product design and development.

  • Agentforce is also built on the Einstein Trust Layer, a comprehensive framework within Salesforce’s AI ecosystem, designed to uphold data privacy, security, and ethical standards while enhancing the effectiveness of AI applications. 
  • To build trustworthy agents, Salesforce implements guardrails across our AI products called trust patterns that are designed to improve safety, accuracy, and trust while empowering human users. 
  • We also ensure accessibility considerations are integrated into product design and development from the onset through our Self-Service for Engineering initiative. This resource provides foundational guidance and real-time support. 

Testing, evaluation, and assessment: We implement testing mechanisms to continually evaluate and improve the safety of AI products.

  • Model benchmarking, the act of evaluating AI models against trust and safety metrics (e.g., bias, privacy, truthfulness, robustness), enables Salesforce to ensure that its products perform at the highest level. When a model scores below a certain range on one of our metrics, we use adversarial testing to evaluate the system by feeding it deceptive inputs to uncover weaknesses and improve robustness.
  • Ethical red teaming is another tactic we implement to improve the safety of AI products. Based on the results of our benchmarking, we perform robust AI red teaming for toxicity, bias, and security to make sure that if any malicious use or benign misuse occurs, our systems are safe.

Building a safe digital ecosystem

As agentic AI progresses, businesses need to adapt. The continuous sharing and implementation of ethical frameworks will be critical requirements for digital ecosystems to innovate in safe and trusted ways. With trust as its core value, Salesforce is dedicated to driving positive change through AI, and will continue leading the way to help businesses across the industry embrace a similar commitment.

Dive deeper

Blog Article: Here

  • Related Posts

    Salesforce Prescribes Agentforce for Health to Speed Time to Treatment and Improve Outcomes with Digital Labor

    Agentforce now includes prebuilt skills to streamline tasks like benefits verification, disease surveillance, and clinical trial recruitment, speeding time to treatment Partnerships with athenahealth, Availity, and Infinitus.ai will enable Agentforce to take action and expedite care approvals with a real-time view of patients’ coverage, clinical, and demographic data Industry leaders like Amplifon, Pacific Clinics, Protas, […]

    How Agentic AI Will Ease Healthcare’s Workforce Crisis

    Healthcare professionals are driven by a deep commitment to their patients, always striving to deliver top-notch care. Their dilemma? Healthcare workers are inundated with paperwork, according to new research from Salesforce, with 87% admitting they work late each week just to finish their administrative duties. This laborious work, as one might imagine, takes them away […]

    Leave a Reply

    Your email address will not be published. Required fields are marked *

    You Missed

    Salesforce Prescribes Agentforce for Health to Speed Time to Treatment and Improve Outcomes with Digital Labor

    Salesforce Prescribes Agentforce for Health to Speed Time to Treatment and Improve Outcomes with Digital Labor

    How Agentic AI Will Ease Healthcare’s Workforce Crisis

    How Agentic AI Will Ease Healthcare’s Workforce Crisis

    AI Agents Can Cut Healthcare Paperwork by 30%, Study Shows

    AI Agents Can Cut Healthcare Paperwork by 30%, Study Shows

    Precina Set to Deliver 24/7, Personalized Diabetes Care Nationally with Agentforce

    Precina Set to Deliver 24/7, Personalized Diabetes Care Nationally with Agentforce

    CUDA Accelerated: How CUDA Libraries Bolster Cybersecurity With AI

    CUDA Accelerated: How CUDA Libraries Bolster Cybersecurity With AI

    UFL Leverages Salesforce’s Agentforce To Provide World Class Service

    UFL Leverages Salesforce’s Agentforce To Provide World Class Service