AI’s Human Factor: Why Trust Is Essential for Unlocking Digital Labor’s Potential

Can we trust artificial intelligence (AI) agents to work responsibly on behalf of humans?

Rob Katz, Salesforce’s responsible AI co-lead, believes we can, but only if we prioritize transparency, explainability, and human control in developing AI agents — AI systems that can do anything from answering simple questions to resolving complex issues without human intervention. The implementation of agents within businesses is poised to revolutionize industries with a limitless supply of digital labor.

Without building trust in agents among an existing workforce, however, Katz says companies will struggle to fully capitalize on these powerful new tools’ transformative potential.

Here, he shares his perspective on how companies can navigate AI’s ethical considerations, ensuring that innovation and responsibility go hand in hand. He also delves into Salesforce’s approach to building trust in agentic AI, including how systems like Agentforce, the agentic layer of the unified Salesforce Platform, help companies foster confidence in these new systems. 

Q. What measures will leading companies take to ensure that AI technology behaves as it’s supposed to?

The key is a human-AI partnership. Trust in digital labor is going to be essential for organizations working with digital workers. You wouldn’t work with a digital agent that you don’t trust.

It comes down to three things: transparency, explainability, and control. Companies will try to earn that trust by ensuring that their digital labor forces are transparent, explainable, and manageable.

Q. Will humans always be in the loop in these cases?

Not for every single intermediary decision. 

For example, if I’m trying to schedule an appointment with a service provider, two AI agents could work together to figure out calendar conflicts and navigate different proposals. From my perspective, this is an action with clear guardrails in a controlled environment, so I don’t need to see the sausage being made there. 

If it does something outside of those parameters, though, I want to be able to understand why and I want to be able to change my settings. That’s the transparency, the explainability, and the control. 

There’s a world in which there isn’t a human in the loop for every single thing, and that world is coming very soon. But humans will continue to have a critical role to play in managing these systems, and will continue to be ‘in the loop’ on legal or other similarly significant decisions.

Q. How can companies successfully build trust in their AI implementations?

The first is telling your stakeholders what to expect. Salesforce and other leaders in the trustworthy AI space were early to publish principles for the ethical and responsible use of AI. We augmented our trusted AI principles with a set of five guiding principles for developing responsible generative AI – which still hold true for the current era of AI agents.

The second is defining clear policies that outline how a company’s AI can and can’t be used. For example, Salesforce’s AI Acceptable Use Policy says you can’t use this AI for high-risk use cases without a human making the final decision.

Companies fostering trust in their AI also think proactively about the potential harm that could result from the use of those AI systems, and they actively create controls and mitigations that reduce the chance that harm will occur.

Q. What are some of the ongoing feedback mechanisms that Salesforce is using to build trust in an agentic AI system?

First is transparency. We have a really strong standard set of disclosures about the AI you’re interacting with, or that you’re about to use. This is so we’re always clear with our customers, and with ourselves, about when AI is being used. These have been developed with our user experience and content experience teams, and they’re standard and applied universally across our products. 

The second is explainability. That’s why we’re working hard to build the product around citations and allow their inclusion. A citation is exactly what it sounds like – it’s not formatted like your high school or university term paper, but it’s the same concept: What’s your source? Where did this come from? That’s explainability, and ensuring those citations are robust and useful is a really important way to earn trust and have that feedback mechanism right inside the tool.

Slack has a great example. You go to Slack’s AI Recap, and it says, ‘Hey, here are the things that happened in this channel.’ Then you can click into the actual message – the root of that summary. You can go and see the full message. That’s a simplistic, and very useful, type of citation. 

Q. How do you ensure that guidelines and regulations for responsible product development don’t get in the way of innovation?

In a lot of ways, it’s a false dichotomy. The adoption curve of AI is only made possible with trust, and trust requires clear rules of the road. When trust is the north star, those rules, whether they’re laws, regulations, or company policies, create an atmosphere of trust by saying, ‘Here are the agreed-upon rules of the road as we go on this journey together.’

That allows innovation to flourish because you know the rules of the game before you play the game.

Q. What do you think is unique about Salesforce’s position on trust?

Since our inception, we’ve asked customers to entrust us with their valuable customer data, securely storing it in our cloud. This was a pretty revolutionary concept in 1999 when we were founded — and we’ve remained committed to that responsibility ever since. As a result, we became leaders in trust.

Today, trust, data ethics, and data privacy remain deeply ingrained in Salesforce’s DNA. So it was a natural extension to apply these same ‘trusted AI’ principles to our AI development. While we continually strive for improvement, our culture fundamentally recognizes that trust is essential to our business’s (and our customers’) continued success.
