The AI Paradox: Untangling Employee Hesitation to Unleash Agentic AI

Artificial intelligence (AI) and AI agents, systems that can perform tasks autonomously, are rapidly becoming a vital part of the modern worker’s toolkit. Yet research suggests that many workers worry, or even feel guilty, about using AI. A recent Slack survey of 17,000 desk workers found that although a large majority (76%) say they want to become AI experts, nearly half (48%) say they would feel uncomfortable admitting to their boss that they use AI for common workplace tasks.

Workers’ main fears, according to the survey, are that they would be seen as “cheating,” “lazy,” or “less competent” if they admitted to their boss that they use AI at work. With 99% of executives planning AI investments in the coming year, how can companies foster an environment where employees are empowered and encouraged to use AI for efficiency, productivity, and other gains?

The ‘vacuum of uncertainty’

To address these fears, it’s important to acknowledge the rise of digital labor — a term for the AI agents that work 24/7 to autonomously complete tasks — and its impact on the workforce. “Digital labor is fundamentally reshaping how businesses operate, and all of our jobs are changing,” said Lori Castillo Martinez, Executive Vice President of Talent Growth & Development at Salesforce. “I think it’s no surprise that people are feeling some of that uncertainty, whether it’s feeling like it’s cheating or feeling like they’re unsure of the accuracy [of AI tools], as all of it is new and different,” she added.

AI expert and journalist Chris Stokel-Walker, a lecturer and author of the book How AI Ate The World, observed there is “a vacuum of uncertainty” when it comes to the rules around AI and work. “There is a lack of a moral code that has been settled upon by society about whether or not we should use these things,” he explained, attributing this to a “deficit of understanding of AI by businesses and organizations.”

For many companies, he said, this lack of certainty breeds confusion. “There’s an awful lot of cloak-and-dagger stuff where there’s a lot of undisclosed use of AI, because people are worried that if a boss finds out, they will become a kind of pariah,” he suggested.

Building a culture of transparency, training, and trust

Martinez believes fixing this is the first place managers should start. To help workers feel comfortable using AI, she advised creating an open and transparent culture where workers can learn through experimentation. She cited Salesforce’s quarterly “Agentforce Learning Days,” open and judgment-free sessions where workers can test out new AI tools with colleagues. Holding these days every quarter also eliminates another common worry among workers: that they don’t have enough time to learn how to use AI safely and effectively.

Lucas Puente, Vice President of Product Research & Insights at Salesforce, whose team oversaw the Slack survey, agreed with Martinez. Effective use of AI tools and agents in the workplace, he argued, means bringing “AI out into the open and being really clear on how I, as a leader, am using AI, and how I encourage other people to do so as well.” 


“We see in the data a really clear correlation between employees who feel trusted and employees who use AI,” Puente stated. “As a manager, I’m not going to feel like those on my team are ‘cheating’ if they’re using AI to help with specific types of tasks which have been agreed upon,” he said.

Puente revealed that he’s saved significant time by using agents to help him categorize survey responses and to find verbatim examples later. This openness means “I don’t really feel like they’re cheating in any way, and I’m very clear to my team and my own manager that this is how we’re using AI. It’s not like some kind of dirty secret,” he added, emphasizing the need to “establish a level of comfort so people don’t feel this sense of guilt.”

Being open with tasks given to AI tools

Martinez stressed the importance of being very specific about which tasks are assigned to AI agents and, like Puente, of being open about their use. In her role helping Salesforce’s 70,000 employees shape their careers and build their skills, she’s embraced AI tools to streamline her work, but only for specific, peripheral tasks. “The reality is that these tools are most effective when used in conjunction with human skills and judgment,” she noted.

In her case, she uses Agentforce to save time collecting evidence for her team’s year-end performance reviews and to help her with her “V2MOM” process, a strategic planning and goal-setting framework developed by Salesforce CEO Marc Benioff. “Nobody’s performance ratings are going to come straight from AI, or from an agent,” Martinez explained, because those reviews rely on “multiple sources and experiences.” However, with Agentforce she can make more informed decisions more quickly, by tapping into the right context and data.

For example, when drafting her V2MOM, Agentforce automatically searched other leaders’ V2MOMs to identify common workstreams and areas for partnership. Last year, this process was far more manual, requiring Martinez to individually search and read through hundreds of V2MOMs.

Digging in further, Puente explained, “I think the question is not, ‘should you use AI yes or no, in a binary way,’ but being really nuanced about what are the specific jobs to be done, what are the specific use cases that you can leverage AI to become more productive?”

He likened assigning AI a task it won’t do well to doing the same with an inexperienced human worker. “We’re not delegating the full [survey] analysis to AI. It’s just not that good at that yet. But it is really good at certain things, and [some] really narrow use cases, for example brainstorming survey questions or analyzing open-ended survey responses,” he said. “Just like every human is going to have their own advantages or weaknesses,” Puente continued, adding, “You need to set up AI for success, the same way you think about setting humans up for success.”

Prescriptive when necessary

ChatGPT, one of the most widely used AI chatbots, is built on large language models (LLMs) and provides direct answers, summaries, suggestions, and explanations in a conversational format that goes beyond traditional keyword search. However, LLMs are trained on vast datasets of mostly publicly available content, which limits their utility in a business context.

Additionally, ChatGPT can use the text entered by users to train its models. To ensure proprietary data doesn’t leak into those models, users often have to opt out in the app’s settings — and they need to be aware of when it’s inappropriate to enter sensitive text into an unsecured LLM.

This is why Puente believes that clear, strict rules about certain kinds of AI usage are essential. “Managers and company leaders need to be really prescriptive about the types of things that you should be using AI for, and maybe where we don’t feel as comfortable yet,” he cautioned, citing privacy and the use of sensitive data as areas where certain AI tools may be unacceptable, whatever the context.

With that level of risk, employee mistrust of AI is understandable, according to Raju Malhotra, Chief Product and Technology Officer at Certinia, a long-time Salesforce partner. “I think no one should be surprised that there is a level of distrust for such a groundbreaking new technology,” Malhotra noted.

In addressing this mistrust, Malhotra stressed the need for transparency and robust safeguards: “This is very emblematic of a ton of new technologies that have come in the past. But I do think it is really a clarion call for all the technology providers, including Salesforce, including Certinia, to really take that potential risk very seriously and make sure that the data privacy, data trust, and the communication of how that data is actually going to be used is very clear.”  


Salesforce was early to publish clear principles for the ethical and responsible use of AI, and its “Acceptable Use Policy” sets out clear guardrails for workers, prohibiting the use of AI for high-risk use cases without a human making the final decision. Agentforce, the company’s own digital labor platform, incorporates the Einstein Trust Layer, which improves accuracy by grounding AI responses in CRM data and mitigates harmful outputs with toxicity detection. Workers don’t have to guess — the guardrails are built into the tools they’re using.

Humans and AI driving success together

For many workplaces, the binary question of whether or not to use AI has been answered. “We are on a one-way street … we’re not going to ever go back to having less AI,” Puente remarked. 

So has the anxiety-ridden question of whether AI will replace human workers. “I think sometimes the fear is that people are like, ‘is AI going to replace human workers?’ But I really think about it more as augmenting roles,” said Martinez. “Agentic [AI] isn’t about replacing people, it’s really about redefining how humans and AI collaborate together.”

That means the pressing issue for businesses is not whether to start using AI, but how to encourage workers and managers to use it to multiply their impact. As we’ve seen, getting AI right means ensuring that workers have specific guidance and training from their employers on the appropriate use of AI tools. Only then will concerns about cheating, laziness, or competence fully evaporate.

“You wouldn’t think of somebody as cheating if they use spell-check or somebody being lazy if they use Google Maps to find the store they’re looking for or something like that, right?” Puente concluded. “Those are technologies that are ingrained into our way of life and our way of working. AI is getting there, but it’s not quite there yet.”
