It has become a common refrain that generative AI can take over simpler tasks but struggles with more difficult ones. If that's the case, how much time does generative AI actually save, and how much does it improve performance at work?
Thomson Reuters, a professional services and technology company in the fields of law, tax, compliance and more, explored how professionals are using AI in its 2024 Future of Professionals report. We spoke to Thomson Reuters Chief Product Officer David Wong about generative AI in the workplace in an exclusive interview about the release of the report.
Thomson Reuters surveyed 2,205 professionals in legal, tax, and risk and compliance across the globe. The report didn't specify generative artificial intelligence when it asked about AI, but the capabilities discussed in the report often relate to generative AI. In the conversation with Wong, we used AI as a general term for generative models that can produce images or text.
The report was largely bullish on AI, with artificial intelligence expected to free up professionals' time. 77% of respondents said they believe AI will have a "high or transformational impact on their work over the next five years," a figure up 10% from last year's report.
“I was a bit surprised that the strategic relevance went up, because you could have thought last year when ChatGPT and GPT-4 hit the scene that the hype cycle would be at its peak and that people would be so excited,” Wong said.
However, interest in the strategic relevance of AI spread from mostly law firms to almost all of the industries Thomson Reuters serves. So, Wong said, the higher numbers might be reflective of broader interest across industries rather than intensifying interest from law firms.
Wong noted an interesting split between professionals who are cautious and those who are ambitious about generative AI. In the report, this shows up in a question that asked: "In one year's, three years', and five years' time, approximately what percentage of the work that is currently produced by your team do you believe will be [performed by either humans or AI]?" The survey offered four possible responses, spanning a spectrum from AI-led to human-led work, to gauge whether professionals were cautious or ambitious about using AI for work. It found that 13% of professionals fell into the "cautious" category, expecting AI assistants to handle only a small share of work even in five years' time. At the other extreme, 19% of professionals fell into the "ambitious" category, predicting AI will perform a large portion of their work within five years.
“A lot of professionals have realized what the practical implication, the reality of a lot of the technology is,” Wong said. “And based on that experimentation that happened over the last 12 months or so, we’re now starting to see those professionals translate the experimentation into implementation.”
Expectations for generative AI were very high in 2023 but will likely dip again before leveling off, according to Gartner.
For legal professionals and the other jobs covered in the Thomson Reuters report, “AI solutions are extremely good at any type of task where you can provide, frankly, a pretty good set of instructions,” said Wong.
That type of task includes research, summarizing documents or "[r]esearching high level concepts that don't require specific legal citations," as one respondent to the report put it.
What AI can’t do is make decisions by itself. AI companies want it to eventually be able to do this; in fact, carrying out actions autonomously on a user’s behalf is level 3 of 5 on OpenAI’s new AI capability rankings. But AI isn’t there yet, and Wong pointed out that for Thomson Reuters’ industries this issue is a matter of both the tech’s capabilities and the trust people put in it.
SEE: A modern enterprise data organization needs the right human team members to thrive.
“I think that AI has really not been able to get to a point, in terms of at least trust, to be able to make decisions by themselves,” Wong said.
In many cases, Wong said, AI tools "don't perform as well as human reviewers, except in the most simple things."
According to the report, 83% of legal professionals, 43% of risk, fraud and compliance professionals and 35% of tax professionals think “using AI to provide advice or strategic recommendations” is “ethically … a step too far.”
Most respondents (95% of legal and tax respondents) think “allowing AI to represent clients in court or make final decisions on complex legal, tax, risk, fraud and compliance matters” would be “a step too far.”
“If you ask the question ‘how likely do you think AI would make the right decision or as good a decision as a human?’ I think the answer would actually be potentially different versus ‘is it ethical?’” Wong said.
Despite these qualms, Thomson Reuters made a bold claim in the report that "All professionals will have a genAI assistant within five years." That assistant will function like a human team member and perform complex tasks, the company predicted.
Wong pointed out that some of the optimism comes down to sheer numbers: the count of companies offering AI products has skyrocketed in the last two years, now including the biggest smartphone makers.
“Pretty much everybody that has an iPhone 15 and above and iOS 18 is gonna have an AI system in their pocket,” said Wong. “And I’m sure that in a couple more years, in every new version and every single Apple device you’ll be able to get access to that assistant. Microsoft has also been really aggressively rolling out Copilot. I think it’ll be, in a few years, pretty hard to have a version of Microsoft 365 which doesn’t have Copilot.”
SEE: Learn everything you need to know about Microsoft Copilot with the TechRepublic cheat sheet.
As well as potentially using AI to create, analyze or summarize content, organizations are considering how their product or production process might change using AI. According to the report, most C-suite respondents see AI most strongly influencing their operational strategy (59%) and product/service strategy (53%).
“I think that’s where pretty much every single company is looking at right now, which is the operations of one business has a lot of routine, rote work that you could describe with an instruction manual,” Wong said.
Those rote tasks are ideal for AI. In the legal field, he said, AI could change businesses’ process for submitting regulatory or statutory filings.
Respondents to the report had varying ideas about what constitutes responsible use of AI at work. Many considered data security a key part of responsible AI use; others named different priorities.
“If anyone says that [generative AI is] perfect, hallucination free, with no errors, then they are either deluded or the claim should be highly, highly scrutinized,” Wong said. “What you want, though, is you want to have transparency into the performance.”
Responsible AI systems used in professional settings should be grounded in validated content, be measurable and be able to cite their references, he said. They should be built with safety, reliability and confidentiality in mind.
ChatGPT is “the worst poster child for a generative AI solution for professionals because it doesn’t satisfy those needs,” Wong said. “But you can in fact design a ChatGPT which is privacy safe and respects confidentiality and does not train on data. Those are design choices in the systems. They are not inherent to AI.”