Head of AI is a trendy new job title, but do businesses actually need someone in this role? What responsibilities does it entail? We asked Beena Ammanath, executive director of the Deloitte AI Institute, leader of Trustworthy Tech Ethics at Deloitte and author of the book “Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI.” A head of AI is an important role that requires a person with business and tech acumen, she said.
A chief executive officer, chief technology officer or chief information officer might hire for the role of head of AI, depending on the digital maturity and size of the organization. Ammanath pointed out that the role of head of AI covers a wide variety of technologies, including machine learning and data science, not just generative AI.
This interview was edited for length and clarity.

Megan Crouse: When someone is hiring an executive AI specialist, what should they be looking for? What is the job description, and how do people match it?
Beena Ammanath: Unlike the traditional roles of chief marketing officer or even chief technology officer and chief information officer, [the role of head of AI] depends on the maturity of the organization, on what they want to do with AI [and] where they are today.
Where do they see AI playing a role? I’ve seen two categories. One is an organization so new to AI that they are trying to figure out how to bring AI into their business, into their core, whether it’s the core processes or new revenue opportunities [or] new product ideas. You would look for somebody who has depth of knowledge in AI but also has business acumen.
Ideally, you would want somebody who has at least the domain expertise of the domain that your business is in. So if you are a healthcare company trying to look for a chief AI officer, it would be great to bring in somebody who knows AI but has some experience working with healthcare companies.
SEE: Budgets for data teams are tight, leading to under-equipped data leaders and teams and high turnover (TechRepublic)
You are looking for this person to truly understand your business and bring the most value from AI, whether it is for cost savings or new revenue opportunities, and you want somebody who also understands business. Someone who probably has a business degree, so that you can bring in that business acumen as well [so they can] say, “This is where AI would be most beneficial for this organization.”
If an organization is extremely mature in their AI journey or is an AI-native company, then you are also looking for somebody who has a deep understanding of the technology. Either this person has done AI research themselves, is very much plugged into the open source community, upcoming research papers or areas of AI research, or has that deep technology knowledge.
So, if your company is very mature or an AI-native company, you probably would also be looking for a deep AI technologist.
Megan Crouse: How long have titles like head of AI been in conversation in the tech world?
Beena Ammanath: It’s an evolution, like several other digital-native roles. The first iteration of [head of AI] was probably chief data officers, when companies realized that there’s value in data. Then came data science. There are certainly chief data science officers, chief data and analytics officers, or heads of data and analytics.
Over the last 10 years or so, that was the prominent title; now some of those same titles have morphed into AI. A lot of it has to do with the growth of AI as a technology itself, beyond just big data and machine learning to other areas of AI like large language models, text generation, image creation and so on.
I think the head of AI or chief AI officer has been in the vernacular for at least the past couple of years.
Megan Crouse: Should a head of AI role cover everything from engineering to user-focused decisions?
Beena Ammanath: Yes, or they should have a team structure set up. It’s a step toward failure if you’re trying to hire a unicorn who has deep tech knowledge and who also has deep business knowledge and who also has deep domain knowledge. It’s extremely unlikely you will find that kind of person with that depth in all three areas.
So [it is about] finding the person who [has] the basic tenets of leadership, who can collaborate, who can bring these different skill sets together, who understands the gaps in her own knowledge [and] who can augment with the right leadership roles under her to make sure those gaps are filled.
You do not need to be an expert at all three to be the chief AI officer, but you should be cognizant of your gaps and surround yourself with an executive leadership team with the [right] skill sets.
Megan Crouse: Whether at the enterprise level or at small and medium businesses, are more of these roles opening up? Do you think companies are going to continue talking about the head of AI role in the next few years?
Beena Ammanath: Yes, absolutely. Over the next few years, for sure. No matter the size of your organization, I think there needs to be a focus on AI. This technology brings so much potential that it will help to have a head of AI, a senior leader looking at the powerful, innovative ways this technology can come into your business and make an impact.
Over the next few years, AI is going to have a tremendous impact. Obviously, there is a positive impact — the business value creation — but it could also have negative impacts to your business.
Make sure that you are thinking about the value and the risk of the technology and have a senior leader who’s looking at both aspects and making those business decisions on when and where AI should be used, what kind of guardrails need to be put in place, [and] how to leverage the best of the new technologies coming at us while also being aware of new AI regulations.
SEE: AI regulation in the U.S. is an ongoing effort, including a September meeting between tech leaders and some members of the U.S. Senate. (TechRepublic)
Having a focused leader on AI is going to be crucial, whether you are a small company or a large company, because today every company is a tech-enabled company.
Megan Crouse: Can you talk a little bit more about the risks you mentioned, in terms of AI leaders needing to look out for those things?
Beena Ammanath: There are obviously the ones you hear a lot about: bias and fairness. So if you use human data in AI, then you should absolutely be checking for human fairness and bias in your algorithm and mitigating those, as well as addressing the need for transparency in how the algorithm functions. It depends on when and where and how the AI is being used.
What’s the accepted level of fairness or transparency? Do you really want to know why a map is recommending a certain path to you, versus when AI is recommending a certain diagnosis for symptoms you might have? The level of priority will depend on the use case. There is also the impact on data privacy: How do you make sure you have the right permissions to use [data] in the way you want to use it?
Megan Crouse: What other ethical considerations should be in place when making executive decisions about the use of generative AI? Which people, which job titles and maybe which interest groups should be involved in the decisions made by the head of AI or someone with a similar title?
Beena Ammanath: You need a multi-stakeholder group, almost a committee, [which is] cross-business and cross-function. So if you have multiple product lines or business lines, the leader from each of those businesses should be present, but also leaders from your various business function [groups]. Because generative AI will have an impact not just on your businesses, but also on your finance team, your talent team, your HR team and your marketing team.
Having a cross-business and cross-functional stakeholder team who is evaluating and prioritizing the generative AI use cases that come in is going to be crucial to using generative AI most effectively in a responsible, compliant manner.