
How do we put the human at the center of AI?

A Q&A with Tiankai Feng

The hype around AI — generative, agentic or otherwise — is such that it can be easy to lose sight of what the point of the technology is: to help people. Indeed, that, really, is the point of any technology — to improve people's lives.

 

That's something Thoughtworker Tiankai Feng, Global Head of Data and AI, is keen to emphasize to the world and to businesses in particular. In July 2024 he published his first book Humanizing Data Strategy, which explored how to approach and leverage data in a way that puts humans first. 

 

Now, just a little over 12 months later, he's written a follow-up: Humanizing AI Strategy. The book covers how to build and implement an AI strategy that is sustainable, effective and ultimately human.

 

But what does all that mean in practice? Tiankai was kind enough to speak to me and give me an insight into the thinking behind the book, how it approaches the topic and his broader perspective on AI today.

Richard Gall: Why do we need to humanize AI strategy? Your title implies there’s an inhuman(e) AI strategy — what does that look like and what’s the problem with it?

Tiankai Feng: A humanized AI strategy, for me, means emphasizing human values and needs rather than looking only at technological advancements. If we don't consider how employees will interact with AI tools, how we should and shouldn't use AI for customer interactions and the potential positive and negative impacts it could have on society, we might cause damage that is very hard to recover from.

 

Following my previous book Humanizing Data Strategy, I especially want to advocate for learning from our past mistakes and experiences in data management and to put principles and guardrails in place proactively. 

 

When it comes to AI, I want all of us working on AI initiatives in organizations to not only ask “can we do it?” but also “should we do it?” and “how can we do it right?”

RG: The book builds on stories of AI project failures. Could you give a couple of examples and how you think they could have been avoided with a more human approach?

TF: AI project failures all have one thing in common — they prioritized speed over diligence and guardrails. I have three potential starting points to avoid more failures:

 

  • Embed “right human, right time.” Be diligent about setting up trigger points and decision nodes for when involving the “human-in-the-loop” is mandatory, especially when it comes to regulatory, ethical and commercial risks.

  • Bring in cross-functional expertise. AI experts don't have in-depth knowledge of all the risks around AI applications, but involving experts from legal, DEI, cybersecurity and other relevant functions early in the development process can help spot those risks in time to develop guardrails.

  • Rigorous testing. Just like pentesting in cybersecurity, more rigorous testing of an AI application for malicious use and other adversarial behavior can help identify vulnerabilities and prevent failures before it's too late.

RG: What does the book cover and how did you decide what to include?

TF: The book covers everything that I believe is important to consider when it comes to human beings working on and with AI. However, I made a conscious choice not to cover the "overall" aspects needed for an AI strategy — including, for example, technological choices.

 

Since technological advancements in AI are happening at a remarkable pace these days, I wanted to write a book that’s rooted in human values — hopefully making it more timeless and relevant to readers.

RG: Some might argue that the only humanized AI strategy is no AI: the impact on jobs, creativity, the environment all make AI anti-human. How would you counter that argument?

TF: As with any technological advancement, we need to have a healthy, balanced relationship with it. Despite all of the risks for humanity that you mentioned, we also need to acknowledge the benefits AI can bring us, like advancements in medical research, more tailored and impactful education or the minimization of manual, repetitive tasks. The future I see is one where human beings and AI co-exist and collaborate.


RG: What would you say are the most important elements of a humanized AI strategy?

TF: The core of my approach to a humanized AI strategy is the framework of the 5Cs, which are all rooted in human needs and traits: competence, collaboration, communication, creativity and conscience.

 

If we focus on these aspects across the AI application lifecycle and have them top of mind in our AI strategies, we can hopefully intuitively and effectively be more human-centric. 

RG: How do you build a solid AI strategy when things are changing so quickly? How do you have vision while also being flexible and adaptive?

TF: I believe any technology-related strategy should be directional, not instructional — that means a certain target picture in terms of business drivers, operating model, architecture and processes should be provided, but there should be enough flexibility baked in to adjust to changes in technology, industry or society.

 

It also comes down to a cultural element: not every detailed decision needs to be made from the top down. Leadership provides vision and direction, while the people on the ground, who know the operational elements best, help operationalize it by making autonomous decisions within the scope that's defined.

RG: Who is Humanizing AI Strategy for? How technical do you need to be to read it?

TF: This book is for anyone who is working with AI or interested in working with it. Decision-makers in AI can hopefully find actionable advice here, but anyone working with AI day-to-day can also learn to make conscious choices about how to use it — and to provide feedback to the people who build AI applications.

RG: What lessons did you build on from your first book? What did you do the same and what did you do differently?


TF: The feedback on my first book validated two things: that human-centricity is a topic that very much resonates with leaders and practitioners, and that my writing style makes complex topics understandable and approachable.

 

So I naturally doubled down on both of those learnings. I also no longer had any imposter syndrome while writing the second book, which made it a very smooth and enjoyable writing experience.

 

While I wrote the first book entirely without AI support, I decided to use Gen AI to help me write this second one — essentially trying to role model a good collaboration with AI — and it helped me structure my thoughts, critically review my content and remove writer's block at certain points.

 

All in all, I’m very proud because I think the clarity of thought and my writing style improved compared to the first one. And aren’t learning and growth the best human traits?

Thanks to Tiankai for taking the time to answer my questions. Learn more about Humanizing AI Strategy.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
