Brief summary
Everyone knows an AI strategy is important — but how do you build one with humans at the center? That's a question Tiankai Feng, Thoughtworks Global Director for Data and AI Strategy, has been pondering ever since the publication of his 2024 book Humanizing Data Strategy. Now, just over a year later, he's outlined his thinking in a follow-up, Humanizing AI Strategy. With the subtitle "leading AI with sense and soul," it's a practical and thoughtful guide aimed at helping the industry rethink the way AI is embedded and leveraged across organizations.
In this episode of the Technology Podcast, Tiankai joins host Prem Chandrasekaran to discuss his new book. He explains why he wrote it, how it compares to his first book, and the framework it puts forward. Listen for a fresh perspective on AI in business and some practical strategies for leaders to bring purpose and conscience to AI initiatives.
Learn more about Humanizing AI Strategy.
Read a Q&A with Tiankai.
Prem Chandrasekaran: Hello, everyone. Welcome to yet another episode of the Thoughtworks Technology Podcast. My name is Prem Chandrasekaran. I'm one of the regular hosts on the podcast. Today, I'm joined by Tiankai Feng, a Thoughtworker who has now written two books: Humanizing Data Strategy, and his latest, Humanizing AI Strategy, which was released just a month ago. That's going to be the topic of our conversation today. Hello, Tiankai. Can you please introduce yourself?
Tiankai Feng: Absolutely. Hello, Prem. It's great to be here. I'm Tiankai. I currently work as a Director for Data and AI Strategy on the global side. That means we basically look into approaches and solutions for strategic and governance problems around data and AI. I have a big passion for the human side of data and AI because I believe that most of the issues we have in data and AI are rooted in human factors: bad decisions, lack of knowledge, bad collaboration, all things that ultimately lead to things not working. So I decided to write not one but two books about it.
Prem: Wonderful. Let's get straight into it. Second book in about a year's time, that's a pretty rapid pace, at least going by standards that a few of us have set for ourselves. What motivated you to write Humanizing AI Strategy?
Tiankai: I think a few things. The first one was that when I was writing the book on the human side of data strategy, I realized a lot of the elements I was writing about related more to AI, especially when it came to what it means to have AI-ready data, what to use data for, and where the value of data lies. Nowadays, the context is all around AI. The other reason I felt compelled to write it sooner rather than later was that when we talk about doing AI in a human-centric way, I don't want the world to wait and make the same mistakes again, just like we did with data; we can do it right from the beginning.
I should have an opinion on this sooner rather than later, to actually learn from what we did not do well in data strategy and data management and do the right things from the get-go with AI. Lastly, I feel like data and AI go hand in hand. Only good data can lead to good AI, and all these things are connected. If we don't also think about how human centricity spills over from data to AI, then we are basically making a mistake. I really wanted to do this quickly.
Prem: It looks like the central theme or the central message in the book is to try and explore the human side of building an AI strategy. Is that how you would characterize it?
Tiankai: Absolutely. Yes.
Prem: Your prior book was Humanizing Data Strategy. Now, does the new book build on top of the contents you talked about in your previous book, or is this something quite separate? It does look like you've got the humanizing element in both, first with data and now with AI. Can you tell us a little bit about that?
Tiankai: Absolutely. I think the two of them are definitely connected, mostly because they both use the same framework of the five Cs, as I call it, but other than that, there's not that much overlap. The AI strategy book can also be read in isolation, because all the needed references to the first book are written down explicitly in the second one, so there's really no need to read the first to be able to read the second.
Prem: Great. You just touched upon this thing you're calling the five Cs framework, and you elaborate quite a bit on it in the book. Can you tell us what exactly the five Cs stand for and what they mean?
Tiankai: Yes. Absolutely. The five Cs basically are rooted in human needs and human traits that we all have, and that's why they shouldn't sound so groundbreaking. It's more of a vehicle to remember what to intentionally drive when it comes to AI. Those five Cs are competence, collaboration, communication, creativity, and conscience. To each of those five Cs, I basically have a few elements and a few ideas of how we can make it more human-centric.
Prem: The first element of the framework is competence. How do you see roles evolving in organizations: human, AI, or hybrid? What exactly do you see in terms of these roles and how they evolve?
Tiankai: Absolutely. It touches on the narrative that's going around a lot, which is that AI is replacing people. I think there is a little bit more nuance to it, because it cannot be the absolute statement that AI is just going to replace specific people; we need to actually break roles down into tasks. Within every role, there are tasks that can be automated and tasks that shouldn't be automated because they require critical thinking. Those that are more manual and repetitive, of course, we should be able to automate. This also means that if we use AI to automate certain manual, repetitive tasks, a lot of free time will show up. At the same time, because of all the AI we are going to run and maintain, we need new people with new skills to actually run what is happening. Take, for example, those who have been doing a lot of manual data management work, or knowledge workers who were managing, very hands-on, the information we had everywhere.
If their manual, repetitive tasks are gone, then we need a lot more people ensuring we have quality assurance, ensuring we have guardrails, ensuring we have humans in the loop when things escalate. We also need people who help design the right AI agents or AI applications going forward. In a lot of ways, the hands-on people will evolve to become managers, supporters, and enablers of AI initiatives, and the tasks related to AI will also evolve in ways we might not yet be able to predict.
Prem: The next thing you talk about is collaboration. Here is something that has always been a bit of a challenge for me: deciding what I would like the AI to do versus what I would like to leave for the humans to do. In any AI-based augmentation, how do you decide how this collaboration model works? Is there a method to this madness?
Tiankai: Yes, I would say so. Generally, what you are pointing to is a very good point, because I don't think a lot of organizations are actually going through the exercise of defining clearly what are human-only tasks and what are potential AI tasks. It's rather, "Let's try everything AI-first and then see what happens," which can often backfire, and then suddenly we really regret it and are no longer able to clean up the mess we made.
Having that point of view of what is human-only and what is AI-potential is a clear thing we can all think about more. To your point, on how to derive what goes where: wherever we need human judgment, where it's a complicated aspect of decision-making that we cannot put into clear rules but where we actually need to apply some of our more human expertise, those are the spots where I think it is going to be very difficult to let AI do it or delegate it to any AI.
The thing about it is also that when we think about elements like our moral compass or, let's say, our gut feeling, those are things that we know exist and we can all relate to, but we are not able to break them down into how exactly they work. If we cannot even break down how they exactly work, how can we expect AI to do it? In a lot of ways, AI is only imitating and learning from our outputs, but not our process of decision-making.
We can only train it with what looks like the right thing and what's possible, but not really imitate exactly how our brain works yet. This is why I think that if we pay more attention to the tasks that need critical thinking and moral behavior, we should keep them with human beings, while other things that are lower stakes or more repetitive can be done by AI.
Prem: If I were to think about it a little bit more simply, I think what you seem to be saying is that, if anything requires empathy, then you probably want the humans to do it, whereas if something is repetitive, then you probably want the AI to do it.
Tiankai: Exactly.
Prem: That seems like a pretty good way to think about it. There are things these days that AI can do which are very, very creative. I can tell you there are elements of creativity that AI brings to the table that I would not have thought about. For example, I was building a presentation and needed a bunch of telling but related visuals, subtly funny ones, and I used AI to create those visuals. We should definitely take a look at that. Can you unpack it a little bit in terms of how one can go about doing this in a responsible way?
Tiankai: Absolutely. Maybe just to address the elephant in the room: even when we think about generative AI creativity, it's actually still based on other people's intellectual property. It's trained on so many other funny people, in your case with the jokes, who have created great comics and cartoons, and now that's being mixed and matched and reiterated by AI. It's still heavily influenced by whatever it's learned from. That itself, of course, is tricky because, as we all know, not everyone has given consent for their work to be used for training. There's a lot of gray area there, a lot of polarizing things going on. Nonetheless, when it comes to creativity generally, it's a very interesting thing to think about what the role of human creativity versus machine-generated creativity is.
I believe that human beings have what I call 'disruptive creativity', where we actually have truly new, original ideas, whereas AI is only as good as the data it's trained with, which means it only has incremental creativity. It only reiterates on what it has already learned from to then, incrementally, create new things. That also means that if we hypothetically stopped all human creativity and only let AI generate new things, and it then learned from AI-generated output again, everything would become bland.
Recursive training happens, and then things would just qualitatively become really, really bad. We almost need and require human creativity to help AI stay as creative as it is today. We need to stay even more human than ever before, to actually have those original thoughts and ideas and be able to make AI as creative as we want it to be right now.
Prem: Which actually leads me to the next element, communication. One thing that a lot of folks struggle with is how to approach transparency. How do you make the work that AI has done visible and valuable, especially if you've got non-technical stakeholders? Is there something you can talk about there?
Tiankai: I think it's always the balance between how much transparency I need to additionally generate, and how much additional workload that creates to document things, versus what is more efficient and makes us faster at running AI applications in the right way. Often it's also the case that transparency suddenly becomes really important only when something goes wrong; if everything goes right, nobody actually cares.
Finding that balance means we need to be transparent anyway, because oftentimes it's not fast enough to reverse engineer afterwards and see what happened, which with AI is sometimes impossible. I think having a minimum level of automated documentation whenever we build AI applications would be the right start.
Even if we then create a little bit of a meta AI agent or a meta AI assistant that helps us maintain the documentation, transparency, and explainability, that would be really helpful. As with the metadata management we have in data, we always think about it a little bit too late. If we could actually think about doing these transparency and explainability efforts from the very get-go, then afterwards it might be easier for us to understand what we did and how we can actually find the root cause of issues.
Prem: Which then leads us to the element of conscience. How do you embed principles, accountability, or the notion of a moral compass into AI systems, rather than bolting them on as an afterthought?
Tiankai: With conscience, beyond the usual compliance topics, I think what we are heavily underestimating nowadays, and which we still haven't figured out how to retroactively mitigate, is, for one, the spotting and reducing of bias. When we think about bias, the reason it appears is that we're using a lot of historical data to train AI models.
If we as human beings behaved badly in the past, let's say racist behavior or gender-inappropriate behavior, and we let AI learn from it, it's going to not only adopt it but amplify it and even scale it, and that's going to really hurt us. If we as human beings have changed our behavior and know better now than we did, let's say, decades ago, how can we expect AI to do that on its own?
It can only be as good as the input we give it. The question is, where's the balance? We want as much historical data as possible, but that might reflect how wrong humanity was in the past; and how much do we then reduce the training data, because then it's too recent and there's not enough of it? In a way, I think we just need to find the right balance there. The other thing is disinformation. There's the intentional part, where we are deliberately generating bad or wrong information and spreading it in the world. It's so easy to create deepfakes and AI-generated videos that look real. People are making videos of bombs hitting buildings and saying this is a certain war, when it's actually an AI-generated video. It's really difficult.
Then we have the unintentional part, which is not checking for hallucinations, where hallucinations are basically taken for granted. We don't validate them against anything; we use them and then make really important decisions with them, which could also be bad. With disinformation, the only countermeasure we have is to validate results and always think critically about them, to avoid and mitigate the risk. I think those are probably the two main aspects that need to drive our conscience, where we need to find solutions for how we can avoid the risk.
Prem: Let's bring it all together. In the context of what Thoughtworks does, we largely help our customers build software solutions. We do a lot of other things, but this is one main part of what we do. In the context of software build-outs, can you explain with an example how you would apply the 5Cs framework and how it's relevant and important to do that right at the outset?
Tiankai: Absolutely. Let's talk about one of the most common examples of AI or generative AI use cases, which is a customer chatbot; I think it's the most common one right now, and everyone tries to do it. Of course, the 5Cs would be directly applicable there. On the one hand, let's say competence. If we are building the customer chatbot, we need the right competence, not only in how AI works and what guardrails we might need so that not everyone can do something wrong with that chatbot, but also in what interactions we usually have with customers. How do we actually let the chatbot do the right interactions and say the right things to our customers? That means, for example, we might need former human agents from the customer service side to help validate what is right and wrong behavior for the chatbot.
Then we have the collaboration part, which is, let's say, how much is human tasks and how much is chatbot tasks? When do we actually let them work together? What is the workflow there? How do we bring the right people together to make that chatbot work? That's important. Then we have the communication part, which in this case is not only, let's say, an employee talking to the chatbot to get information about what interactions happened, but also, more importantly, the tone of voice of the chatbot towards customers.
How can we ensure it doesn't sound too mechanical or robotic, but still has a certain brand tonality or a customer-tailored tonality, based on what we give it? How can we ensure the communication to the customer still sounds and feels close to the interactions they're used to having? Then we have the creativity part, where it could be about what messaging and what kind of assets we let it generate, so it can be creative with customers in the right way.
Lastly, conscience. We, of course, need to put in the right guardrails. On the one hand, we don't want any commercial risks, but we should also not introduce any bad behavior through bias against specific groups of customers, just because the training data initially indicated that kind of behavior towards specific groups.
Prem: Thank you very much. That really puts things in perspective. Now, let's try and bring this strategy together into some action. In Chapter 8, you include a diagnostic toolkit and self-assessment checklist. Which questions there are the most critical in your opinion, or which ones do you wish every leader would ask themselves?
Tiankai: I would say the questions around competence might at this point be the most relevant. I think with competence, we need to drill deeper than only generically, how high is AI literacy? We have that one survey that does ask everyone, how AI literate are you? Then you have a scale from one to five. That's not really helpful — everyone can just judge however they want.
It's more about whether people are aware of very specific AI concepts, how to interact with them, and how to do the right things with them. There are a few questions around that. For example, do specific roles have specific knowledge about how AI will impact their day-to-day job, and how are they dealing with it? That could be one. Another question I'm suggesting should be asked is about how we are structurally teaching and updating knowledge around AI. Do we just ask people to go on demand and do their own thing on Udemy or something? Or do we actually invest in our own workforce to train each other? Because that's also important. Those are a few things I think are good to start with.
Prem: Wonderful. Let's assume that somebody is just starting out with no explicit AI initiatives or strategy in place today. What would your advice be to them in terms of their first two or three moves, assuming limited resources of course?
Tiankai: The first thing, I would always say, is to talk to your business colleagues. I think the biggest challenge right now is that all the AI pilots we're doing are driven more by excitement for the technology than by the value they have for the business. To actually drive business value, we need to understand what the current pain points are for our business stakeholders in the core business processes.
I address it in the book too. We call it pilot theater: basically, let's put a shiny object there, look at this great chatbot, everybody says wow, then nobody uses it, and then it slowly dies. In reality, we need to first have the difficult conversation: what is currently blocking you? What is currently the biggest problem for you in your business process? Then ask, is AI the right solution for it? Then build it, and then measure whether it has solved the problem. That should come first: understand where the problems are and then tailor the AI solution towards the business needs.
Prem: Right. You bring up a really, really good point, which actually leads us to the next thing: you talked about pilot theater and then about measuring the success or failure of a human-centric AI strategy. How do you actually go about measuring it in a reasonable amount of detail so that you know where you're headed?
Tiankai: There's the business way of doing it: all the commercial metrics like return on investment, which you break down into opportunity costs, incremental revenue, et cetera, the usual way. What I think could be a nice counterpart comes from the fact that ethical and compliant behavior is often actually at odds with commercial behavior. Often, when you maximize commercial outcome, you need to be less ethical, even if that sounds really bad, and that means you need some kind of counterweight to the commercial value you're generating.
I'm suggesting one metric called return on ethics, for example, where we measure the ethical behavior and the ethical feedback we get from the organization and weigh it against the return on investment, because we need to be good at both and not just trade one off for the other; we should actually drive them together. There are a few elements where we can do that. I don't think there's a one-size-fits-all approach to how we should measure it, but considering both the commercial and the ethical element would be really key here.
Prem: We are coming to the end of this conversation, but if the listeners have to take one thing from this conversation, what would you hope that would be?
Tiankai: I think it would be: don't treat the human side of AI as only an afterthought; it should always be top of mind, because in the end, what are we doing all of this AI for? It's for supporting humanity, not for destroying humanity. We need to think human-first when it comes to AI efforts, and my book hopefully gives a good impulse to do that.
Prem: Wonderful. Where can folks find more of your work? Website? LinkedIn? How do they engage with you?
Tiankai: The book is on Amazon and wherever people regionally get their books, they can order it there. People can connect with me on LinkedIn if they have any more questions or want to discuss further. Yes, of course, there are also a lot more Thoughtworks events where I will be speaking, et cetera. If people are interested, they can also just look into what Thoughtworks is doing, and I might show up there.
Prem: Finally, to round off, what are you working on now? What's next, and where do people go from here?
Tiankai: I would say I'm working currently a lot internally with colleagues on how we want to strategically address the blockers for going agentic. I think that is top of mind for everyone. We're trying to basically put together some frameworks and a few approaches of how we can really pinpoint specific issues and then solve them for our clients.
Prem: Wonderful. That was wonderful, Tiankai. Really, really appreciated the conversation. Thanks a lot and hope to catch up with you very soon with yet another episode on the Thoughtworks Technology Podcast. This is me, Prem, signing off for today. Until next time, thank you very much.