Brief summary
As companies race to adopt AI, many overlook the hardest part — the human one. In this conversation, we unpack the principles behind Humanizing AI Strategy and discuss how creativity, critical thinking, and cross-functional collaboration shape the success of any AI initiative. This episode is for decision-makers who want AI that works in the real world, helping them design systems that are responsible, effective, and grounded in their organization’s purpose.
Episode highlights
From data to AI: evolving the Five Cs
Tiankai’s new book Humanizing AI Strategy builds on his earlier Humanizing Data Strategy, extending the same Five Cs — competence, collaboration, communication, creativity and conscience — to the realities of AI in the enterprise, including issues like disinformation and bias.
Plausible vs accurate: where LLMs really fit
He explains the shift from deterministic models (traditional ML aiming for accuracy) to probabilistic ones (LLMs aiming for plausibility), arguing that generative AI is better suited to creative, divergent work than to high-stakes, convergent decision-making.
Human thinking still leads — even in a ‘PhD in your pocket’ world
Despite the “PhD in your pocket” hype, he makes the case that LLMs can rapidly synthesize existing knowledge but struggle to create genuinely new ideas, which still depend on human critical thinking and hypothesis-building.
AI as a smart intern, not an autonomous teammate
Collaboration with AI is framed as working with a very smart intern: fast and capable, but needing context, constraints and ongoing guidance — especially as organisations try to automate more mechanical, repetitive knowledge work.
Accountability and the myth of the magical teammate
Using examples like the Chevrolet $1 car chatbot incident, he stresses that AI doesn’t share accountability or build trust on its own; humans still own the outcomes and must design systems where someone is clearly responsible for what AI is allowed to do.
The AI ikigai: why pilots stall before production
Tiankai introduces an “AI ikigai” lens — excitement, business alignment, ethics and feasibility — arguing that most AI pilots fail because they over-index on “cool demos” and under-index on purpose, guardrails and the skills needed to run them.
Smaller, specialised models and focused use cases
He argues for narrow, expert use cases and smaller language models over one-size-fits-all platforms, likening it to humans who are most effective when anchored in a clear area of expertise rather than trying to do everything.
Optimistic scepticism and the need for critical hands-on practice
With hype and misinformation muddying perceptions of AI, he advocates for “positive scepticism”: getting hands-on with low-risk experiments, learning where things break, and using critical thinking to challenge extraordinary claims.
Amplification, ethics and the Dunning–Kruger machine
GenAI amplifies whatever is there — skills, biases, confidence and consequences — which is why he floats ideas like “return on ethics” and warns that LLMs can act like Dunning–Kruger machines, boosting user confidence without guaranteeing correctness unless we consciously design in critique, culture and guardrails.
Transcript
Markus: Welcome once again to Pragmatism in practice, and to our Humanizing AI podcast takeover. I'm Markus Buhmann, your guest host for today. Don't worry, Kimberly Boyd will be back next time. Once again, I'm joined by my colleague, singer, author, TEDx speaker, avoider of Puma and Nike footwear, and if time permits, our global director of AI and Data Strategy, Tiankai Feng. Hi, Tiankai. How are you?
Tiankai: Hi, good to be here. That was a very nice description of who I am. Thanks for that, Markus. [laughs]
Markus: It's all true though. It's all true.
Tiankai: That is true, yes. [laughs]
Markus: Tiankai, while the rest of us have been busy watching KPop Demon Hunters on loop over the summer, you've been busy putting the finishing touches on your new book, Humanizing AI Strategy. Congratulations. That really, really builds on some of the ideas you put in Humanizing Data Strategy.
I see you haven't bowed to publishing pressure and tried to make things bigger by adding a sixth C, like copyright infringement. Can you tell me a bit about how you adapted the model as you wrote the book, and a bit about the process of applying it to what's an emerging technology and strategy focus for enterprises?
Tiankai: Maybe let me just start by saying a little bit about the motivation for writing this book, and why I did it so quickly after the first one. The truth is that while I was writing Humanizing Data Strategy, I already had a lot of thoughts about the connection between data and AI. I actually had to hold back a few things about my thoughts on humanizing AI. It was, I think, just a natural evolution to then put it into a second book that focused more on AI.
As you mentioned, I actually kept the same framework of the five Cs, which are competence, collaboration, communication, creativity, and conscience. Not only are there a lot of parallels in how data strategy should work and how we can learn from data strategy for AI strategy, there are also other interpretations of those five Cs. Just for example, communication is now also about how human beings talk to chatbots, and how chatbots talk back to humans, and there are a lot of other ways to communicate. The same goes for conscience, where disinformation and bias are much more present in the AI world, and those were things that we needed to address.
In that way, I think, it's definitely an addition, or even an expansion, of the five Cs, specific to AI strategies and enterprise environments.
Markus: Fantastic. You've been busy on the conference circuit this year, doing in some places the greatest hits of Humanizing Data Strategy. I saw that at Big Data London there was a lot of buzz around it. It flew off the shelves there, so that was pretty awesome to see. I'm really interested in talking about some of the ideas that you raise in your framework, and maybe diving a little bit deeper into some of the pieces that leapt out at me, if that's okay.
Competence is really interesting to me. A lot of people nowadays conflate AI with LLMs. We know that's very much just one flavor of it. When we talk about competence, let's talk a little bit about your thoughts on LLMs, whose goal is to produce plausible output rather than accurate output, and how people sometimes confuse the two purposes. Do you have any thoughts on that?
Tiankai: In a way, that goes back to the overall difference between being deterministic and probabilistic. Predictive machine learning as it was, which we now almost call traditional machine learning, was much more deterministic, and it was supposed to be as accurate as possible. With LLMs and all these generative AI tools that we have, it's much more about being plausible, meaning the output is just statistically likely to be relevant, but it cannot be confirmed. It will never produce the same output over and over again from the same prompt.
That is something that we could argue is a little bit more human, because based on different routes and the different environments we're in, we might get different answers. In an enterprise environment where we actually need to make important decisions and take important actions for organizations, it can be a little bit of a liability. This is, I think, where it gets really interesting because that means also that probabilistic behavior is maybe more suited for creativity work, for brainstorming, for ideation of things, and diverging, so to say, in ideas, whereas deterministic tasks are more towards converging and actually boiling it down to one specific output.
The question is, what are the right use cases then to use for GenAI, and where should we do it and where should we not do it?
Markus: It's one of those things that's really interesting. You heard a lot of chat recently, maybe not so much now, of AI visionaries talking about having a PhD in your pocket. This is really interesting to me, because you and I are both AI practitioners and we've deployed these things into the wild, and AI is necessarily backwards-focused. It's rooted in the past. It's about taking past data, building a model on it, and extrapolating that onto now. Whereas when you're doing a PhD, and you'll know this from your postgraduate work, you are advancing the frontiers of knowledge, which means creating new and novel connections and so on.
It's a provocative question. How do you see that PhD LLM chat playing out, having been in both worlds?
Tiankai: I think you're already indicating a lot of what I think about that too, because if we break down a PhD into maybe two main areas of tasks, it would be, one, understanding the old, and two, creating the new. I'm not doubting that LLMs can learn much faster from given text or research information and make much quicker sense of it, but the creating-the-new part is definitely tricky.
Because this is the part where we as human beings are applying critical thinking that is not only based on historical data. We don't even understand ourselves how our brains properly work, but we are applying some kind of human sense of thinking to actually come up with new ideas, create new hypotheses, and then validate or invalidate them to actually create new things, and share that knowledge with the world.
Again, for me, original thought and human thinking leading, with AI as a support, is much more meaningful than letting an AI do the PhD task on its own. If, as a human being, you're just catching up and you don't actually understand anymore what it's doing, then what is even the point? It might be just wrong; it might be just probable, but not actually true and not accurate.
Markus: You know what, this brings us into the next of the Cs: collaboration. We talk about co-creation with people to create an AI, but what if co-creation is with the AI? You talk about that level of creativity. How do you see an AI working with someone to create something together?
Tiankai: There have been a lot of interesting quotes about the relationship we have with AI. I forget where I got this from or who originally said it, but they said that ChatGPT, or any of those GenAI tools, feels like a really smart intern. It's something that you need to guide very hands-on. They're very quick and very good at what they're doing, but they still need the guidance. This is, I think, a very nice metaphor to think with. Given the right instructions and the right context for how to do things, they're going to be really good at it, but they're not thinking on their own.
Without that guidance, they're just going to do something generic, and it's not going to be helpful at all. This is, I think, also where co-creation and collaboration with AI is going. If we can collaborate with AI, that means we need to spend more time actually guiding it and giving it the right instructions. That also means that AI is only reliable for the manual, repetitive tasks that don't require that much thinking, more the mechanical work, so to say.
That could be a nice option. Even the industrial revolution was like this, where we replaced human labor with machines, physical machines that actually do things. Now, of course, office workers and knowledge workers do similar tasks manually, like moving numbers from Excel file A to Excel file B. Those things can maybe be automated, but I think we still need to be in the driver's seat with our human thinking.
Markus: I'm reminded of a warning from Julie Sweet, the Accenture CEO, about too much focus on projects that don't move the needle, like collaboration. "While working together is essential in business, reinventing a company for AI isn't an excuse for more meetings because collaboration isn't a business strategy," she said. "When the answer is using AI to collaborate more, this is another big red flag." How would you react to that?
Tiankai: It's an interesting one. I understand that collaboration can be seen as too much when there's a meeting culture where every single thing and every single decision needs a meeting about it; that can really paralyze a company if everything is just stuck in meetings. At the same time, if you don't collaborate well, you stay isolated and you don't understand the context in the right way. I think specifically for AI, the technology is just not well enough understood yet that we can afford not to collaborate. We need all of the expertise we can get to drive AI to success together, to see not only the opportunities but of course also the risks. We cannot do it on our own, and specifically not with just a few individuals; we need the right context from the right people.
Markus: There's something that you wrote on that theme in your book, and I'll read it out because I think it's so important. That also means being honest about the limits: "AI isn't a magical teammate. It doesn't share accountability, it doesn't take responsibility when things go wrong, and it certainly doesn't build trust on its own. That's still our job." The cynic in me will ask: is that true, though?
I think that's so important, because when you hear stories of Swedish buy-now-pay-later firms firing all of their customer support staff to replace them with AI and then promptly rehiring them all, that need to take responsibility and accountability is highlighted starkly. When you were writing that point, were there any specific use cases that you had in mind, or anything that you'd seen, that could really bring that to light?
Tiankai: Absolutely. I refer to quite a lot of these examples of AI failures in the book anyway, but for this specific question, one of the use cases that comes to mind immediately is the Chevrolet one, where a dealership had very quickly installed a chatbot and empowered it to make commitments to customers. One person used prompts to get the chatbot to agree that Chevrolet would sell a car for $1, and it even declared the offer legally binding, because it had been empowered to make these kinds of agreements.
That is, of course, funny and very entertaining for all of the rest of us. But if we think about these things being possible where you make life-or-death decisions, or societal decisions that are really impactful, then it can be really scary. This is the thing: if that chatbot has made such a decision, whose fault is it? You can ask around, and everyone feels not responsible for it. They would say it's because the AI did it, but AI is a technology. Someone has to own up to what the technology does.
This maybe also links to the Conscience chapter later on, where it's so important that we cannot say it's the AI's fault, because that doesn't hold up in our lives. Also, the trust that we give is a human feeling. We would never say we blindly trust a product without ever asking where it's coming from, who's behind it, and who created it. That, I think, has always been part of how we decide whether we want to use a certain tool or not: what company it comes from, maybe who the CEO of the company is, and what they're doing. I think even if we don't want to admit it, it's always a human feeling.
Markus: Let's pick up on something you brought up there around responsibility and move to communication. One of the things that struck me was finding your AI purpose, that ikigai diagram that you had. You'll be familiar — and everyone among AI enthusiasts and practitioners is going to be familiar — with the NANDA project paper from MIT that said 95% of AI projects never find their way into production. How do you think your purpose diagram plays into that observation, and what use cases do you see hitting that sweet spot of actually moving out of the POC into the wild?
Tiankai: Just to explain what the ikigai diagram is, maybe: it's basically a Venn diagram of [crosstalk] different elements. There are four elements to it. One is what is exciting to do with AI, the second is what is aligned to business objectives, the third is what is ethical to do, and the fourth is what is feasible to do. The reason why I put these four things together is that I believe those are the main reasons why we fail with AI pilots and don't move into production: we have not considered all four elements.
Typically, it all starts with what is exciting to do, and that is the only driver we have. The newest model, the newest chatbot: everything we can do feels exciting, but then we don't actually move anything with it, or we don't make any impact with it. I also give it a name in the book: pilot theater. We all show something really cool. Everyone says, "Wow, that's really cool," and then no one knows what to do with it, and then it just slowly dies. [chuckles] That is what's happening a lot, I think, in organizations.
If we can make sure it's aligned with business objectives, meaning it actually solves a business problem and helps us with our core business processes, that's one step. Knowing that it's not doing anything bad to society or to our own organization is, I think, really key; otherwise you very quickly get regulatory problems, or it becomes unfeasible to do. By feasibility, I mean not only the technology, but also having the right skillset to even run these kinds of AI applications. Only once you have clarified that all of these work together for your use case is it purposeful. Then you can actually drive it to success and ideally finally move from just piloting to actual production.
Markus: It's really interesting, because there's very much this approach of, well, whatever the question is, we'll do AI. The more specific the use case, the more widely applicable it is, and that's what we've been working on. That's starting to get a bit of traction in a number of large regulated organizations, where the quality of the definitions for your semantic layer becomes more and more important.
We've tuned a chatbot to use ISO effectively. It's very small, very specific, but it's broadly applicable. Our genius colleague, Ben, has done things like rolling his own coding agent, for example.
Tiankai: Exactly.
Markus: Then Ben and I are working on reverse engineering data contracts using AI out of unstructured data: Slack chats, emails, Word documents. You can build up a data contract from that using the Open Data Contract Standard. Again, it's really, really specific. I feel like that's the direction of travel with this: smaller, more personal, more focused, and being very, very strict around guardrails. How do you feel about that?
Tiankai: This is one of those things where I can go back to human analogies. It was always more useful to be an expert in a few things rather than trying to do everything well, because in the end you do nothing actually well. Of course, we talk about being generalists and so on, but even then it's helpful to be rooted in some kind of anchor of expertise. You can be really good at a lot of things, but it has to start with one thing that you're actually good at, and the rest can relate to it.
It's probably the same with large language models, or going in the direction of small language models: not trying to solve all the problems in the world with one application, but rather having dedicated applications that solve very specific problems. Then you have a much more efficient and effective way to make them really great at something rather than making everything mediocre.
Markus: I'm going to take a bit of a handbrake turn here. I was looking at the role matrix and the role descriptions that you had, and two things leapt out at me. One was your definition of a skeptic, and I joked about this at the time. Carl Sagan famously was a skeptic, and he said extraordinary claims require extraordinary evidence. These claims that we're seeing from AI vendors are extraordinary, but there's nothing to back them up. Whereas an old colleague of mine, who was a standup comedian, said, "A cynic is an optimist who's been let down one too many times." Sometimes you vary between the two.
It's really, really interesting here, where the technology is moving so fast and the claims are moving so quickly. How do you remain optimistic and skeptical? I'm using skeptical in the classical sense, but in a positive way, when it comes to AI, because I think it can be overwhelming.
Tiankai: Really, if we look deep down into how the world is moving, then technological advancement is a good thing. I think overall it is net positive, because it actually solves a lot of very difficult problems that we have. If we look at genuinely good examples, like AI helping with cancer research, or helping with equality and making better policies, maybe at the governmental level in certain places, that is a really good thing it's doing.
Unfortunately, though, there are so many people that are either voluntarily or involuntarily making wrong or inaccurate claims about it, and thereby muddying the perception of AI, that it's very hard to know what to believe anymore. I think the only way to stay clear of it is something we should do anyway when we work with AI, which is to rely more on critical thinking. If we think something is too good to be true, it probably is, but to actually have that gut feeling about something being too good to be true, we need to root it in some kind of expertise and experience.
I can only urge everyone to get hands-on experience with AI: experiment with it, do it with low stakes, but never give up on it and always try new things. At the same time, stay knowledgeable, and maybe proactively gain information about what can go wrong and the pitfalls and traps of doing the wrong things with it. Keep that mindset so you're able to disprove claims that are not actually valid, and thereby help create clarity in the world.
Markus: That's absolutely right. I think what's interesting is that there are a lot of claims out there that, let's say, AI is taking the jobs of developers, but our research is saying it might actually be the opposite. The more you use AI, the more developers you're going to need, because it'll amplify their strengths and weaknesses. What do you think about that?
Tiankai: If AI is a great supporting tool for developers and they're still in the driver's seat, all of the developers' productivity is going to increase. That doesn't mean that everything is now great; it's just going to heighten the expectations towards developers. Now you're a developer with AI, so people expect tenfold the output of what you had without AI before. We're all suddenly being measured, specifically developers, by higher targets and higher expectations.
In a way, I think the world is going to move, and expectations are going to move with it. Just like everyone eventually realized we're faster with the internet than before, we're all going to have that new standard, and I think the expectations are going to move as well.
Markus: I think as an accelerator it's a really interesting proposition, being able to do things and speed up processes. When we start to think about creativity, one of the things I really liked was this idea of original disruption versus generative refinement. The bit that really struck me was where you said not enough people realize this: GenAI only ever sees the outputs of the creative process. It doesn't see the messy, painful journey to get there. Do you want to elaborate a little bit on that?
Tiankai: When we talk about LLMs, or machine learning generally, they always learn from historical data. For generative AI, especially when it's about creativity-related work, it's learning from text data, probably from books and articles, et cetera, or even from images, from artists' pictures and paintings. It learns from music as well, in many cases. All of those, as you described, are only the outputs of human creativity. They are what happened at the very end, when all of the thoughts and ideas came together into one artifact.
If AI is only learning from those, it doesn't actually learn from what happens in our brains; it only learns from what came out of our brains, so to say. This is what makes us as human beings so unique, because we don't even understand our own creative process yet. We just have good ideas and disruptive ideas, and we come up with them and then turn them into reality. If AI is only learning from the output and not from the process in our brains, it will always be, at best, only an imitation, and it will never come close to the original.
This is why I think the real differentiation is human beings being the original disruptors in the ideas, and AI, at best, only refining and iterating on those a little bit more. The creativity has to come from human beings, because that's the only original part of creativity that we will have.
Markus: Slightly facetious follow-up question is, when you wrote the book, where did you use AI to generatively refine the [unintelligible 00:20:50]?
Tiankai: [chuckles] Obviously I did a lot of proofreading with AI, that is for sure. That's one thing. I have to say, as a writer, I sometimes keep using the same words until at some point I get so tired of them. Then I just ask AI to give me some synonyms, because I'm tired of even writing or reading them myself. Those are examples of AI still just refining. I'm still actually the one who has written all of it; it just makes the reading experience slightly better.
Markus: I think this is something you were talking about before: the best creative setups are not one or the other; they're partnerships. When you're thinking about creativity and AI, I think that's really rooted at [unintelligible 00:21:30]. I'd actually like to read one specific thing out. There's a comment you made, and I would like to celebrate it and read it very, very slowly for everyone out there to get the hang of it.
"If your AI strategy only focuses on generative use cases, you're missing half the picture. Traditional AI systems are already embedded in your organization, like credit scoring tools, demand forecasting engines and diagnostic models. They aren't just operational assets. They're part of your creative infrastructure. They support better decisions, faster problem-solving and innovative thinking."
I want to applaud that because I think people take it for granted; it's just LLMs. People are now saying LLMs equal AI. Really, they're just a very small flavor of it. How do those conversations play out with some of your clients? How often do you find yourself reminding them of those other use cases, and how do they react to that?
Tiankai: That's a good question. I don't usually even have to be the one to remind them. It often happens when we run these discovery workshops with different parties in the room. The people that have been working for years already on machine learning algorithms and models are the ones who bring it up. They're like, "Hold on, though. We act like AI is completely new, but we have been doing machine learning for quite a while, and it's already been enabling and heavily supporting business processes that are critical for our business. Let's not act like we're reinventing the wheel here. Let's build on it instead."
That's usually very helpful, because as consultants we can definitely go with it. We can say, let's learn from what has and hasn't worked so far, let's build on it, and let's make generative AI successful because of those lessons, maybe even connecting it to what we already have in place.
Markus: Finally, let's talk about conscience in the book. There are a couple of things. Somebody, I can't remember who it was, and I wish I could, joked about AI being a way to make mistakes quicker and at scale. You put it a bit more gently: it doesn't just amplify our productivity, it amplifies consequences. Revenge effects: those delightful moments when innovation bites back, not because anyone was evil, but because complexity has a sense of humor, and it's darker than we'd like.
I don't know. I wonder if there's a German word for delighting in somebody else's misfortune. Maybe you can think of one. Is there some schadenfreude with these revenge effects, as you call them, that everyone is secretly delighted to see? What do you think about that? I've had conversations with people where they say one of the things about AI and LLMs is that they don't actually solve technical debt, they hide it.
Tiankai: Yes, I think so. I think it's more that with technological advancements, and not only AI, if we are just starting to use something and applying it to new use cases we haven't done before, there's always the danger of not being able to predict the consequences. That is something we cannot be too reckless about. We just have to be a little bit more careful and sometimes take a step back to spend more time thinking about what could go wrong, because some things are just too hard to recover from.
We can allow ourselves some schadenfreude where it doesn't impact any life-or-death situations, or where it's light enough to still be entertained by. If AI actually starts making choices about human lives and society and it goes badly, then there's nothing to laugh about anymore. It's probably also too hard to revert what was already done, because AI did it so fast and at such scale.
Markus: You're starting to see these consequences play out. Not life-or-death situations, but situations we can all get behind, where judges in the courts in the UK and the United States are actually catching out lawyers who are using LLMs to generate huge swathes of their cases. They're literally making up law: cases, citations, all of those things. And they're starting to get fined. You need to verify everything; it's those guardrails. Being data people as well, you talk about alternative metrics, and my favorite one, I think, would be return on ethics. How do you think you might go about modeling and measuring that in a corporate environment?
Tiankai: That's a very good point. This is one of the more, let's say, disruptive ideas that I have in the book, because I know that not a lot of people are doing it nowadays. The idea is basically a counterpart to purely commercial metrics, because commercial metrics can often conflict with doing the right thing from an ethical point of view. Having not only a return on investment but also a return on ethics would hopefully balance it out a little bit. The key thing, as with any ROI calculation, is how do you even define the return? What are the metrics there?
I think we could do it at different levels. On one hand, it might be more mindset-based: we would run surveys about people's attitudes and how aware they are of the right principles for working with AI. But there are also behavioral metrics: people actually doing the right thing, following certain policies, using the guardrails that are in place, maybe flagging and reviewing each other's unethical behavior, voluntarily or involuntarily. Basically putting those in place, measuring them, and seeing if they are actually being applied. Ideally, we have proof that we are investing in being more ethical and it comes back as actually being more ethical.
Then, at the very end, there is the compliance angle: you might simply have avoided fines. Avoiding fines, avoiding people going to jail, that could also be part of it. I think there are a lot of ways to do it. We just need to do it consciously and prioritize it too.
Markus: There's one more question I had. It's not part of your book, but GenAI has been informally described as a Dunning-Kruger machine: it confidently gives you an answer with absolutely no clue whether it's right or wrong. How do you tell that story to people at the start of their AI journey without dampening enthusiasm too much?
Tiankai: It is so true. I think many of those AI models and LLMs out there actually make you feel "very confident." You say, "I have this idea. What do you think about it?" and ChatGPT, for example, will validate you so much. It will say, "Oh, this is such a great idea. No one has ever done this." I'm exaggerating, but "This is the perfect idea. You should really continue with it. This is amazing." Even if this was not rooted in any expertise at all, or any check against what already exists, it gives you that feeling: "Wow, I'm onto something great."
That's exactly the Dunning-Kruger effect: with very little knowledge, you feel very, very confident. The funny thing is, as soon as you say, "No, I want you to critically review it and specifically compare it to any existing ideas that are out there. I want you to respond as if you're an academic professor pulling it apart and giving me very critical feedback," then it does that too. It's just positive towards what you do by default. It first amplifies your confidence, and only by having a healthy balance of confidence versus competence in your own character are you able to steer AI into being the right partner for you, and not just a hype person.
Markus: I think what you've said there is so, so important, because again, you come back to the theme. It runs through the book, and there's a subtext here all the time. It amplifies what's there: your skills, your biases, your confidence, your competence. You've got to have the ability to step back and examine not just yourself, but also the tool and the way you're using it. I think that's a very, very human skill to have, the ability to evaluate oneself. How do you feel about that as a proposition?
Tiankai: I think that's a really good point. I'm also wondering: what if organizations would, by default, fine-tune AI models to reflect the behavior they actually want them to have? So that it's not just validating each and everything by default, but when an employee is using it, it always finds the balance between validation and critical review. That is something we can put in by design.
Often enough, we don't think about these things. It's: let's just roll it out as quickly as possible. The foundational model is good enough. Even if it doesn't have all the context, just get it out and people start using it. Adoption goes up, KPI achieved, all good. Then down the line, you get a lot of bad experiences and things might break, which is maybe another example of short-term versus long-term thinking.
Markus: Are you saying that enterprises should train their own versions of foundational models to reflect their stated cultural goals and aims? Is that something they should be looking to do?
Tiankai: I think so. Or, as you maybe pointed out before, if the trend is going towards smaller language models and more specific applications anyway, then it might even be more efficient to do it just for specific use cases. Then you have way more control and more guardrails that you can easily implement, instead of trying to do it universally for everything.
Markus: No, Tiankai, thank you. As I say, we're coming right to the end of conference season now. At the time of recording, you are speaking in Dublin this week. Will there be any more opportunities this year for people to see you talking about your book? Have you got anything planned early next year for people wanting to catch up with you in person?
Tiankai: Absolutely. I'm not completely sure when this episode is coming out, but I can recommend the Forward Data Conference in Paris. That's at the end of November. For those who are in Europe, or want to come to Europe, that is a really nice one to go to. The next one that I'm very excited about is the Data Modeling Zone in San Francisco in March 2026. If anyone based in the US is listening to this and might want to come around, I'll be there giving a three-hour workshop on Humanizing AI Strategy. That might be another opportunity to work really hands-on with me through the contents of the book and make concrete plans on how to implement it.
Markus: Tiankai, I want to thank you for your time. I appreciate you're super busy at the moment. As I said earlier, the response to the book has been really positive. I enjoyed it immensely, and there was a lot in there that really leapt out; it has been thought-provoking and just genuinely thoughtful. It remains for me to thank you for your time and to thank everyone for listening.
Tiankai: Thanks a lot for being the host again, Markus, this was a lot of fun. Thank you.
Markus: No, thank you. My pleasure.
[00:31:59] [END OF AUDIO]