Brief summary
There's been a lot of discussion and debate in recent months about exactly how software engineering will be reshaped by AI. While it remains to be seen what the discipline will look like once things quieten down (if they ever do), one thing has been somewhat neglected: what does software engineering actually feel like in this AI-intensive environment? If we're no longer writing code, or even interfacing with it in the way we're used to, what does that mean for our professional experience?
On this episode of the Technology Podcast, host Ken Mugrage is joined by Nate Schutta to discuss the software engineering experience today and to dig deep into what the work feels like when AI agents change our relationship with code. Nate is one of the authors of Fundamentals of Software Engineering (alongside Dan Vega) and appeared on the podcast in May 2025 to discuss the book; with so much change having taken place since then, Nate is perfectly placed to offer a perspective on what software engineering means today for an industry navigating significant change.
- Learn more about The Fundamentals of Software Engineering.
- Listen to Nate discuss the book on an earlier episode of the Technology Podcast.
Ken: Hello, everybody. Welcome to another edition of the ThoughtWorks Technology Podcast. My name is Ken Mugrage, I'm one of your regular hosts. I'd like to introduce to you Nate Schutta. Nate?
Nate: Thanks, Ken. Hi, I'm Nate Schutta. I'm a cloud solution architect, I guess, is probably the best way to describe me, although my general way of saying what I do is architect as a service. I go places, talk to people about architecture and all that kind of fun stuff. I do admit the first time someone called me that, I did sound the acronym out in my head and realized it may not have actually been intended as a positive thing. It may not have been a compliment, so I do understand where that comes from.
Ken: It's great to have you, and being a little modest there, you also had a book that just came out recently. Tell us about that briefly.
Nate: I did. Along with a good friend of mine, Dan Vega, we wrote The Fundamentals of Software Engineering. This actually grew out of me sitting on the couch one day and thinking of all these little tidbits that you pick up as a software engineer over your career.
I remember I worked with this guy who had a binder clip attached to his wall and you'd go ask him something and he'd think about it and he'd grab his binder clip and flip and like, "Ah, here's the command right here." You pick up all these things, and so I started jotting them down in a note on my phone thinking, maybe I'll get to nine, maybe there's a listicle talk here, like nine tips on being a better software engineer, you'll never guess what number seven is or something like that.
I very deliberately stopped typing when I got to 42 and realized, "Okay, there's a there there," and so I reached out to my good friends at O'Reilly and started talking about it, and started doing it as an online training kind of a thing. We did a software engineering fundamentals course in three weeks, and I realized, "Okay, I've got 15, 16 hours of material here." We ran it, workshopped it, and they're like, "Yes, let's turn it into a book." The rest is mostly history, and it finally came out last fall. Dan and I have been out touring, trying to get people as interested as possible in it, and we've been having a lot of success with that.
Ken: It's certainly an important time for fundamentals, despite appearances. I must say, on a lighter note, those of us of a certain age are going to latch onto that number 42 quite a bit. Dan and Nate have apparently found the answer to life, the universe, and everything.
Nate: Exactly.
Ken: What's funny is, as we were talking about a little earlier, in my career I used to write code for a living, but I haven't in many years. Although I have produced more applications in the last several months than in the previous 10 years, I won't say I've written any code, but I've produced applications. We're in a moment where people are saying, "Hey, I can build software without knowing how to code." What does that feel like from where you're coming from? What's your take on that?
Nate: The ironic thing to me about this era we're living through is that I would argue the fundamentals are more important today, because we are generating more code, and so there's more for us to deal with, handle, et cetera. The interesting part of this is that we're probably reading even more code than we did before. For a lot of us, regardless of what your career arc is, we rarely teach people how to read code; we just expect you to figure it out.
Certainly coming out of university, I thought, obviously, as a software developer I'd just bang on the keyboard for eight hours a day. Then you get on a real project and realize that most of the work is reading through someone else's code and going, "What the heck are they trying to do here, and why does it do this?"
It's like loading that problem into your brain is 80, 90% of the job; the typing is the easy part. Now we have a situation where our tools have given us even more capacity to generate code. I don't think that always generates good software. Dan coined a phrase earlier this year: "Code is cheap, software is expensive." I still feel like that's one of these missing pieces. He likes to show this example where someone says, "Dude, I just created an app. You're out of work." He said, "Great. Can I see?" "Yes," and it's a link to his C: drive, and you go, "Ah, okay."
You know there's more to it than just generating code, you've got to get that to production, you've got to make sure it's secure, you've got to make sure that it can scale, that it's reliable, that you can maintain it. There's a lot more to being a software engineer than typing. I think to me, that's one of those ironies, we focus so much on the writing part of it when-- I don't know about you, but I've never once been asked in the interviews, "Nate, what's your typing speed? How many words a minute do you type, because that's what we care about."
Typing has never been the bottleneck in software; that's not the hard part. I feel like we've got some added capacity, and I think that opens up some interesting avenues that we didn't have before. I was thinking about this this morning: the floor for software got a lot lower. I think throughout my career about how often you might've had a team ask for something like, "Hey, we need this widget. We need this tool." Our answer to them was, "Well, sure, you can have it, but it's going to be 18 months, and it's going to be seven million dollars." They're like, "Oh, I don't have the budget for that. I don't have that time." You couldn't build it for them. Whereas now, with these tools, we actually could build them the custom lightsaber they need, which might only be for these five or six people, but it isn't a multi-month, multi-million-dollar project involving 45 engineers and everybody else.
This means that, all of a sudden, it becomes more economical to meet some of these more niche needs than it would have been in the past. I think that's exciting. I'm really interested to see what that opens up for us. As an example, Dan created this custom content moderation system for himself. He'd been stitching a few tools together as he creates a new video, writes a new blog post, or puts together a newsletter.
He created what, for him, is exactly what fits his hand. He creates a video, it gives him, "Here's a transcript, here's a summary," and just simplifies his life. He could have written that before, but it would have been months and months and months, and he'd been like, "Ah, it's not worth the effort. I'll just stitch these three things together."
Now, with today's tools, he can put that together in a couple of weeks, maybe even a couple of days, and he's got that custom thing just for him. He's not going to try to sell it; there's a huge jump from that to enterprise class, where I'm running billions of dollars through this every single day. I do think that's a really interesting aspect of this that I don't feel we've explored enough: what are all the little niche tools we're going to be able to build because it's now economically viable, where in the past it would have been, "Ah, no, not worth it"?
Ken: It's funny you say that because the apps I've written are mostly for me to use and that sort of thing. What we see a lot of people that are even more on the business side, CEOs and marketing people and salespeople, going, "Why did this always take a year? This is easy." They're not thinking about, "Oh, but is it secure?"
Nate: Right.
Ken: You've been through a few hype cycles, right?
Nate: Oh yes.
Ken: You're active in the Spring ecosystem, and you've watched cloud and things like that. Where's this hype curve compared to those?
Nate: It feels like it's accelerated; it has just happened so quickly. Maybe that's not really the case. Maybe they're all this fast, and you just forget over time. It is going to be very interesting to see how this all plays out. I do feel like maybe there's a bit of an oversimplification of what it means to create an app. There are a lot of folks who are like, "Oh, this is so easy." I've seen this commercial; my grad students and I were watching it right after the Super Bowl, and he's like, "Oh, did I just create an app?" "Yes." You put something together, that's cool. But again, you're missing all the 'ilities.' You're missing all the architectural things. You're missing all of that.
We've all learned a lot through making our own mistakes. I think every software engineer, at one point or another, has written a script that deleted part or all of their hard drive. You just think about some of those things where, yes, you may have put something together, and it may work in these cases, but did you think of the edge cases and what happens when month end falls on the proverbial super blue blood moon eclipse or some other weird thing? It's always time-related.
I feel like that's still missing from the equation: all the other things we've learned through trial and error, and often at great pain. We'll learn those lessons again, which seems pretty obvious. I am curious to see how that plays out. It does seem like you hear different stories. You hear, "Oh, 100% of our code's now generated by AI," and then in other instances, "Well, actually, it's had no effect," or "95% of these projects have failed." There's a lot of FUD here, in my experience.
Ken: Do you think it's the same lessons but a new cohort?
Nate: Yes, that does seem to be the answer. I read something that said every five years, basically, half of our industry turns over as we bring new people in, and so it's like you're constantly dealing with folks who have not experienced that pain. That was part of the impetus for us to write the book: to help folks maybe skip some of those potholes, because Dan and I fell into most of them.
"Here's our roadmap, avoid our mistakes." I have a son who just started college, and I remember telling him when he was younger, "You can learn from my mistakes. You don't have to remake all of them." Although he did say to me one day, "Well, Dad, sometimes I just need to make my own mistakes." I'm like, "Okay, I get that. You can also, though, take advantage of my experience." I feel like that's part of what we owe the next generation: to help them avoid the mistakes that we made. Now, I will admit, sometimes when you make your own mistake, it sticks a lot more in your brain, and you're like, "I'm never going to do that again." Sometimes just reading it, or hearing you and me wax poetic, doesn't make it as real as, "Oh, I just lopped off my finger. I guess I shouldn't make that mistake again."
Ken: Mine's 28; I urge you to keep fighting the good fight, but not to have too high expectations. I don't know if it was you or Dan, and I'm not even sure the quote's accurate, but we heard that provocative claim that universities and boot camps teach you to code, but not to engineer.
Nate: Yes.
Ken: What's the gap?
Nate: I think that, as a graduate of a computer science undergraduate program, I understand some of the limitations of what we're trying to do over the course of a four-year program. Mostly, as an undergraduate, you're being taught to go to grad school: "Here are the things you need to go get a master's or PhD." Boot camps are mostly trying to stuff as much as possible into your brain as quickly as they can and get you out the door, employable, in some number of weeks. We miss a lot of the bigger picture, I think.
We have this tendency to focus on the typing bits: here's a framework, here's a language, here's how you talk to a database, and not the bigger picture of how you turn code into software, how you turn that into something that is robust and resilient and scalable, all these 'ilities' that we think about as architects. What does it take to move to that next level? Instead of being tactical, "the code compiled, ergo it must work," it's, "Well, now I've got to write some tests, and then I've got to work with my QA people, and then I've got to work with my BA or my product owner to understand what the business actually needs, and I've got to know how to communicate with other human beings."
I think that's been one of the biggest shifts I've seen. There was this stereotype, certainly when I was first coming up, that software engineering was lone wolves, the cowboy coder banging away. Boy, I don't think I have ever, in my entire career, worked on an application that was just one person. They're all team sports, team events, and I think it surprises some folks to understand how important communication is. You've got to be able to communicate with other technical people, but also with people who aren't as technical. How do I translate that? How do I work with my stakeholder to understand, "What do you really need this system to do?" How do I get beyond them trying to solution me to, "What is your actual requirement here?" Then, how do I help them think through the edge cases?
Humans are really good at the happy path, but we all know that's not the hard part. In software, we can get the first 80% done relatively quickly, for some value of quickly. It's that last 20% of, "Oh yes, what do we do here? What about when this happens?" It's all the weird stuff that only occurs on contact with real users, when they do things we never anticipated.
Now you'd like to think some of those are easy to anticipate. I don't know if you remember earlier this year, I think it was this year, there was a power outage in San Francisco, and all the Waymos were like, "Uh-oh, we don't know what to do when the stoplights are blinking." Nobody bothered to program that in. I understand that there's a lot of edge cases that you and I could sit down for a week straight and think we thought through them, and we'd still miss a whole bunch. I would have thought a power outage where the stoplights are blinking would have been something that would have occurred at some point that, "Oh yes. That's going to happen."
There are those things that if you're not thinking about, if you don't work through that, it's going to crop up, it's going to bite you. Hopefully, we're a little more proactive about it. To me, that's the difference from just churning out code and building software of actually doing the engineering that goes into building robust systems.
Ken: It's funny you bring that up, because we hosted an event a couple of months ago, The Future of Software Engineering, I think it was. One of the topics was what has to be true for self-healing code. Somebody made the comparison of using AI not for retrospectives, but for disaster analysis or failure analysis, that sort of thing.
A human often, they'll hear a story, and they'll be like, "Oh, that reminds me of a thing that happened two years ago. Let me go look up that email," or what have you. AIs just don't have that context. They may be able to go line by line through the code, but it's funny because, personally, one of the biggest tricks that I've started using with Claude Code is telling it I'm frustrated. Actually say, "You keep getting this wrong. Stop guessing," and it will say, "You know what? You're right. Let me go research this."
Nate: Sure.
Ken: It's funny, because at that event we were talking about context, but another thing came up, and I'm really curious about your take on this. There was somebody there, and it was mostly pretty experienced people; one of them was a pretty senior manager. It's Chatham House Rule, so I'm not going to say who or what company. They said they would have engineers come to them and say, "Oh, the AI doesn't work for that." Their response was, "Well, did you help it?"
The engineer would get this weird look on their face, and it's like, if it were a grad student or something like that, you would say, "Well, have you tried this, or have you tried that? Did you do that with your AI agent?" The guy is like, "Well, no, of course not." "Well then, you failed, not the AI." What's your take on that? Are developers now managers?
Nate: Ooh. Boy, that's a good question. I think it's possible. The one constant in my career, and I'm certainly not unique here, has been change. I had a dear friend; we were playing golf one day, walking off the 15th tee at my golf course, and he said to me, "Nate, don't you think it's time for you to get into management?" I'm like, "Whoa, dude, those are fighting words. Why are you saying this to me?"
He's like, "Well, when I was about your age, I was getting bored. I could do my job in my sleep, and then I got into management, and I became a force multiplier and got to help people grow. You're about the age I was when I did that. Don't you think you should be making that transition?" I looked at him and said, "Well, John, I think the big difference is my job's never been the same for more than about two years. I don't feel like I've ever been able to just do it in my sleep. It's constantly evolving." To me, that's what I see here: this is just another evolution.
We don't quite know how it's all going to play out yet; it's still pretty early. I do think we're still struggling to find the best way to work with these tools. It's clear that good prompting helps, and good guardrails help, but it does feel somewhat like managing. Maybe the right analogy is that you've got an intern, and the intern is taking direction from you. I think back to a summer intern I had, who's still a very good friend of mine, one of these folks you dole out little tasks to at first, little simple things where it's obvious and contained, and maybe you give them really direct guidance, but eventually I started giving him more nebulous tasks.
I remember the first time I gave him something where I didn’t have a preconceived notion of how he should fix this. I didn’t even necessarily have a fix yet, I’m just like, "Hey, go try this. Go figure it out." A couple days later he came back with this super clever, unique solution, and I just started laughing. I went straight over to our boss, and I was laughing, and I told him what happened. I said, "If you don’t give this kid a job offer, you failed. You’ve got to get this kid on staff." We hired him. He's had an excellent career ever since. He’s a great guy. Maybe it is part of that. Maybe it’s us learning how to work with these tools effectively and understanding, then, what our role is.
I do have some concern that some developers are just going to be like that bobbing chicken, going, "Looks good to me, looks good to me, looks good to me," without understanding that you're still responsible for that code. You're still responsible for what it does in the system. As my co-author Dan likes to say, "You're not a passenger, you're the pilot." You have an active role here that you have to maintain. To me, part of it is: how do we work together to generate this code, potentially faster than we did in the past, then understand how I shape that into a full software system, and then how I make sure it can get to production in a reliable way?
Ken: It's funny, because you touched on it in a couple of different answers, but like you said, typing's the easy part. In the book, I believe there are things like soft skills, communication, stakeholder management. How do those belong in a fundamentals of engineering book?
Nate: Because we don't teach them to you. Again, I had an undergraduate degree in computer science; we never talked about working effectively with other people, how to communicate, how to write up your ideas, how to present. I'm often asked for book recommendations, especially around architecture. I'll typically recommend one of Neal and Mark's books, either Fundamentals of Software Architecture or Software Architecture: The Hard Parts, but the other one I recommend, and I've made my grad students read it for several years now, is Dale Carnegie's How to Win Friends and Influence People.
I was actually at an event and made that recommendation. A year later, at the same event, someone came up to me and said, "You recommended that book to me, and it changed my career." I'm like, "Outstanding. I love to hear that." I don't know when the first edition of that book came out, Ken, but I know it's older than you and I. You and I have been around the sun a few times, and that book is evergreen. I do think that's another really important part of this. A good friend of mine explained this to me once: we have this tendency in software to focus on very short-lived skills. Skills have a lifespan. Think about something very tactical, like a JavaScript framework. Those tend to come and go pretty quickly. I'm not even sure what today's hip, modern UI framework is, and I lived in that space for a long time, but haven't been there in a while. Those tended to be fairly transitory; a new one comes along every few weeks, or something to that effect.
Now, I guess we'd say coding agents seem to be that way. Every time I turn around, somebody's got a new model that's faster and better, or a new OpenClaw, and all these other fun things. There are other skills that are going to last your whole career: how to work effectively with other people, how to communicate. That stuff really matters. I've seen a lot of engineers get really frustrated because they look around and go, "Why did that guy get promoted? I'm a better engineer than they are." It's like, "Well, are you? Let's take a step back. Let's look at what it is they can do that you can't."
As much as we love to focus on the hard engineering things, those soft skills really matter, and they're evergreen. You're constantly going to be working with other people. You're constantly going to be communicating, whether that's over email or Slack or building a presentation. In general, if you look at the people who are still moving up the rungs and you're not, it's because they're better at some of those other skills that you might be ignoring.
I ignored those early in my career, because I thought the only thing that mattered as a software engineer was, "Well, I know more about databases, I know more about Java," and I didn't fully appreciate how important some of these other things were, or that, while we don't like politics, it happens. Any time you've got two people in a room, you've got politics. You've got to learn how to work with it. You've got to learn how to use it to your advantage. You've got to learn how to manage your manager.
These are the kinds of things that really inspired us to write the book: we learned them the hard way, through trial and error. This is the book I wish I'd had when I first started. This is the book I would hand to someone who just started on my team: "Here are the things I need you to know. Here's the bigger picture of what it means to be a software engineer, not just a person banging out code."
Ken: I've been thinking a lot about it. Again, you and I were chatting beforehand: AI changes the stakes, but not the rules. Nathen Harvey from Google's DORA project uses an example, and he sells it way better than I do. The basic idea is that if you put an amplifier on a bad band, it's just a bad band, louder. It doesn't make it better. AI is an amplifier. It either makes more people enjoy your music or run away from it, one or the other.
In that vein, think about testing strategy and that kind of stuff. You make a comparison between a testing strategy and writing tests. What's the difference?
Nate: To jump off your thought there a little bit about making a bad band: I'm not a particularly handy individual. If there's a video and it's literally "unplug this and plug this in," I can probably do it, maybe, but by and large, if there's a project around the house, I'm going to find a professional who knows how to do it. The analogy I would make is that giving me a nail gun does not mean I can go build a deck. That would end badly for me, for the deck, and for anyone who stood on the deck.
Ken: I'll interrupt you to say that would end badly because you're supposed to use screws on a deck. [laughs] Oh, there you go.
Nate: See, exactly. I think it's the same with these tools: without the proper training, you don't know how to apply them, you don't have the bigger picture of, "Oh, that's the naive way to solve that problem. I know from experience that this is actually the better way to solve it. Don't use that construct, use this construct."
There are countless examples of that. We've already seen cases where, yes, an AI tool will generate code that works, but not as well as it could. It's not the ideal way, not the Platonic way to do it, not the best way. Although I will say, in software there really is no best way; there's just least worst, in many cases. This is a suboptimal solution to that problem, a naive solution that will work but could be better. I do feel like that's part of the equation.
Now, back to your actual question, because this is the joy of these things: we take the question where we want. Writing tests is important, but so is understanding: what do we need to cover? How much do we need to cover? I've had debates with people about code coverage numbers, and when I was younger, I was very dogmatic that everything must be 100% across the board, no ifs, ands, or buts; test everything. With age, you learn pragmatism. I'm very much of the opinion that you need to be ruthlessly pragmatic about any of these things, that dogmatism is dangerous. So what are we testing? What are the important things to test? Where can we say, "We don't have to worry so much about that"? And understand that there may be a point where, to get another percentage point of coverage, the level of effort required might not be worth it. We need to make a judgment call there, and understand that, "Well, this part's really critical. We've got to make sure we're testing that." I think with AI, that becomes incredibly important.
I've had various discussions with folks, and I know you have too, around: is code going to be a durable asset in the future, or is it just going to be specs? I'm not sure where to land on this yet. I cling to code, but I sometimes take a step back and say, "Well, is that just because that's what I've always done, how I've always done it? Do I need to think about this differently?" I can certainly see a space where the spec becomes the thing, and we just constantly regenerate the code. I do have some concerns about that, not least the cost: what does it take in terms of time, tokens, money, energy, et cetera, to regenerate a non-trivial code base on a regular basis?
Then there's also the verification side of it. If I do have a robust test suite around this code, then it isn't as important that it's non-deterministic. I don't really care if I get the same code, as long as I get the same results. That's all our customers care about: if I give you these two inputs, do I get the same output every single time? That's what matters. I had this conversation last week with someone, and he made a really good point. He said, "Listen, I can write the tests that show you that the system does these ten things, but I can't write tests that prove it doesn't also do these other five things." I thought, oh man, so we're going to have the--
Ken: That's deep! [Chuckles]
Nate: I'm just going to lop off a few pennies and put them in this other account. I'm thinking to myself, "Boy, that's a really interesting attack surface that I hadn't fully considered." I can't prove that the software doesn't do things in addition to that. If you and I are working on a project together and somebody sneaks in some code that throws money into an account, let's just say, to use the Superman example, okay, we're going to see that. Somebody is going to notice it. Somebody's going to be like, "Hey, what are you doing here, Nate?" Or we'd all have to be in on the deal.
That changes if I'm not looking at the code, if I'm treating the generated code like Java bytecode (I have looked at Java bytecode, but very, very rarely). I can understand that, conceptually, we may get to a point where we're writing in more natural language and it's generating maybe straight down to machine code. I don't know; maybe we don't even need Java code if it's just going to be obliterated all the time anyway. Maybe it can write straight to bytecode. I don't know.
There are all sorts of different ways to get to the result. But then I also lose the visibility of, "Is it doing some other stuff? Has it done things I don't want it to do, or didn't intend it to do?" How do we close that gap? Because you can't write an infinite number of tests. I don't quite know how we'd ever fix that [chuckles].
Ken: That's a really interesting thing, because again, some of us are of a certain age. You talk about bytecode and what have you; 30 years ago [chuckles], when using compilers, we were very careful about checking: did the compiler do what I expected it to do? Now we just trust them. What are your thoughts on how close we are to saying, "It's a compiler, I'm just trusting it"?
Nate: I think we're further away than some people would say. I'm always cautious when you hear bold proclamations; I need proof that goes along with them. The bigger the proclamation, the more proof I want to see. It's also important to understand where these bold proclamations are coming from. In many cases, they're coming from folks who have a huge financial interest in them being true.
They're trying to raise a lot of money based on very, very high, stratospherically high valuations that require this to be true in order to be justified. I think it's always good to be a little skeptical; take it with a couple of grains of salt. I suspect at some point, and I don't know if that's six months from now, 15 years from now, or somewhere in between, we probably will get to a point where we have an awful lot of faith in what these agents are doing. Over time, as they improve and get better, and as we get better validation and verification pieces in this equation, we'll probably start treating it as just, "I don't know, I haven't looked at that in years."
It reminds me a lot of when Hibernate first came out. I had a good friend who was really, really good at SQL. I'll admit I was terrible at SQL. I could select star stuff, but once we got into inner joins and outer joins, my mind just didn't map to that for some reason. Mark was great at it. He was telling me that when Hibernate first came out, he was super skeptical, and he would look at the SQL it generated. He's like, "That's garbage. It shouldn't do it that way."
Then, eventually, he just realized, "It's got a good reason. I'm going to trust it because it's better than what I did. Even with me trying to write optimized SQL, it was more performant. You've proven it to me." I suspect we'll go through a similar phase where, gradually over time, our cynicism will start to fall away, and we'll go, "Okay, you've proven it to me time and time again," just like with that intern. When Joe first started, I gave him very easy tasks, very straightforward, almost told him exactly what to do. By the end of the summer, I'm like, "Go get it. You've got it. Go solve this problem. Call me if you need help." I suspect we'll see a similar trajectory here over some period of time.
Ken: You mentioned at the top that reading code is a fundamental skill, and there's no question AI can generate code faster than we can read it.
Nate: Absolutely.
Ken: Is that still a fundamental skill? Where is the bottleneck? How do you handle the throughput that those investors want?
Nate: I certainly think it is, at least in this interim period until we potentially get to the point where we are just writing specs and not worrying about what comes out the other end, per se, as long as, again, the tests all pass. In the meantime, I think it's even more critical. I think it's more likely that we're going to read even more code than we did in the past because it does allow us to generate things much more rapidly. Your level of investment there may change, and there may be parts of the app where, "I don't really care. I'm probably not going to go look at the CSS. As long as it looks right, that's fine."
If I'm using AI to generate our pricing algorithm, or to make a change to our pricing algorithm, or something that is really critical, I would argue you're going to want to give that a pretty good look. You're not just going to want to YOLO that and see what happens. I do think it's still going to be important for us to be able to look into that. I am absolutely of the opinion that we're going to experience an uptick in shadow IT.
I think every engineer who's been doing this for more than about five minutes has had that experience where, in the past, it was always a spreadsheet. Somebody swizzled together a spreadsheet just for their own needs, shared it with a few other people, and before you know it, the whole team uses this spreadsheet, and $1 billion a day is going through it. Then on Tuesday, there's a problem, and it lands in our lap. They're like, "Okay, IT people, fix this problem." You're like, "I didn't even know this existed. Where did this come from? Who wrote it?" "Oh, I don't know, some guy who used to work here five years ago." Great.
You have no idea what it is. It's a bunch of stored procs and a bunch of weird stuff merged together. There's baling tape and duct twine all around, or duct tape and baling twine, I guess, however you want to phrase it. Now we've got to disentangle all that, figure out what's going on, figure out what the requirements are, what it's supposed to do, has it ever done it right? I am very much of the opinion that that's going to land hard soon, for some value of soon, and then we're going to have one of these dropped in our laps. Guess what? You are going to have to peel it apart, and you're going to have to ask, "What is it doing? Was this right? Is this wrong?"
I think for the foreseeable future, reading code is definitely going to still be in our wheelhouse, whether we like it or not. I know how frustrating it is. I know most of us hate reading code. Certainly, we hate reading other people's code. Like, "What idiot wrote this?" It's like, "Oh, wait, that was me from a couple of months ago." Again, another very common experience for all of us to have had.
Ken: We worked on a project many years ago where we finally had to put in a rule that said don't put in curse words, because people would put in a comment like, "I can't get this to blank and work." It was like, "Would you please not do that. It's not going to get taken out." You think it's going to, but it's not. It's going to live on, but anyhow.
Nate: TODO: fix later.
Ken: Yes, exactly. I used the AI overlords to help me do some of the show notes for this, and they put this question in here, and usually, I rewrite 80% of it. This I thought was unfair and a little funny that it thinks you should be able to answer this, so I'm going to ask it to you anyway.
Nate: Excellent.
Ken: What does a software engineer's role look like in three to five years?
Nate: [sighs] Wow, that's a great question. I suspect we're going to do even more people stuff. I know a lot of people got into software because, "I don't want to deal with people. People are messy." My wife sent me an Instagram thing where this guy said, "I'm just not a big fan of people, and then I got into a career that's basically 100% me working with people. That's my bad." I do feel like some people got into software thinking that it was an escape from messy people things, only to realize that that's all this is. I've been on a lot of projects, been around a lot of companies, and honestly, I don't think I've ever seen a project fail because of a technology decision.
The problems we encounter, 99% of the time, are people problems. These two people aren't getting along, these two people aren't talking, there's just a divergence here, or we're not all on the same page. You think about a lot of the ceremonies and things that we do from an agile software perspective. A lot of what that boils down to is trying to get us all on the same page, getting us working together more effectively, tearing down those walls, and getting rid of the trenches between different departments.
I think you and I are old enough to remember the phrases "throw it over the wall to QA" and "throw it over the wall to production." Ironically, at the beginning of my career, I did have a QA person sitting right next to me, so I could literally say, "I'm going to throw it over the wall to Brenda," and she was going to test the stuff I was working on.
That worked. I think it's going to be even more of the people stuff and more of the "So, what should the software do in this situation? Tell me more about that."
I joke sometimes that our job is almost like a psychologist's. How did that make you feel? What can we do about that? How do we fix that? It isn't as much, "Wow, this is taking too long. How do we optimize this algorithm?" There are pockets of that, but they're few and far between. Much of this really is, "Okay, you and I need to have a conversation. We need to figure this out. We need to work through this. We need to do that trade-off analysis to say, 'All right, you can't turn every knob to eleven,' so how do we get this balanced in the way that's least worst for this particular outcome?"
Ken: Someone's listening to this episode. They come out on Thursdays through standard distribution, so most of the listeners hear it on a Thursday or Friday. Monday morning, when they go into work, what do they do? What's the first action they should take to start this journey?
Nate: I think it's having an open mind. Software engineering is this constant game of learning new things and picking up new tools, new languages, new frameworks, and I look at this as not being really any different than that. It may have a profound effect, it may fundamentally change some of the things we do day in, day out, but I would say that's kind of first and foremost, is have an open mind, try these things out and explore and see what works and doesn't work for you.
Again, I think so often we get stuck in this, "I have to follow this recipe, I have to follow this path," but you can shape it to your needs and what works best for you, in the same way that I encourage people to change their font. There are a bunch of really good monospaced fonts out there. Try a different one. It might help. Your eyes might like that one better. Try dark mode, try light mode. Tweak your environment. You absolutely have control over that, and so you should make it your own. I think so often we get stuck in the, "No, I just have to sit in a chair like this." It adjusts, so move it up, move it down, try different things. What I think is so amazing right now is all of these different models, some of which do things better than others.
Try it out. See what happens if you ask Gemini this question, and ChatGPT, and Claude. Which one gives you the answer you like best? At least this has been my experience: "Oh, I really like Gemini for this, I really like Claude for that," so take advantage of that and use those tools where you can. I think the worst thing you can do is put your hand up and say, "No, no, not for me. This isn't going to change anything." It is. We don't quite know how yet, but it's better, I think, to be playing with these things, exploring them, seeing how they fit in your world, how they mold to your hand, and then start exploring from there.
Ken: I think if you put up your hand and you say, "No, this isn't for me," the thing that's going to change is going to be your employment status.
Nate: I agree. Absolutely agree.
Ken: I want to thank Nate Schutta for all your time. Where can people find the book and hear more from you? What's going on in your life, to steal from Hot Ones? [chuckles]
Nate: Beyond my day job here at Thoughtworks, I get to go to a handful of events here and there. Dan and I will actually be at Arc of AI shortly. I believe we're going to end up doing Venkat's conference in the fall as well, Dev2Next. We've certainly pitched some other events too. I've been just overwhelmed with the reception that we've had when we present this as workshops and as talks. We've had a bunch of stay-at-homes, which is really gratifying as a presenter. There's nothing quite as fun as that, honestly.
We're on the socials. Dan and I do a podcast periodically. We had a bit of a lull there as we got really, really busy between travel and other things. We actually owe people some new episodes, but we do have a companion podcast that we're trying to put out there as well. I guess that's the rule, that everybody's got to have at least one podcast these days.
We're on all the socials and LinkedIn and all that fun stuff. Please engage. You can find the book on O'Reilly and on Amazon. If you've got an O'Reilly subscription, feel free to read it there. It is print on demand, so if you want a dead tree, I've got some dead-tree copies over my shoulder. We do try to give dead-tree copies away when we're out in the wild as well, so track us down. We're always happy to do that.
Ken: All right. Thank you again for your time.
Nate: Thank you. Thanks for having me.