Essay

Managers Returning to 'Coding' with AI

I have been talking with other engineering leaders about the impact of AI on our profession. Although nobody knows what will happen, I keep thinking about where I’m standing and what others are doing today.

There is certainly a trend among managers of going back to building with AI. People who haven’t worked as individual contributors for more than a decade find themselves building software again; some for production, some for open source, some for fun. All have a mixed bag of reasons.

I hadn’t been coding for almost five years, and now I’m back to contributing to the codebase. I want to uncover why I’m doing it, where I stand, and what impact it has on me and my team. I believe where I stand matters to my success as a leader, as does where I’m headed, because much will change from where I stand today. I see the road ahead as (once more, as it has always been) full of mud, rocky hills, and a lot of pain.

Based on what I observe, there are a few groups of managers I can be part of.

Managers who

  1. are expected to “code.”
  2. are “coding with AI” for self-entertainment.
  3. are “coding with AI” as an extra to stay grounded.
  4. are nervous/curious/excited about new tech and feel like they need to learn to lead their teams better.
  5. feel the FOMO.
  6. are doing whatever they were doing pre-2025.
  7. others.

Before I dive into each group, I want to clarify one thing: I put the word ‘coding’ in quotes because it’s not the coding we knew pre-2025. Here, “coding” means the work is performed by an AI tool (powered by an LLM): the AI writes the code, not the person. The person perhaps tweaks a few lines here and there by hand. That’s it.

Managers are expected to “code”

I think this group is straightforward. A lot of articles have been written about it.

With or without AI, these managers are expected to work like a combination of an individual contributor and a manager. This group was always there, but small in the late 2010s and early 2020s. However, the industry expectations have recently shifted toward this direction.

Instead of being only people managers, they also code. Now, with powerful LLMs and good agent harnesses like Claude Code, Cursor, etc., this group’s IC work has become easier (or more chaotic).

Along with the upsides, there are many downsides to managers actively coding features, due to the unbalanced power dynamics within the team. Many problems pop up when the team is more competent than the manager (which is often the case) but can’t argue against the authority’s decisions.

(A small note: many managers looking for jobs are hitting this expectation and finding themselves sharpening their technical skills. It’s very challenging to do while searching for a new job. There are also many, like me, taking precautionary steps in case we are laid off.)

Managers are “coding” with AI for entertainment

This group is probably the least harmful of all. One can argue that it’s the best.

For these managers, building software with LLMs is a hobby, so to speak. They play around with things that don’t contribute to production code, or they work on hobby projects outside of work.

There are few downsides for the people at work, other than the time spent. The benefits are much higher than in the first group: these managers see what’s out there and can better judge when people talk bullshit, without incurring the penalty of authority imbalance.

Managers are “coding” with AI as an extra to stay grounded

This group acts on a felt necessity; they are a combination of the first two groups.

Nobody directly expects them to build with AI. They might or might not have a hobby project for entertainment. Often (not always), the expectation comes from within and pushes them to stay grounded. However, these managers usually stay out of production-level building (that’s what separates them from the first group).

At most, they pick up bug tickets that otherwise would not have been prioritised. Their drive is to stay grounded in the codebase and team processes while learning how to use the available AI tools, so they can understand how the AI shift affects their teams.

I felt close to this group for a while. Not because I don’t trust my team, but because my inner engineer was telling me not to lose touch with the craft. I couldn’t justify any other reason. But then I realised my motivation and intention were different.

Managers feel nervous/curious/excited about new tech and think they need to learn to lead their teams better

I put myself in this group. Therefore, I’m going to write directly from my perspective.

The industry is changing, and I’m trying to navigate it. As we are still in the early phases of AI’s impact on software development, no best practices have been established yet. The industry hasn’t had enough practice to understand what works. What we call best today changes tomorrow.

I feel an urge to follow the shift closely to lead my team better. Although my team follows the changes with hands-on practice, I still want to understand and learn together with them.

Also, this change is not something I can ignore. When cloud infrastructure arrived as a new technology, your life as a UI engineer might not have changed: you got an API hosted somewhere. The same with the introduction of iOS. The iPhone changed the world, but if you were a backend developer, you still offered APIs; you just had yet another client and had to maintain APIs longer than you’d like (I’m oversimplifying). With AI, the work of every software engineer is changing. And that’s the main problem.

As a manager in this group, I don’t know how to set expectations with my team. I don’t know how to give feedback or mentor/coach them, because not only has what engineers build changed, but also how they work (and it keeps changing every day).

Before, it didn’t matter much to me what kind of software the team was building. I could see the missing pieces and how the team worked; I could find ways to evaluate their performance, collaboration, and communication. I could coach and share feedback on any of these topics to help them become more productive as a team.

Their tech stack was secondary. I could rely on establishing a feedback culture where they share feedback with each other on technical topics more than I do. I could also learn the tech stack along the way, which has always worked out so far.

Now, all my grounds are weak.

That’s why I have no choice but to learn how to build with AI and find out what I can expect from others. When someone doesn’t adapt to new tools, I need to know what they are missing out on so I can explain the impact. When it comes to using these tools, the team will (for sure) be better than me, as it always has been. But I need to push them to their limits and coach them to achieve their goals. And that’s what the fourth group is about.

Managers have FOMO

The fifth group’s main driver is the fear of missing out (FOMO). This is the group that overburdens others. Maybe they won’t admit it, but this is the group that sends you a Slack message copy/pasted from ChatGPT without changing a character. They follow the latest tool or news without trying anything themselves or knowing exactly what problems their teams have. They act on hype.

Although it’s okay to receive a message written by AI, you never know whether they understood your message in the first place. There are gazillions of tools popping up every day; the hype is real, and it’s going to stay that way, one way or another. You can easily spend hours and thousands of dollars without creating any value for anyone. These managers mostly fall into that trap.

I hear that various managers (including directors and VPs) are sending huge pull requests, fully developed with AI, for review. I don’t know which is worse: dealing with the always-on AI talk of an ignorant person, or dealing with the work of an authoritative, incompetent person using AI.

As a manager, it’s difficult to stay mindful and avoid slipping into this group while still building with AI, especially when motivating the team.

I can also admit that FOMO can be embedded in any of the groups above, but I want to clearly distinguish this group: their purpose is not to create value but to create the illusion of value. You can be the judge of it. If you look around, I’m sure you’ll find someone who fits the criteria.

Managers are doing whatever they were doing pre-2025

This group is the opposite of the FOMO group. They avoid whatever is going on. They learned how to do their job over the years and built time-proven habits, so why change? Maybe there is nobody in their organisation driving change; maybe they have a very strict process for making even small changes; maybe something else. They have their reasons.

Regardless of why, these managers see this AI trend as similar to any JavaScript framework trend that burns out over time, or the NFT trend that will be long forgotten.

I can’t tell what will happen with AI in the future, but it’s already clear that coding as we know it is dead. Whether you accept it or not, nobody needs to write in a programming language anymore; their native language is enough to define what they want to build.

Managers have a mixed bag of reasons

The last group is a mixed bag of all the others, and I think it’s the biggest group (or maybe I’m being lazy and not grouping further). Pick one or two properties from the other groups and you get this one.

It’s actually difficult to belong solely to any of the first five groups because humans are complex creatures.

For example, one day I’m a member of the fifth group, another day the fourth. It depends on what happened that day. If a new LLM arrives, I want to give it a try and share the announcement with others. If there is a new tool that interests me so I can finally build the pet project I’ve held hostage for years, I’m in the second group for a few days. I think this mixed-bag approach is completely okay, with one important caveat.

To an organisation, a manager has to be clear about where they stand; otherwise, people get confused. For example, my organisation has to know that I’m standing in group number four when I work. In my free time, I can be in any group I want. Additionally, if I change my position, that also has to be clear to others. Whether I’m expected to “code” with AI or doing it for fun, we can all align our expectations, adjust ourselves, and share fair feedback with each other.

So, which group is better than the others? I don’t know. I think that’s not the point. What matters to me is knowing where I stand today and figuring out where I want to be. For you, maybe you’re happy with where you are. If so, I’m happy for you.

When I step back from all this, I remember that I’ve always believed change is constant. As a manager, I had to learn the skill of adapting to change. In my experience, the biggest changes were reorganisations or product pivots. They were always painful, dreadful, and a lot of work. But I couldn’t imagine how AI would change everything.

What a time to live in.

I thought I’d seen a lot in my career. But actually, I’ve seen nothing, and the world is even more ambiguous than yesterday. The thing is, there’s no alternative but to deal with the world we have. And nobody knows how this change will shape the world. If anyone claims they know, they are lying (or trying to sell you stuff).

So, don’t take anyone’s word for it, including mine.