
Lessons from inheriting another team’s codebase

Have you ever found yourself with an entirely unknown codebase, a new team, and a schedule to take over an application from an offshore team? What do you do? Where do you start?

That's the situation I found myself in with a team at ImmobilienScout24, Germany's leading digital real estate marketplace. This is a summary of how my team approached this handover, to ultimately start maintaining and extending an existing application with a new team.

Where we started

Your circumstance will dictate the approach you take to transition a codebase between teams, but I’ll start by outlining the situation we encountered.
  • The original development team (at the time down to four developers) was in a remote location but in the same time zone
  • The new team was co-located at the main site. There were to be six developers eventually; just one to get started
  • There was a relatively low level of documentation in place (e.g., no README files)
  • The remote development team was under delivery pressure when the knowledge transfer started, so there was little-to-no availability for questions
  • The new team setup was volatile throughout the transition (people left, people joined)
  • In terms of size and complexity: the codebase at hand was a Scala and Play web application, with ~5,800 lines of JavaScript code and ~8,300 lines of Scala code, a comparatively small codebase. There was low test coverage in the frontend and decent coverage in the backend. The application integrated with three Amazon SQS queues, a MySQL database with seven tables, and four team-external, company-internal RESTful APIs. The code was about 1.5 years old at the time.

Timeline

Below is a timeline of the whole transition. This is mainly to give you an idea of how long a handover can take under circumstances like ours. It’s not a blueprint for all such transitions; after all, it’s not every day that we take over a codebase, especially in a planned way, as we were able to here. But it’s useful to see the timeline we ended up with in this situation.

Let’s dive into what we did at each stage of this takeover.

1. Assessment

Understanding the codebase

The team maintaining the code at the time was busy delivering for a deadline, so we were initially left to our own devices.

We started by exploring the “entry points” into the application and dug through the call hierarchy from there. The entry points, in this case, were the web controllers and message stream handlers. At first, we just tried to create an overview of what was actually happening, making notes and sketches of the call hierarchies for reference. Then we tried to see patterns among those hierarchies, based on the naming of the components and what they seemed to be doing.
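
To make “entry points” a bit more concrete: in a Play application like this one, the web controllers and the queue message handlers are typically the places where requests and events first reach the code. Below is a minimal sketch of what those two kinds of entry points can look like; the names are hypothetical and not taken from the actual application.

    import javax.inject.Inject
    import play.api.mvc.{AbstractController, ControllerComponents}

    // Entry point 1: a web controller. The conf/routes file maps URLs to actions
    // like this one, so routes plus controllers are a natural place to start reading.
    class ListingController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {
      def show(id: Long) = Action {
        // ...look up the listing and render it...
        Ok(s"Listing $id")
      }
    }

    // Entry point 2: a handler for messages arriving on a queue. Whatever polls the
    // queue hands each raw message body to a handler like this one.
    trait MessageStreamHandler {
      def handle(messageBody: String): Unit
    }

    class ListingUpdatedHandler extends MessageStreamHandler {
      override def handle(messageBody: String): Unit = {
        // ...parse the event and update the application's own data...
      }
    }

Tracing the calls downward from files like these is what produced the call-hierarchy notes and sketches mentioned above.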

Next, we collected questions for the existing team, mostly by putting up post-it notes on a flipchart in the team area. We also created basic documentation of our current understanding (README files, diagram sketches, etc.).

Then, we worked on one story that touched quite a few places in the code but mainly required changing existing things rather than building something completely new. That helped us get into the practical implications, as opposed to staying in theory by just reading the code.
 

Outcomes:

  • The high-level idea of entry points and flow of data
  • Systems overview (what other systems the application’s talking to, and which teams are responsible for them)
  • Database model
  • Understanding of the deployment process and the existing level of automation

Checkpoint

Starting early had given us a head start. We stopped after we had gained some understanding of the codebase and checked the timeline: there were nine weeks left of availability from the existing team. We asked ourselves, “What have we learned so far? What’s our impression of the code and setup? Will we need the full nine weeks to know enough to take over?” We felt we had a good first grasp of what was there, so we decided to take a break and get back into it approximately three weeks later.


Assessment: Lessons

  • Some blindness in the beginning can actually help: sometimes it’s good not to have somebody there who spoon-feeds you the details of what the system does. Digging into it by yourself for a time-boxed period might uncover questions the current team takes for granted.
  • Don’t worry about jumping back and forth between abstraction levels — there’s no linear way to do this.
  • Things that helped us — and will help future maintainers of your code:
    • A “README” file describing the minimum steps to get the application and tests to run
    • A script for a simple one-step local start of the application
    • Structure your code by functional modules instead of packages named “services,” “metrics,” or “repositories.” Not only is this good architectural practice anyway, it also helps a reader navigate the codebase and understand its functional scope.
    • A high-level overview of all the dependencies
    • Documentation of the contracts with other systems through your tests and test data (see the sketch after this list)
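
On that last point: one lightweight way to document a contract with another system is to keep a captured sample of its real responses in your test resources and assert on exactly the fields your application depends on. The sketch below shows what that can look like with ScalaTest and Play JSON; the model, parser, field names, and resource file are hypothetical, made up purely for illustration.

    import org.scalatest.funsuite.AnyFunSuite
    import play.api.libs.json.Json
    import scala.io.Source

    // Hypothetical model and parser, standing in for whatever maps the external
    // API's payload into the application's own types.
    final case class PartnerListing(id: String, price: BigDecimal)

    object PartnerApiParser {
      def parse(body: String): PartnerListing = {
        val json = Json.parse(body)
        PartnerListing(
          id = (json \ "id").as[String],
          price = (json \ "price").as[BigDecimal]
        )
      }
    }

    // The captured sample response lives under src/test/resources, so the contract
    // with the external API is documented by real example data checked into the repo.
    class PartnerApiContractTest extends AnyFunSuite {
      test("parses the fields we depend on from a captured partner-API response") {
        val sample = Source.fromResource("partner-api/listing-response.json").mkString
        val listing = PartnerApiParser.parse(sample)

        assert(listing.id.nonEmpty)
        assert(listing.price > 0)
      }
    }

A test like this only stays honest if the captured sample is refreshed now and then, so it works best combined with regularly re-recorded responses or consumer-driven contract tests.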

2. Meeting of the teams

When we finally met the current team face to face, we were able to check the understanding we’d gained so far against reality, using the created documentation and list of questions. We tried to uncover known and unknown unknowns by walking through the created sketches and the code.

We put a strong focus on the history of the codebase, as opposed to what’s currently there: why are things the way they are? History and context are a lot harder to deduce from the existing code, or from documentation of what the application looks like today. More often than not, decision rationales are not documented. So it’s important to talk about them while the people who witnessed that history are still available.
 

Outcomes:

  • Correction of the understanding gained so far, more documentation
  • Understanding of the trickier, non-obvious parts of the system
  • List of tech debt: what it is, how it came about, what its impact is, and whether there are ideas for how to remove it
  • Decision history, to empower the new team to challenge things in a more informed way
    • Example: "After we switched to storing the data in this other system, we wanted to clean up after ourselves and remove all traces of the old way. But the Product Owner deprioritized that and prioritized feature X instead."
  • List of constraints
    • Example: “We cannot use API X for this because it behaves in this particular way that doesn’t fit our use case. That's why we had to make this part of the code so complicated and store this extra data.”
    • Example: "Database changes on table X cannot be automatically deployed because the database is currently too large and migrations on that table take half an hour."

Meeting of the teams: Lessons

  • It’s worth reiterating that context, history and past decisions are things that cannot be derived from just reading the current state of the code. So context and history should be a big focus in this phase.
  • If a team already has one foot out of the codebase, they might not be so motivated to collaborate in the transition. Fortunately, this wasn’t the case here, but it is quite common. In spite of those possibly difficult dynamics, don’t be afraid to ask “stupid” questions or to get caught misunderstanding the system when you play back what you learned by yourself. “Stupid” questions are often a great way to uncover the most interesting parts of the code’s history.

3. Kickoff

After the initial preparations — where only two developers from the new team were involved — we organized a kickoff day to get everybody else on board.

Participants were the product owner, developers from the new team, and, in our case, a special guest from a team maintaining our most important dependency system. Having people from the existing team participate is, of course, highly recommended, but it was not an option in our case for logistical reasons.

The agenda included topics around the product, technology and forming the team.

We talked about the high-level vision and goals of the new application, the history from a product perspective, and KPIs used to measure its success. We also got a demo of the current features and had a look at the immediately upcoming backlog.

In terms of technology, we created an overview based on the previously created documentation, got an understanding of the most important dependency system, and walked through some of the code together.

Then, we talked about next steps and how to work together as a team.


Kickoff: Lessons

  • Find the right level of detail. It’s better to stay at a high level and leave some things out than to go into too much detail in a short amount of time. There will be time for details in the upcoming daily work.
  • Don’t expect participants to remember every single thing presented in the workshop. Find ways to repeat things in the upcoming weeks and connect them to the day-to-day work. For instance, have some visuals prepared for the workshop that can go up in the team space as recognizable “maps” afterward. They can be used to reiterate what was presented in the kickoff.

4. Parallel development

Finally, we had a period where the new team and the current team worked on features in parallel.

We agreed to keep the development practices and processes of the existing team in place for as long as we worked in parallel, to limit unnecessary friction and focus on a stable transition. For instance, the existing team preferred using feature branches. The new team eventually stopped doing that, but adopted the practice without question during the parallel development. We were also able to scratch the surface of some changes we wanted to make, which helped us understand the good reasons why these things hadn’t been done yet or had been done differently.

Previously, we’d mostly focused on the code and its functionality; this phase was a lot about actually releasing and deploying it. So we introduced “initiations” for the new team: activities to make sure that everyone became familiar with operating the application. For instance, we made sure the new team was responsible for at least one production deployment. We also had to solve any production issues that came up ourselves, with the existing team as a fallback in the background.

In the final week, we had another face-to-face meeting with both teams. This allowed us to surface any “unknown unknowns” that remained, review the larger chunks of tech debt one final time, answer any remaining questions, and check that everything (alerting, etc.) had been updated with the new team’s contact information.

Reusability of this approach

Which of the techniques mentioned here make sense for you depends on the situation and codebase in front of you. However, the methods described hopefully provide you with a list of options.

While I don’t believe there is an ultimate checklist to plan a handover, the following factors should be considered in your planning, as they will impact your timeline and approach:
 
  • Size of the codebase
  • Style of the architecture — an event-driven, highly asynchronous architecture will be a lot harder to untangle than a straightforward web application
  • Business criticality of the application — determines how much risk can be taken, how many unknowns are acceptable
  • Size and stability of the team(s)
  • Availability of the existing team to guide, explain, answer questions directly
  • Learning style and experience of the developers first analyzing the codebase
  • The capability of the first people on the ground to transfer their knowledge to the rest of the team

Final conclusions

In spite of a very volatile new-team setup, we had, all in all, enough time to explore things in parallel with the existing team, and the measures we took enabled a smooth transition. The new team did not cause any out-of-the-ordinary incidents after taking over.

Having to work by myself initially, to figure out the application, was a constraint that resulted from the lack of availability of the existing team members. In hindsight, this worked really well to uncover some of the trickier questions. However, I’ll be careful not to assume this is the best way to go next time, as pairing definitely would have made the first phase more efficient.

Overall, I would use a similar structure and approach again, but then adjust the timeline and actual content according to the application’s nature and its business criticality.

