Pattern Radar
February 28, 2026
In this post, I introduce the idea of the “Pattern Radar” as a means of incrementally improving a legacy code base. If you want to skip right to the practical part, go here.
Intro: working with legacy code
Have you ever had the “pleasure” of working on a code base that has been in development for 15+ years, has had various attempts at improvement perpetrated on it, has survived them all, and is now ready. For. A. New. Feature? To be implemented by you, who just joined the project?
If you haven't, let me give you a taste of what it's like. You start out thinking it can't be too hard; it's only a new attribute in a database entity and a bit of logic over here. Or is it over there? Wait, why are there 5 classes with almost identical names in 3 packages? OK, let's look at the call chain for this one to see where it is used. Invoke the Find Usages function in my favourite IDE, and … what's this? 30 call sites? Erm, the first one doesn't look plausible at all, it's actually calling a completely different service. WTF? Oh, I see, the services are implementing the same interface, and the IDE lists all uses of the interface. So I actually have to look at all of them to see which are the ones I want.
(Fast forward a bit.)
So, I have finally found the correct place to change. I think.
Is there a test for this class?
Yes, there is. It's an … integration test that connects to an external test database?
Hm, OK.
What does it do?
There are two test methods, one named editObject() and one named editObjectNegative().
What do they do?
They call editObject() on the service and assert various things.
OK, looks like the “negative” one is testing error scenarios and the other one is testing the happy path, but I'm not entirely sure, because some of the assertions seem only tangentially related to the logic under test. My team mates can't remember why they are there either. O. K.
How do they do test setup? Where, for example, is the data coming from? Search, search. Hm, seems to be magically there. A few conversations with veteran team mates later, I know that the test data is maintained in a giant SQL file in a different repository. Now, if I add the data I need there, do I break all the existing tests? No, good, I only broke 50 or so of them.
That's enough of that. This was just to introduce you to the environment that sparked the following idea. Because the thing is, the team working on the application is actually quite motivated. They want to improve the code base. They have already made some attempts at this in the past. For example, I found two different Hibernate entities representing the same class of data, one of which was much cleaner than the other one, which was marked as deprecated. But which one do you think is used by most of the code? Right.
For me as a newcomer, these partial improvements sometimes actually made things worse. Having seen two or three ways of solving the same problem in the (pretty large) code base, how do you decide which one to use when you encounter the same problem again? In the case of the entity class, it was at least clear what the plan was. In other cases, not so much.
So how to improve this?
For some improvements, we wrote backlog items and implemented them in one fell swoop, just to get some big blockers out of the way. For example, we had a mixture of JUnit 4 and JUnit 5 tests in one of the Java backend services, and I used OpenRewrite to convert all of them to JUnit 5 (which worked like a charm, by the way; OpenRewrite is a fantastic tool for this kind of work).
If you can do that, it's great. However, addressing all code quality issues in this fashion is unrealistic. We would be busy for half a year doing nothing but quality improvements – and step on each other's toes for the whole time because most of these improvements, if done in a big bang, touch large parts of the code base, so this would produce an endless stream of merge conflicts. We needed a way to improve the code gradually. This means we needed to ensure
1. that team members had (i.e., took) time to make improvements during their normal work, and
2. that the team agreed on what actually constituted an improvement.
As to 1., we had already instituted the Boy Scout Rule, “Leave the code better than you found it,” and it was kind of working. If we could just find a way to align on 2., we'd be in a good place. This is where the Pattern Radar comes in.
Introducing the Pattern Radar
The Pattern Radar is inspired by Thoughtworks' Technology Radar. It lists recurring patterns that we find (or would like to find) in our code base, and classifies them according to how we want to deal with them. Our classifications are as follows.
- TRIAL: We want to try this. It's not yet clear whether the pattern is suitable for general use.
  Expected behaviour: Present the trial to the team and decide together how to proceed. Don't apply directly outside of the trial; wait for the experience report. If it turns out not to suit us, we'll roll back the trial.
- ADOPT: Tried and tested. We want to apply this broadly.
  Expected behaviour: Use when appropriate. Prefer this to patterns in other categories except in special circumstances. Refactor occurrences of PURGE patterns into this when you come across them (Boy Scout Rule).
- PURGE: Was used in the past and may still be found in the code base (maybe even broadly), but should be replaced by a different pattern.
  Expected behaviour: Don't use in new code! If you come across it, replace it with an ADOPT pattern (Boy Scout Rule).
- KEEP: May be found in the code base in certain places. It's OK there, but it shouldn't be used more broadly.
  Expected behaviour: Don't use in new code! If you come across it, leave it alone!
- BURN: Doesn't occur in the code base, and we don't want to see it ever again.
  Expected behaviour: Don't use in new code! If spotted anywhere, replace immediately!
We have not strictly defined what counts as a pattern, except that patterns should be things that recur throughout our code base. Some examples are:
- Hibernate Typed Queries with JPQL: ADOPT
- Hibernate Criteria Queries: PURGE (replace with Typed Queries)
- Application logic in mapper classes: PURGE
- Composition over inheritance: ADOPT
- Testcontainers for DB: ADOPT
- Mockist tests (in which all dependencies are mocked): PURGE
- Reactive forms (in our Angular frontend): ADOPT
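To make the mockist-tests entry concrete: a mockist test stubs every collaborator, while the style we are moving toward exercises the real logic against a simple in-memory fake. Here is a minimal sketch with invented names (GreetingService, CustomerRepository — none of these are from our actual code base):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical collaborator that a mockist test would stub call-by-call.
interface CustomerRepository {
    String findName(int id);
}

// The logic under test, with its dependency injected.
class GreetingService {
    private final CustomerRepository repo;

    GreetingService(CustomerRepository repo) {
        this.repo = repo;
    }

    String greet(int id) {
        String name = repo.findName(id);
        return name == null ? "Hello, stranger!" : "Hello, " + name + "!";
    }
}

// Instead of mocking, a small hand-rolled fake lets the test exercise
// the real service logic without coupling to call sequences.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Integer, String> data = new HashMap<>();

    void add(int id, String name) {
        data.put(id, name);
    }

    @Override
    public String findName(int id) {
        return data.get(id);
    }
}
```

A test can then simply populate the fake and assert on the observable result, which tends to survive refactorings of the service's internals far better than a mock-based test would.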
We started by gathering these and other patterns in a table on a wiki page and discussing them one by one. We used the following six columns:
- Name: A short name for the pattern (as shown in the examples above)
- Classification: TRIAL, ADOPT, PURGE, KEEP, or BURN
- Last change: The date when we added or updated the pattern
- Description: Description of the pattern in one sentence
- Rationale: Brief outline of the reasoning behind the classification (1-3 sentences)
- Comments: Additional commentary, e.g., suggested replacements for PURGE patterns or reasons you might not want to use an ADOPT pattern
A full example might look like this:
| Name | Classification | Last change | Description | Rationale | Comments |
|---|---|---|---|---|---|
| Testcontainers for DB | ADOPT | 19.02.2026 | Use Testcontainers for integration tests that need a database. | We want our tests to be independent of shared infrastructure (pipeline reliability!). | |
| Composition over inheritance | ADOPT | 23.02.2026 | Reuse code via composition rather than via inheritance. | Composition tends to be more explicit and easier to understand. The code base relies rather heavily on template methods, but this has confused devs. | Template methods can still be used in moderation, but we prefer composition. |
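To illustrate the composition-over-inheritance row, here is a minimal, hypothetical sketch (the names are invented, not taken from our code base) of refactoring a template method into an injected collaborator:

```java
import java.util.List;

// Template-method style (what we'd PURGE): every variation needs a subclass.
abstract class TemplateReport {
    String render(List<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(format(line)).append('\n');
        }
        return sb.toString();
    }

    abstract String format(String line);
}

// Composition style (what we'd ADOPT): the varying step is a collaborator.
interface LineFormatter {
    String format(String line);
}

class Report {
    private final LineFormatter formatter;

    Report(LineFormatter formatter) {
        this.formatter = formatter;
    }

    String render(List<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(formatter.format(line)).append('\n');
        }
        return sb.toString();
    }
}
```

Because the varying step is a functional interface, a new variant is one lambda away (e.g., `new Report(line -> "* " + line)`) instead of one subclass away, and the dependency is visible in the constructor rather than hidden in the inheritance hierarchy.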
Benefits
We started this about two weeks ago and have already observed a few benefits while discussing the patterns that came up during our initial collection. In some cases, the discussion simply clarified why a pattern existed in the code base and helped us align on its proper and improper uses.
In other cases, someone described a pattern that irritated them and suggested purging it, and someone else went, “Oh yes, I've noticed that too, we should probably get rid of that,” allowing us to boy-scout them away with a clear conscience.
Once, we discussed a pattern that was very prevalent in the code base and someone suggested a replacement. What a relief! I had been propagating the pattern because consistency in a code base is valuable (sometimes more valuable than small, isolated improvements) and had been unaware that this had been discussed in the past, and the team had already decided to replace it.
Writing down a brief rationale forced us to articulate why exactly a given pattern was to be adopted or to be purged, providing important context, especially to newer team members.
And the discussion of one or two patterns highlighted deep-seated differences in opinion among team members. We still have to resolve these, and I believe this won't be easy, but having put them on the table at least gives us the chance to have a meaningful discussion instead of quietly continuing to work at cross-purposes.
Why not ADRs?
The Pattern Radar captures our decisions about how to evolve our code base. Does this sound similar to Architecture Decision Records? In a way, it is. Actually, we use ADRs, too, but those are more focused on the high-level structures of our system; you know, the things that are hard to change and for the most part only occur once. If we want to change the service decomposition, for example, we will initiate an ADR, discuss options in the team, decide together, finalize the ADR, and then implement it, probably in an incremental fashion, but still in a quite straightforward manner.
In contrast, I envision the Pattern Radar as a lightweight, evolving catalog of the recurring lower-level structures in our code base. Phasing out Hibernate Criteria Queries does not feel ADR-worthy. It is a smaller decision that can be implemented in many small, local steps as we go along. The Pattern Radar gives us a place to discuss and record such decisions, and to review them regularly, without introducing much overhead.
Getting started
Setting up a Pattern Radar is very simple. You don't need more than a place where you can write down a simple list or table and share it with your team. This can be a wiki page, a Google doc, a virtual whiteboard, a Markdown file in your source code repository, or wherever your team likes to record things.
Begin by writing down patterns you have noticed in your code base, and ask your team mates to do the same. Then get together and talk about the problems these patterns solve or create, and agree on a classification. That's all you need in the beginning. You can always add more patterns and details or visualizations later.
To really bring the Pattern Radar to life, however, you also need to adopt the Boy Scout Rule, i.e., agree to make little changes in line with the Pattern Radar as you go along.
What's next?
Currently, our Pattern Radar consists of about two dozen patterns, half of which we have already classified. The format is a simple table as described above. We have not yet created a visualization (the classical radar picture with rings). I'm not sure we will ever do that. It looks nice for the Tech Radar, but as long as we are going to use the Pattern Radar only internally in our team, I don't think it would add much value.
We have not yet defined quadrants because we don't have that many patterns. We will probably do that once we reach a number of patterns where it makes sense to add a bit more structure. Looking at the patterns we have already collected, I can see how quadrants for “backend code”, “frontend code”, and “testing” might make sense. Maybe we'll call them “sectors” or something if we don't get exactly four.
Although we have defined TRIAL, KEEP, and BURN as classifications, we have not used them yet. I imagine TRIAL becoming useful as time goes on and we decide to try new patterns. And as patterns we classified as PURGE slowly vanish from our code base, we might reclassify them as KEEP for patterns that make sense to keep for a limited scope, or as BURN for patterns we have successfully eliminated. I think it will be useful to keep those on the radar lest someone try to introduce them again, but make them easy to filter out so they don't clutter up our table too much.
We will see how it works out. If anything interesting comes of it, I will write a follow-up post.
If you decide to try it out, I'd like to hear about your experience, so please drop me a line!