It’s A Joke

A short business update

It is with much sadness that the leadership here at Jordan Terry’s wardrobe has decided to part ways with a number of garments in the team. Some members of the team leaving us today have been with us for well over 10 years.

We wish them the best going forward, and we hope they can find other wardrobes that will value their membership as much as we have.

We see this as an opportunity to grow the team in new directions. With the onset of autumn we are seeking to partner with new businesses to fill the gaps; negotiations will begin this weekend. We look forward to updating you all in the future.

We have set up a portal for those who have been let go; please visit

Software Engineering

Merging multiple files into one with Kotlin

Kotlin lets us write top-level functions, which enables us to write code that isn’t necessarily constrained to the concept of classes. It frees us from “util” classes of static methods (though it doesn’t free us from dumping methods or functions in one place).

Under the hood, Kotlin is still constrained to classes: the compiler must generate bytecode that will run on the JVM (multiplatform is another story). To do this, it must put your functions into a class. It takes your file name and creates a class from it. Functions in StringExtensions.kt will be placed in a class named StringExtensionsKt.

You may write a set of extensions on the Fragment type that are responsible for aiding the retrieval of arguments:

// FragmentArgumentExtensions.kt
fun Fragment.requireStringArgument(name: String): String {
    return arguments?.getString(name)
        ?: throw IllegalArgumentException("No argument named $name")
}

The Kotlin compiler translates this into bytecode that, decompiled to Java, roughly looks like this:

public final class FragmentArgumentExtensionsKt {
    public static String requireStringArgument(@NonNull Fragment fragment, String name) {
        // Implementation
    }
}

You may also have another file containing extensions to help you create a ViewBinding for this Fragment:

// FragmentViewBindingExtensions.kt
fun <T : ViewBinding> Fragment.viewBinding(factory: (View) -> T): T {
    // Implementation
}

This Kotlin would then be compiled into a class named FragmentViewBindingExtensionsKt.

This all makes sense: we’ve kept our logically different extension functions in separate files. But sometimes we might want to combine our extensions into a single class:

  • If we had Java consumers of our extensions, we might want to present the extensions in a single class named FragmentExtensionsKt.
  • Splitting our functions apart internally may not always be the best for a public API.
  • We could be working in an environment that requires us to keep our class or method count as low as possible, e.g. two classes create two constructor methods, while one class creates only one.

Kotlin provides a couple of handy annotations to support this functionality, @JvmMultifileClass and @JvmName.


@JvmName

This annotation tells the compiler what to call the class your file will be mapped into. This is useful if you want your API to look nice for Java users, or if you want to provide some API compatibility across a Java-to-Kotlin conversion.


@JvmMultifileClass

This annotation tells the compiler that this file will be contributing to a class that other files may also be contributing to.

When used, they should have the file: qualifier and be the first lines of code in your file, above the package declaration.


When added to our two files above, the Kotlin compiler will produce a single class under the hood.
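As a sketch of what that looks like in practice (the package name here is an assumption, and FragmentExtensionsKt matches the Java-friendly name suggested earlier), each file carries the same pair of annotations:

```kotlin
// FragmentArgumentExtensions.kt
@file:JvmMultifileClass
@file:JvmName("FragmentExtensionsKt")

package com.example.fragments // assumed package

import androidx.fragment.app.Fragment

fun Fragment.requireStringArgument(name: String): String =
    arguments?.getString(name)
        ?: throw IllegalArgumentException("No argument named $name")
```

```kotlin
// FragmentViewBindingExtensions.kt
@file:JvmMultifileClass
@file:JvmName("FragmentExtensionsKt")

package com.example.fragments // assumed package

import android.view.View
import androidx.fragment.app.Fragment
import androidx.viewbinding.ViewBinding

// A minimal implementation for illustration only.
fun <T : ViewBinding> Fragment.viewBinding(factory: (View) -> T): T =
    factory(requireView())
```

Java callers now see both functions on one facade class: FragmentExtensionsKt.requireStringArgument(fragment, "id").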

Software Engineering

Experimenting in a legacy code base

I work on what could be called a “legacy code base”. We’ve just crossed the 10-year anniversary of the first commit. Between then and now, over 40 developers have contributed. Many features have come and gone, the platform we develop for has changed beyond recognition, and so have our ways of writing code.

For these reasons, we have a vibrant, frustrating, yet interesting code base. Over the past three or so years we have systematically refactored and improved it, but we have a lot further to go. We’re now in a place where we can start to think about adopting new technologies to modernise our code base.

Using the newest technologies available to us has a lot of benefits; the biggest for me is that developers get the satisfaction of using the newest and greatest tools.

But before we can adopt new technologies, we have to make sure that developers have a shared understanding of how to use them and what we want to achieve by adopting them.

The best way to do this is to experiment.

When we experiment with code we learn new ways to write code, and more often than not, we learn why our previous ways of writing code aren’t as good as we thought!

Legacy code bases come with a cost. You are surrounded with code full of history and reasons why you just can’t change it, and often there aren’t a lot of tests to make you feel safe! To make matters worse, you can’t just add a new technology for the sake of it – that’s the kind of thing that gives you a legacy code base in the first place. This combination makes experimenting a tricky thing in legacy code bases.

So how do we experiment in a legacy code base? Here are some ideas that I have been trying to adopt over the last six months:

Create disposable or small applications to demo your ideas. You aren’t constrained by your legacy code base and you can move quicker. Don’t forget that you will ultimately be integrating into a larger code base. If you can reuse these applications, even better: keep them in a separate repository so you can use code reviews to explain your experiments to colleagues.

Create a “bleeding edge” application (better name pending): an app that can be used to incubate new technology before folding it into your main code base. Think Google Inbox and Gmail. If you can roll these changes out to users, you get a better idea of what works and what doesn’t.

Design your code base with small modules and strong boundaries defined by abstract types. You can then peacefully change the implementation of one module without impacting the rest of your application.

Each of the above options is just a different way of saying that you need to find somewhere, outside or inside of your code base, that allows you to make changes without the repercussions being felt elsewhere.

When you’ve finished experimenting, you will know how best to propagate your changes safely into the rest of the code base without creating more legacy code.

Software Engineering

To abstract or not to abstract

The longer I’ve written software, the more I debate with myself about whether or not I should add an abstraction.

Let us define an abstraction: it could be an interface, a trait, a protocol, or an abstract class. It is a structure that defines how a piece of code should interact with the outside world, but not how that interaction is handled.

Abstractions are a powerful tool, but they should be used appropriately. They are powerful at the boundaries of your code but introduce too much indirection when used overzealously.

A good abstraction lets a developer switch out an implementation without any effort. A bad abstraction finds a developer clicking through many files, trying to hold a lot of information in their head.

I have a few rules of thumb that I try to follow:

  1. An abstraction is useful when there is a genuine reason to swap out the implementation.
  2. If you are at the boundary of a key separation of concerns, use an interface to define that boundary.
  3. If you are designing a library, use an abstraction to define a public API that can be added to or sensibly deprecated.
  4. If you know your class isn’t going to be swapped out, don’t use an abstraction.

These are not a definitive set of rules but I find they rein me in from creating an abstraction for everything under the sun!
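As a minimal sketch of rules 1 and 2 (the names here are hypothetical, invented purely for illustration), an interface at a storage boundary might look like:

```kotlin
// The abstraction: defines how callers interact, not how the
// interaction is handled.
interface ArticleStore {
    fun save(id: String, body: String)
    fun load(id: String): String?
}

// One implementation behind the boundary. It can be swapped for a
// database-backed ArticleStore without any caller noticing.
class InMemoryArticleStore : ArticleStore {
    private val articles = mutableMapOf<String, String>()

    override fun save(id: String, body: String) {
        articles[id] = body
    }

    override fun load(id: String): String? = articles[id]
}
```

Code that depends only on ArticleStore never has to click through to the implementation, which is what a good abstraction should feel like.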


Kona 2022

About three years ago I signed up for a triathlon. Five months after that I managed to drag my way around it and kind of enjoyed it. Little did I know how much of an impact this would have on my life!

After that first race (super sprint distance) I discovered there was a huge world of triathlon out there. One of the first things I stumbled upon was the Ironman World Championships in Kona.

For those who don’t know, the Ironman World Championship is held every October in Kailua-Kona, Hawaii. Some of the best triathletes of the year gather on the island to race each other over a 3.9km swim, a 180km bike ride and a 42.2km run.

What a bunch of nutters.

Anyway, I’d like to go there and do that in 2022. You can’t just rock up to a world championship event; you have to qualify for it. To qualify, you have to be one of the best athletes at an Ironman event, like top-5-in-your-age-group good.

I’m nowhere near the shape I need to be in to hit this goal right now. So I need to really apply myself for two years, and hopefully I will find myself swimming with turtles in 2022. I’ve been telling people this for quite a while now, but it’s time to turn my threats of getting good at sports into some form of reality.

To stand a chance of seeing those turtles, 2021 will have to be a big year for me, and I have set myself a few goals. I need to train consistently: on 31st December 2021 I want to look back at my TrainingPeaks account and see 52 weeks of perfectly completed sessions. Of course, illness and life may get in the way, so I want to perfectly complete more than 95% of the training sessions I am given.

If we are lucky enough to race, I have two A-races: Ironman 70.3 Marbella, which I want to complete in less than 5 hours, and Ironman Switzerland, which I want to complete in less than 11 hours.

I’m also going to keep a training diary on this blog. Reflection is one of the best ways to improve yourself and a training diary helps you to really understand how you are feeling. Why not share it with the one or two people who might read this website?

Those are some pretty hefty goals, but if I train, eat and sleep well I see absolutely no reason why I shouldn’t hit them.

My new season of training kicks off on 26th October. Wish me luck.


An FTP test retrospective

Yesterday, I did an FTP test – a functional threshold power test – on my bike. This is a test to find out how hard you can cycle for one hour. I failed a test back in May and I’ve been nervous about this one all week.

The test went pretty well and I’ve managed to up my FTP from 266w to 275w, which I am pretty happy with. But, if I’ve learnt anything whilst working in an agile software engineering world it’s that a good retrospective can’t go amiss. So here’s a brief look at what went down.

An FTP test can be done in one of two ways: an all-out 20-minute effort, or two all-out 8-minute efforts up a climb. I did the latter at a climb on Portsdown Hill just north of Fareham, covering ~3.6km and climbing ~100m in each effort.

Portsdown Hill climb

I thought I’d done a good job with planning, but I didn’t and I’ll explain why now. 

My first mistake was choosing a climb with a descent and a flat piece of road mixed in. Yes, it makes you faster, but it makes producing high power a lot harder and more inconsistent than on the rest of the climb. The brief relief of a descent isn’t helpful either; during a 100% effort climb, it is cruel to your mind and legs.

Those pesky flats and descents

I also didn’t pay close attention to the roads I would be using. Below you can see I join a main road and leave it, all in the space of 30 seconds, crossing oncoming traffic.

Also, making a left turn onto a busy and fast main road:

There are two reasons why those route choices are a bad idea. First, and always first, is safety. Whether flying up a climb or descending onto a main road, it really isn’t smart to take the risk; I or a driver might not be paying full attention! I was fortunate it was a quiet day, so I was able to keep an eye on any passing traffic and take the roads sensibly.

The other reason is that keeping safe, or stopping (which you should be doing), has an impact on your 8-minute effort. Take a look at these two graphs showing a dip in power output during the two efforts.

That is roughly 20 seconds where I’m not applying power, freewheeling or braking whilst I turn onto a main road. In a test of my ability to put power down consistently over a period of time, that really isn’t helpful. My consistent-ish power beforehand is ruined, and then there is a power spike as I felt I needed to make up the lost power.

All of this slowing down and letting your power drop to around 100 watts makes returning to 300 watts absolutely horrible. Not that it was nice before, but you really don’t want to give your mind or body a taste of relaxation during an effort like that.

Even though I made my evening slightly dangerous and hard, I managed to score a watts-per-kilogram (W/kg) value that rates me as good/very good at pushing the pedals on my bike.

That value impacts my training and how hard I’ll be cycling over the next few months. A better route could have added another 5 watts to my FTP, meaning my training over the next few months could have been harder than it will be now, which could mean I would improve faster! That’s all pretty important as I’m trying to get better at triathlon and crack an age group top-25 of an Ironman event next year. 

Not that my result is bad, but it shows that some not-so-smart route planning during an FTP test can have an impact further down the line!


No human is limited

Photo by Peter Okwara on Unsplash

Eliud Kipchoge, a few days before he ran a sub-two-hour marathon:

Pressure is everywhere in this world, if you’re a human being. I’m trying to stay as calm as possible. It’s about telling people there is one who sets the limits. It’s only in their minds; it’s not something tangible, it’s just happening in their thoughts. I am just trying to remove that click in their minds that no human is limited.

Eliud Kipchoge, Vienna, 2019

Software Engineering

Some thoughts on testing

Photo by Oğuzhan Akdoğan on Unsplash

I associate a number of things with writing test code.

The first is finding peace of mind. In years gone by I have written some dodgy code that has gone to production, and I still think about some of it to this day. I still write dodgy code, but I’m now able to stop it from going to production with a superpower I have gained. That superpower is writing tests for my code, which mostly stops that code from being released (Crashlytics will sometimes disagree). A good set of tests should be enough to give me confidence that what I have written actually works.

Testing is the quickest way to validate your code. As an app developer, running a suite of tests from your IDE in a matter of seconds is far quicker than navigating to the relevant screen in your app and performing a sequence of actions, only to find out your code doesn’t work!

The code you write to test code is a good indicator of the complexity of the code under test; large test functions, repeated test code or long lists of dependencies all indicate that the code you are testing is a bit complex. I try to let this guide me when I am writing code.

I’m not going to tell you I know how to write proper tests, because I don’t and have a long way to go before I start to write good tests. However, over the past few years I’ve started to pick up a few things and form some – hopefully useful – opinions. 

Concise test naming

Don’t be too descriptive; get to the point quickly, and make sure the name matches what is in the body of the test. You or a colleague will need to review this code or refer back to these tests in the future. Make your tests easy to understand now and avoid regret later.

I like to imagine that non-technical colleagues might want to read a report on test coverage and then share it with other teams. If you think your tests would be easy for them to understand, you are going in the right direction. If you aren’t sure, why not ask someone else?

I think testing code with a small public API helps keep your test names concise. The more API you have to test, the more words you need to describe what you are testing. If you really can’t avoid a large public API, split your tests into a number of different files, each focusing on a particular method or function of that API, to help reduce potential confusion.

Consistent structure for tests

If you are working on code in the same project, you will want to see consistency in the code that is written. If every test has a familiar structure, you or a colleague won’t have to spend time getting up to speed with the general shape of the code; you can just get on with the testing.

I think this fits nicely alongside the idea that your test code will inform you of the complexity of the code you are testing; if your tests are consistently different or hard to understand you should probably change the code you have written. 

White box or black box?

For the longest time I was an advocate of white box testing; I wanted to know that my tests rigorously exercised the internals of the code I had written, which was great for my peace of mind. However, changing the tiniest implementation detail would cause a butterfly effect of failing tests through the entire test code base. That is alarming and stressful for whoever is making the change – not a good developer experience! It has led me to become a fan of black box testing.

I do still think white box testing is helpful; it is a great way to understand difficult code. Writing tests that verify the behaviour of different parts of that code can help you understand what is happening. I like to think of it as writing notes – “this function does this… and causes this to happen…” – and the best thing is that those notes will tell you whether you are right or wrong as soon as you execute them!

Nowadays, I like to just test an output for a given set of inputs. It is a much nicer developer experience and your test code is less intimidating to look at. I also think it has helped to inform how I write code: I try to ensure any function returns something that can easily be used in a test, and that there are no hidden side effects.
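As a hypothetical example of that style (JUnit-flavoured, with invented names), the test only cares about the input and the returned output:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// A hypothetical class under test: pure input -> output, no side effects.
class PriceFormatter {
    fun format(pence: Int): String = "£%d.%02d".format(pence / 100, pence % 100)
}

class PriceFormatterTest {

    @Test
    fun `formats pence as pounds`() {
        val formatter = PriceFormatter()

        // Black box: one input in, one output checked. No poking at
        // internals, so refactoring the implementation won't break this.
        assertEquals("£12.34", formatter.format(pence = 1234))
    }
}
```

If the formatting logic is rewritten tomorrow, this test only fails when the observable behaviour actually changes.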

The three points above are things I regularly think of when I write tests. They certainly aren’t a recipe for cooking up the perfect tests but they do help me write better tests bit by bit. 

Software Engineering

Slice, don’t Splice

Photo by Juja Han on Unsplash

This weekend I’ve spent some time working on a side project written in TypeScript. I’ve never used it before, so I’ve spent a lot of time referring to documentation and learning a lot. One thing stood out.

I had an array of data and wanted to create a sub-list of its elements, starting from an index, i, up to some end index. You can do this by calling array.slice:

“Extracts a section of the array and returns the new array”

TypeScript, or the underlying JavaScript, also has a function that adds or removes elements from an array. This is called array.splice:

“Adds or removes elements from the array”
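A quick sketch of the difference (plain TypeScript, nothing project-specific):

```typescript
const original = [0, 1, 2, 3, 4];

// slice copies a section and leaves the source array untouched.
const copied = original.slice(1, 3);
console.log(copied);   // [1, 2]
console.log(original); // [0, 1, 2, 3, 4]

// splice removes elements *in place*, mutating the source array.
const spliced = [0, 1, 2, 3, 4];
const removed = spliced.splice(1, 3);
console.log(removed);  // [1, 2, 3]
console.log(spliced);  // [0, 4]
```

Same starting data and a one-letter difference in the call, yet one of the two arrays no longer exists in its original form.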

I suppose I don’t need to go into detail about what caused me to write this blog post, but I have some lessons:

  • Pay close attention to the functions you are writing or selecting from autocomplete.
  • Test every single piece of code you change, even if you think you are making a small change.
  • Unit testing isn’t always enough to catch issues, especially when your unit is mutating data being passed into it.

I’d also like to call out the concept of immutability. This would have saved a stupid developer from a stupid mistake.

Lamenting the state of software

Why are we bad at Software Engineering by Jake Voytko

My favourite quote from this article.

We’re decent at building software when the consequences of failure are unimportant.

Jake Voytko