Kona 2022

About three years ago I signed up for a triathlon. Five months after that I managed to drag my way around that triathlon and kind of enjoy it. Little did I know how much of an impact this would have on my life!

After that first race (super sprint distance) I discovered there was a huge world of triathlon out there. One of the first things I stumbled upon was the Ironman World Championships in Kona.

For those who don’t know, the Ironman World Championships are held every October in Kailua-Kona, Hawaii. Some of the best triathletes of the year gather on the island to race each other over a 3.9km swim, 180km bike ride and 42.2km run.

What a bunch of nutters.

Anyway, I’d like to go there and do that in 2022. You can’t just rock up to a world championship event; you have to qualify for it. And to qualify you have to be one of the best athletes at an Ironman event, like top-5 in your age group good.

I’m nowhere near the shape I need to be to hit this goal right now. So I need to really apply myself for two years and hopefully I will find myself swimming with turtles in 2022. I’ve been telling people this for quite a while, but now it’s time to turn my threats of getting good at sports into some form of reality.

To stand a chance of seeing those turtles, 2021 will have to be a big year for me, so I have set myself a few goals. I need to train consistently: on 31st December 2021 I want to look back on my Training Peaks and see 52 weeks of perfectly completed sessions. Of course, illness and life may get in the way, so realistically I want to perfectly complete more than 95% of the training sessions I am given.

If we are lucky enough to race, I have two A-races: Ironman 70.3 Marbella, which I want to complete in less than 5 hours, and Ironman Switzerland, which I want to complete in less than 11 hours.

I’m also going to keep a training diary on this blog. Reflection is one of the best ways to improve yourself and a training diary helps you to really understand how you are feeling. Why not share it with the one or two people who might read this website?

Those are some pretty hefty goals, but if I train, eat and sleep well I see absolutely no reason why I shouldn’t hit them.

My new season of training kicks off on the 26th October… wish me luck.

An FTP test retrospective

Yesterday I did an FTP (functional threshold power) test on my bike. This is a test to find out how hard you can cycle for one hour. I failed a test back in May, so I’ve been nervous about this one all week.

The test went pretty well and I managed to up my FTP from 266w to 275w, which I am pretty happy with. But if I’ve learnt anything whilst working in the agile software engineering world, it’s that a good retrospective never goes amiss. So here’s a brief look at what went down.

An FTP test can be done in two ways: a single all-out 20-minute effort, or two all-out 8-minute efforts up a climb. I did the latter on a climb at Portsdown Hill, just north of Fareham, covering ~3.6km and climbing ~100m in each effort.

Portsdown Hill climb

I thought I’d done a good job with planning, but I didn’t and I’ll explain why now. 

My first mistake was choosing a climb with a descent and a flat piece of road mixed in. Yes, they make you faster, but they make producing high power a lot harder and less consistent than on the rest of the climb. The brief relief of a descent isn’t helpful either; during a 100% effort it is cruel to your mind and legs.

Those pesky flats and descents

I also didn’t pay close attention to the roads I would be using. Below you can see I go onto a main road and leave it all in the space of 30 seconds, crossing oncoming traffic.

Also, making a left turn onto a busy and fast main road:

There are two reasons why those route choices are a bad idea. First, and always first, is safety. Whether flying up a climb or descending onto a main road, it really isn’t smart to take the risk; I or a driver might not be paying full attention! I was fortunate it was a quiet day, so I was able to keep an eye on any passing traffic and take the roads sensibly.

The other reason is that keeping safe or stopping (which you should be doing) has an impact on your 8-minute effort. Take a look at these two graphs showing a dip in power output during the two efforts.

That is roughly 20 seconds where I’m not applying power, freewheeling or braking whilst I turn onto a main road. In a test of my ability to put power down consistently over a time period, it really isn’t helpful. My consistent-ish power before this point is ruined, followed by a power spike as I felt I needed to make up for the lost power.

All of this slowing down and letting your power drop to around 100 watts makes returning to 300 watts absolutely horrible. Not that it was nice before, but you really don’t want to give your mind or body a taste of relaxation during an effort like that.

Even though I made my evening slightly dangerous and hard, I managed to score a watts per kilogram (W/kg) value that rates me as good/very good at pushing the pedals on my bike.

That value informs my training and how hard I’ll be cycling over the next few months. A better route could have added another 5 watts to my FTP, meaning my training over the next few months could have been harder than it will be now, which could mean I would improve faster! That’s all pretty important as I’m trying to get better at triathlon and crack an age group top-25 at an Ironman event next year.

Not that my result is bad, but this shows that some not so smart route planning during an FTP test can have an impact further down the line!

No human is limited

Photo by Peter Okwara on Unsplash

Eliud Kipchoge, a few days before he ran a sub-two-hour marathon:

Pressure is everywhere in this world, if you’re a human being. I’m trying to stay as calm as possible. It’s about telling people there is one who sets the limits. It’s only in their minds; it’s not something tangible, it’s just happening in their thoughts. I am just trying to remove that click in their minds that no human is limited.

Eliud Kipchoge, Vienna, 2019

Some thoughts on testing

Photo by Oğuzhan Akdoğan on Unsplash

I associate a number of things with writing test code.

The first is finding peace of mind. In years gone by I have written some dodgy code that has gone to production; I still think about some of that code to this day. I still write dodgy code, but now I’m able to stop it from going to production with a superpower I have gained: writing tests for my code (Crashlytics will sometimes disagree). A good set of tests should be enough to give me confidence that what I have written actually works.

Testing is the quickest way to validate your code. As an app developer, running a suite of tests from your IDE in a matter of seconds is far quicker than navigating to the relevant screen in your app and then performing a sequence of actions to find out your code doesn’t work!

The code you write to test code is a good indicator of the complexity of the code under test; large test functions, repeated test code or long lists of dependencies all indicate that the code you are testing is a bit complex. I try to let this guide me when I am writing code.

I’m not going to tell you I know how to write proper tests, because I don’t and have a long way to go before I start to write good tests. However, over the past few years I’ve started to pick up a few things and form some – hopefully useful – opinions. 

Concise test naming

Don’t be too descriptive, get to the point quickly, and make sure the name matches what is in the body of the test. You or a colleague will need to review this code or refer back to the tests in the future. Make your tests easy to understand now and avoid regret later.

I like to imagine non-technical colleagues might want to read a report on test coverage and then share it with other teams. If you think your tests are easy to understand, you are going in the right direction. If you aren’t sure, why not ask someone else?

I think testing code with a small public API helps keep your test names concise. The more API you have to test, the more words you need to describe what you are testing. If you really can’t avoid a large public API, split your tests into a number of different files, each focusing on a particular method or function of that API, to help reduce potential confusion.

Consistent structure for tests

If you are working on code in the same project you will want to see consistency in the code that is written. If every test has a familiar structure, you or a colleague won’t have to spend time getting up to speed with the general shape of the code; you can just get on with the testing.

I think this fits nicely alongside the idea that your test code will inform you of the complexity of the code you are testing; if your tests are consistently different or hard to understand you should probably change the code you have written. 

White box or black box?

For the longest time I was an advocate for white box testing; I wanted to know that my tests rigorously exercised the internals of the code I had written, which was great for my peace of mind. However, changing the tiniest implementation detail would cause a butterfly effect of failing tests through the entire test code base. This is alarming and stressful for whoever is making a change – not a good developer experience! It has led me to become a fan of black box testing.

I do still think white box testing is helpful; it is a great way to get to grips with difficult code. Writing tests that verify the behaviour of different parts of that code can help you understand what is happening. I like to think of it as writing notes – “this function does this… and causes this to happen…” – and the best thing is those notes will tell you if you are right or wrong as soon as you execute them!

Nowadays, I like to just test an output for a given set of inputs. It is a much nicer developer experience and your test code is less intimidating to look at. I also think it has helped to inform how I write code: I try to ensure any function returns something that can easily be used in a test, with no hidden side effects.

The three points above are things I regularly think of when I write tests. They certainly aren’t a recipe for cooking up the perfect tests but they do help me write better tests bit by bit. 

Slice don’t Splice

Photo by Juja Han on Unsplash

This weekend I’ve spent some time working on a side project written in TypeScript. I’ve never used it before, so I’ve spent a lot of time referring to documentation and learning a lot. One thing stood out.

I had an array of data from which I wanted to create a sub-list of elements, starting from an index, i, up to an end index. You can do this by calling array.slice:

“Extracts a section of the array and returns the new array”


TypeScript, or rather the underlying JavaScript, also has a function that adds or removes elements from an array. This is called array.splice:

“Adds or removes elements from the array”
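To make the trap concrete, here’s a minimal sketch (the data is made up). The return values look identical; the difference is what happens to the original array – and note the second argument is an end index for slice but a delete count for splice:

```typescript
const prices = [10, 20, 30, 40, 50];

// slice copies: returns elements from index 1 up to (not including) index 3,
// leaving the original array untouched.
const sliced = prices.slice(1, 3); // [20, 30]

const prices2 = [10, 20, 30, 40, 50];

// splice mutates: removes 2 elements starting at index 1 from the array itself.
const spliced = prices2.splice(1, 2); // also [20, 30]...

// ...but prices2 has now been changed to [10, 40, 50], whilst prices is intact.
```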


I suppose I don’t need to go into detail about what caused me to write this blog post, but I have some lessons:

  • Pay close attention to the functions you are writing or selecting from autocomplete.
  • Test every single piece of code you change, even if you think you are making a small change
  • Unit testing isn’t always enough to catch issues, especially when your unit is manipulating data being passed into it

I’d also like to call out the concept of immutability. This would have saved a stupid developer from a stupid mistake.
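As a sketch of that idea: TypeScript can enforce immutability at compile time with ReadonlyArray (which simply has no splice method), and JavaScript can enforce it at runtime with Object.freeze.

```typescript
// Runtime immutability: a frozen array cannot be mutated.
const readings: readonly number[] = Object.freeze([10, 20, 30, 40, 50]);

// slice is fine - it never touches the original.
const subset = readings.slice(1, 3); // [20, 30]

// splice isn't even on the ReadonlyArray type, so this line would not compile:
// readings.splice(1, 2);

// And even if you cast your way around the compiler,
// freeze catches the mutation at runtime.
let threw = false;
try {
  (readings as number[]).splice(1, 2);
} catch {
  threw = true; // mutating a frozen array throws a TypeError
}
```

Either layer would have turned my silent data corruption into an immediate, obvious error.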

Just be decent

Everyone has regrets or moments they wish they could redo. My big redo moment came sometime in the first two months of 2019.

To set the scene: I sat and worked alongside a contractor on our team for two months and we clashed. We had contrarian views on everything we talked about; I didn’t believe that was possible, but it is. We are both software engineers who write code. Our differences would often come to a head in my code reviews: I’d disagree with an approach taken and it would often result in long and very exhausting discussions.

One day it came to a head over a disagreement on some code, code so insignificant I can’t recall what it was. Whether it was the fatigue of long winding conversations to nowhere or just pent-up frustration, I snapped, saying something along the lines of “you aren’t writing good code, we are having to spend time correcting mistakes”.

I said this in an open office environment, surrounded by colleagues in our team. I apologised shortly after; even if the criticism I had levelled was founded in truth, no one deserves to be belittled or spoken to the way I did.

That one moment of overflowing anger has been on my mind for almost 18 months now. I constantly replayed the few minutes of disagreement in my head, and over time my replays have transitioned from the words to just how I felt.

I mostly felt embarrassment and guilt.

Embarrassment, because I think of myself as a calm person who doesn’t yell or become frustrated at other people, and it embarrassed me that I let it happen.

Guilt, because it isn’t right to try to make someone feel bad for doing their job – a job they were genuinely trying at. It isn’t an acceptable way to treat someone.

Everyone should know that treating another person poorly is one of the worst traits a human can exhibit. It isn’t hard to show decency to everyone you interact with. It makes me cringe to see people do the opposite, and knowing I once did makes me cringe even more.

This small incident took up a lot of headspace for a while, but I’ve since used it to inform how I act towards other people.

If I ever find myself in a difficult relationship, I’ll always take a minute before replying; try to understand how we got into this situation and find an amicable way out of it. A sprinkle of humour helps too.

Some people will say that yelling and aggression toward other people is a way of asserting dominance. I disagree; it is embarrassing to witness or to do. I think every interaction with another person forms a small part of how that person will remember you going forward. So, just be decent.

Some thoughts on use cases in Kotlin

Originally published here on Medium: https://medium.com/@jordanfterry/some-thoughts-on-use-cases-in-kotlin-6ac8021cbcf1

Recently at the Guardian we’ve started to apply the use case pattern to our business logic. At a high level, a use case is a class that encapsulates a particular piece of business logic, or behaviour, in your app. You may know this as the interactor pattern, as advocated by Robert Martin in Clean Architecture. Use cases are easy to interpret and test, which in turn increases both developer productivity and confidence in the quality of a team’s code.

I like to call our use cases “functional use cases”. Why? Well, we make use of operator overloading to implement the invoke operator, making execution look like a function call. Pretty simple really! Here is an example of how this might look:

class FunctionalUseCase {
    operator fun invoke() {
        // Do something snazzy here.
    }
}
And when we invoke it:

val useCase = FunctionalUseCase()
useCase() // Not a plain function call - this calls the overloaded invoke operator

It isn’t revolutionary but I like writing our use cases like this. Here are some reasons, and some other musings I have on the topic of use cases.

Overloading the invoke function is straightforward and flexible and allows you to make use of a great Kotlin feature. It is, as you might say, more idiomatic!

It is straightforward because you only have to implement the invoke function and you are good to go. Anyone with knowledge of operator overloading should be able to look at our code and know what is happening.

It is flexible because you can add whatever parameters you like to the invoke function to suit the needs of the particular use case. I think this is great: we can provide class dependencies as constructor parameters and contextual information as function parameters.

One thing I’ve learned from use case “efforts” in the past is that creating an opinionated use case, such as one that uses RxJava to handle threading, can be a mistake. It might look like this:

abstract class SingleUseCase<T>(
    private val observeOn: Scheduler,
    private val subscribeOn: Scheduler
) {
    fun execute(): Single<T> {
        return Single
            .fromCallable { doTheUseCase() }
            .subscribeOn(subscribeOn)
            .observeOn(observeOn)
    }

    abstract fun doTheUseCase(): T
}
This can lead to some sneaky misdirection that makes it hard for developers to find usages of their implementation of the abstract class: generically wrapping some behaviour in another type requires an abstract function to fill in that behaviour, so a developer won’t be able to find all usages of their implementation, as it will always be invoked via the superclass.

This is Rx specific, but you may also have to implement multiple base classes to handle Observable, Flowable, Completable or Maybe. This just adds a bit of extra complexity to your use cases.

Something I think about regularly is the single responsibility principle (I need more hobbies). It says a class should only ever have a single reason to change. But in the above implementation, your use case can change if your business rules change or if you decide to stop using RxJava. It breaks SRP in a very subtle way!

Talking of developer experience, there is a particularly annoying gotcha with overloading the invoke function in Kotlin; it relates more to some behaviour in Android Studio/IntelliJ. Let’s look at this class:

class AViewModel(private val useCase: UseCase) {
    fun start() {
        useCase()
    }
}
If you wanted to go to the source of useCase you would be forgiven for thinking you could click on useCase within the start function, but you would be wrong. You will actually be taken to the definition of the property. To be taken to the source of invoke you’ll have to carefully aim your cursor at the final brackets: useCase(). This is very frustrating if you are trying to quickly navigate through some code!

This quickly turned into me complaining about some old code I have written and applauding some code I’m currently writing. I expect I’ll change my mind on this in the next year or so, but I hope some of these thoughts will be useful to someone!

January 2019 in sports

As the year counter ticked over from 2018 to 2019 I kicked into motion my plan to get better at this sports thing. This mainly took the shape of doing a lot more exercise and trying to stick to my training plan as closely as I could.

Here’s some numbers from Training Peaks:

Total time – 40 hours 29 minutes
Total distance – 726km

Running time – 9 hours 40 minutes
Running distance – 93.4km

Cycling time – 20 hours 6 minutes
Cycling distance – 620km

Swim time – 4 hours 37 minutes
Swim distance – 11,845m

Strength time – 3 hours 15 minutes

Those are the biggest numbers I’ve ever done, which is pretty exciting. I still have three whole months of preparation for Ironman 70.3 Barcelona so I need to make sure I can keep this up.

Some things I need to work on:

  • Swim sessions – I was pretty lax in doing them and sticking to the target distances
  • Strength and injury prehab – I have the time to expand these sessions
  • Sleep – I’m not getting enough sleep. I should be aiming for over 8 hours a day
  • Food – I’m not regularly doing the meal prep I should be, which means I am eating rubbish every now and then

The Great South Run 2018

Definitely need new running shorts. 

Between my sister, Dad and Mum, the Terry family has completed The Great South Run a total of six times. In all the years a family member has completed it, I never quite found myself wanting to run the 10 miles (16km) around Portsmouth. However, given my new-found love for endurance sport, I thought this year would be as good a year as any to give it a go.

I can tell you already that having done a few sprint triathlons, some long bike rides and a few casual training runs does not translate to a very pleasant 10 mile run.

The first 6km started out pretty well. I chugged along the Southsea seafront at just under my target pace of 5 min/km and I felt fine. As we worked our way through Old Portsmouth and into the Portsmouth Naval Base I started to feel a bit sore around my chest – perhaps a sign that I was running a little too fast? I decided to keep my pace and carry on, and the pain quickly faded. A good sign, or so I thought.

One part of the run takes you through the city centre and out to a roundabout and back. This is where the wheels quickly came off the wagon! I made it through the first water station and I was as much of a terror to the volunteers as Patrick Lange as I tried to grab bottles of water:

The bottles were a bit of a pain to open as I was running along, so I found myself carrying bottles for a while as I opened them and then tried to drink gracefully. In hindsight I don’t think it is possible to drink gracefully whilst running; in future races I will just accept that water-face happens and take in what I can. After the aid station I had to navigate a short obstacle course of jelly babies, as someone ahead of me must have demonstrated some poor hand-eye co-ordination whilst picking them up.

Anyway, back to the de-wheeling of the wagon. The leg out to the roundabout and back was tough. For some reason the long straight road, with a view of people running back the other way, didn’t sit well with me. Somehow I survived, but I think the mental hit resulted in me losing about 20 seconds a kilometre with half the distance remaining, which meant I was very quickly losing time on my target. I shouldn’t have been so vocal about thinking I could hit that time… oh well.

The next leg of the race featured a run back towards Southsea and into some spectator-heavy territory – the worst time to be thinking a quick walk wouldn’t be too bad. I managed to persevere through the negative thoughts and stumbled by my Mum and Dad. Unfortunately I was looking down at my watch at the time, so the video of me stumbling along doesn’t look too great!

It had really started to get warm by this point and I was feeling the effects of it, so seeing a water station brought a lot of relief! Shame it took me about three attempts to grab a bottle of water from someone. A couple of sips brought me back to life, yet for some reason I didn’t drink any more than that! Got. To. Force. The. Water. Down. Me.

Soon I was approaching the final turn in the course and heading back along the seafront to the finish line. This bit actually wasn’t too bad: I had a fixed goal of a helicopter hovering over the finish line, so I was able to focus on running towards that. I felt a bit sad as the 1h 25 pacer ran past me and I couldn’t make myself run any faster to keep up with him, but I powered through.

Before I knew it I was at the final 800m sign, then what felt like 800m later I was at the 400m sign, and then after about another 800m I was crossing the finish line. Which looked like something out of Saving Private Ryan, but with people vomiting, and the shell-shock effect replaced by my hearing going for a bit and me feeling a bit wobbly.

So I made it through the race, definitely wasn’t what I hoped for but I’m glad I got it done anyway. Definitely need to pay more attention to training correctly for an event, completing the training plan and then making sure I fuel correctly during the event. 

The only fuelling on the course was water, jelly babies and a free Science in Sport gel. The gel was provided too late in the course for me, with about 15–20 minutes left, which I don’t think is enough time for the carbs to actually get into the system. However, the electrolyte top-up was very much appreciated!