Remote working for Developers...or not?
Just before the Covid lockdown last year, we were preparing our first serious experiment in remote working for our small development team. At the time we had two seniors, two juniors and me, but as a UK-based company we suffered from a lack of suitable developers in the local job market and were looking at recruiting remotely.
It was a risk, of course, since none of us had done it before. We started with a simple test: one of our senior devs worked from home three days per week and came into the office for the other two. This went well, and we realised that full-time remote working could certainly work.
Then lockdown happened and we were forced into it anyway. Fortunately, the pre-work of setting up RDP and VPN, together with the experience of that short trial period, made the transition relatively painless, including for the rest of the company, who had not been planning to work remotely but could now use our new remote working systems!
I highly recommend the book "Remote: Office Not Required" by Jason Fried and David Heinemeier Hansson, the founders of Basecamp. They were in the rather unusual situation of already being remote from each other, one in the US and the other in Europe. They had to work out how to collaborate across an extreme timezone gap, then decided to build the whole company that way, learning along the way what makes remote working work well.
There are, of course, many reasons why employees like remote working and, in the book's case, completely flexible hours: family commitments, leisure activities (e.g. surfing) that suit some times of day better than others, and the very genuine advantage of being able to move house and keep the same job.
The main point that stands out from the book (other than keeping people within roughly +/- 2 hours of each other's timezone to avoid extreme challenges) is that you must be able to measure productivity: partly to address what many managers worry about, which is that remote workers aren't doing very much, and partly to ensure people are adding appropriate value to the company for their salary.
One of the ironies of office-based working is that if people are tapping away at their keyboards for 8+ hours per day, we assume they are working hard and providing value. If we think about this, however, it is absurd to equate attendance, or even effort, with output or value. In other words, whether we work from the office or remotely, management need fair and objective ways of measuring performance: not to 2 decimal places, but certainly in the right ballpark.
This is where it gets tricky, and why some of the larger tech companies have talked a lot about remote working yet have proved by their actions that they still want people back in the office to provide that false sense of security/productivity. The truth is that measuring output is hard, at least partially subjective, and it doesn't scale.
How can you measure how much value a developer adds to the team? At the extremes, you can certainly spot the very effective from the ineffective, and very poor quality from very high quality, but most people are in the middle and do not stand out.
Counting tasks alone is not great. Juniors are likely to work on the easier tasks that don't involve risk. Seniors might really struggle, not because they are not working and not because they are not skilled, but because some problems are hard, and those problems might be infrequent enough to make averaging across individuals unfair.
I have not found any single way of achieving this, but there are some principles that can help you set up a system that is fair and, ideally, might be automatable.
- Compare people with their peers, not with an arbitrary industry standard
- Measure performance over time
- Decide what you actually need to measure and how you can measure it
- Use the team to hold each other accountable
Firstly, every organisation is different and has a different speed/quality tradeoff. The amount of functionality you would expect to achieve in a financial application is very different from something more noddy. Industry averages will not help here, so we need to compare our senior devs with our other senior devs, mid-level with mid-level and juniors with juniors. We are not looking for complete parity, but we should be mindful of salaries and salary reviews when we measure people: it wouldn't be fair for the most productive dev to be paid less than the least productive one.
Secondly, it is probably obvious that things change over time. New starters will have less familiarity with the systems and processes, and some weeks you get given a nasty task that is really hard, pushing the boundaries of what is possible within your framework, where you are not inclined to add hundreds of lines of code for something that sounds like it should be easier. Over time, we would expect new people's work rate to increase and occasional bumps to be ironed out. I do actually track task counts in sprints as one metric to help me measure performance; this at least helps me spot outliers, like people working too fast (quality issues) or not fast enough (cannot focus, lacks ability, etc.).
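As a minimal sketch of that kind of tracking (the data, names and threshold below are all hypothetical), you could take per-sprint task counts and flag anyone whose average throughput sits far from their peers at the same level, in keeping with the first two principles above: compare like with like, and look across several sprints rather than one.

```python
from statistics import mean, stdev

# Hypothetical export of completed task counts per sprint, grouped by
# seniority so that we only ever compare peers with peers.
task_counts = {
    "senior": {"alice": [9, 11, 10, 12], "bob": [4, 3, 5, 4], "eve": [10, 9, 11, 10]},
    "junior": {"carol": [7, 8, 6, 7], "dan": [7, 6, 8, 7]},
}

def flag_outliers(counts_by_dev, threshold=1.0):
    """Flag devs whose average sprint throughput is far from their peer group.

    Uses a simple z-score: a very high score may hint at rushed, low-quality
    work, a very low one at a focus or ability problem. Either way it is a
    prompt for a conversation, not a verdict.
    """
    averages = {dev: mean(sprints) for dev, sprints in counts_by_dev.items()}
    peer_mean = mean(averages.values())
    peer_sd = stdev(averages.values())  # spread across the peer group
    return {
        dev: round((avg - peer_mean) / peer_sd, 2)
        for dev, avg in averages.items()
        if peer_sd and abs(avg - peer_mean) / peer_sd >= threshold
    }

for level, devs in task_counts.items():
    print(level, flag_outliers(devs))
# senior {'bob': -1.15}  <- worth a conversation, not an automatic judgement
# junior {}              <- identical averages, so nothing to flag
```

The z-score and threshold are deliberately crude; the point is to surface outliers over time, not to rank the team.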
The third part is the hard part: what can you actually measure to know how effective people are? I think there are three things we want in the "ideal" developer: they a) write high quality code, b) quickly, c) without creating bugs.
High quality code is quite subjective, but there are certain things that mark good code out from average or bad: high quality unit tests (where useful); good use of patterns; a lack of code smells; and general housekeeping (tidy code). This is hard to measure consistently, but one place to do it is in a code review. You can count average comments per file (more files means you would expect more comments), ideally excluding question comments like "have you considered X?" and the replies to them. It would take several months to make allowance for the differing abilities of reviewers, their differing motivations to pick up everything rather than just the important things, and the fact that some of our code might be better than other parts.

We might ideally want to distinguish a simple error that anyone could make from a howler that should have been spotted, but this would generally require the developers to count these for you and fill in a form to record them. You can then tie these stats to goal setting, and even potentially slow people down in order to improve their scores: there needs to be some "cost" to producing poor quality code as a motivation to do it right.
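A sketch of that comments-per-file metric might look like the following. The review export format and the rule for spotting question comments are assumptions; a real reviewing tool (GitHub, GitLab, etc.) would give you this data via its API.

```python
import re

# Hypothetical review export: one record per code review of a developer's
# work, with the number of files in the change and the comments it received.
reviews = [
    {"author": "alice", "files": 6,
     "comments": ["Null check missing here", "Have you considered caching?"]},
    {"author": "bob", "files": 2,
     "comments": ["Magic number", "Duplicated logic", "Typo in variable name"]},
]

# Crude heuristic for the "question" comments we want to exclude.
QUESTION = re.compile(r"\?\s*$")

def comments_per_file(reviews):
    """Average substantive review comments per file, per code author.

    Filtering out questions biases the metric towards defects actually
    found, rather than the general volume of review conversation.
    """
    totals = {}  # author -> [files reviewed, substantive comments]
    for review in reviews:
        substantive = [c for c in review["comments"] if not QUESTION.search(c)]
        entry = totals.setdefault(review["author"], [0, 0])
        entry[0] += review["files"]
        entry[1] += len(substantive)
    return {author: round(found / files, 2)
            for author, (files, found) in totals.items()}

print(comments_per_file(reviews))  # {'alice': 0.17, 'bob': 1.5}
```

As the paragraph above says, you would want months of data, and allowance for reviewer differences, before reading much into the numbers.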
Doing things quickly is the sign of a proficient developer. I remember a contractor at a previous company who was paid (a lot) to write an encryption library to centralise the methods we were using for hashing and encryption into a single maintainable place. He took a month and it wasn't finished. After he left, I rewrote it from scratch in two weeks and finished it. Crude as it is, if you set the principle that no task should take longer than 2 hours, then counting tasks becomes a way to count speed.
Speed is nice, but quality is essential. On the modern web there are so many ways to get attacked, for so many different reasons, that an organisation takes a large risk when deploying any application. What happens if an attacker works out how to delete everything? Even with a backup you lose credibility, you lose some data while restoring that backup, and you still have to track down and fix the bug that allowed it to happen. As agility increases, the chance of bugs reaching production increases, and it is essential therefore that developers a) understand their ability to cause critical bugs, b) know how to tell whether a bug has occurred, c) have the ability to jump on it and fix it quickly, and d) know that there is a consequence to what they have done, both for the company and potentially for their job!
If bugs are found in production, they need to be traced back to their cause, and a root-cause analysis done to ask whether the bug could or should have been avoided. If the code is horrible and the bug unsurprising, why wasn't the code refactored before being used? If a developer hasn't followed the correct process and has caused a bug that shouldn't have happened, do they need mandatory training, or a conversation about whether this is the right job for them? Most of us like to play happy families, but any manager will have to deal with people who will not or cannot do what is required of them in a role, and you definitely want to find this out sooner rather than later.
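To make those root-cause questions routine rather than ad hoc, it can help to record each incident against a small fixed set of verdicts. The structure below is just one hypothetical way to capture that; the verdict names, fields and example values are illustrative, not a standard.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    UNAVOIDABLE = "genuinely hard to foresee"
    PROCESS_GAP = "our process was missing or unclear"
    PROCESS_SKIPPED = "process existed but was not followed"

@dataclass
class RootCause:
    bug_ref: str     # ticket or incident reference
    developer: str
    summary: str
    verdict: Verdict
    follow_up: str   # refactor, mandatory training, a conversation, etc.

# Example record for a bug that the normal review process should have caught.
incident = RootCause(
    bug_ref="PROD-1042",
    developer="bob",
    summary="Unvalidated id parameter allowed bulk deletion of records",
    verdict=Verdict.PROCESS_SKIPPED,
    follow_up="Mandatory input-validation training; re-review of the module",
)
print(incident.verdict.value)
```

Over time, the split between the three verdicts tells you whether the problem is the individual, the process, or just hard luck.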
Lastly, if you don't want to be seen as a Manager who incites fear in the team, you need to use the wider team to monitor and help each other. If the sprint is a team effort, then a single person not pulling their weight is probably visible to the other members of the team, who will want to speak out, since that individual affects the perception of the entire team. You are not looking for a blame culture, but if someone is not producing anything, they need to be called out, and if they are not producing high quality, that needs calling out too. If the team does this as well as the Manager, you create a team culture that is easier for new candidates to understand, and it becomes easier to see when someone is not a good fit. The team can then provide input on how to improve, training required, etc.
So, in conclusion: remote working requires performance monitoring, but so does office-based work; and measuring performance is not easy, but without it you are just sticking a finger in the air, which means people who want to game the system will game it.