Tips on adapting your approach to aid productivity for your remote software engineers – from the 2020 series of Threads discussions.
Discussion host: Robert Turnbull, Agile Coach and Director of Agile Services
Context
Pre-pandemic, there was a trend towards exploring remote engineering practices, but the rapid adoption since then has seen communication methods and tooling shaped around the old office-based practices. It might be time to take a fresh approach.
Whether things ‘go back to normal’ during 2021, with most people back in the office, is something your people will likely want to decide for themselves – choosing their own level of remoteness.
Productivity in remote software engineering
In many ways the currently available tools are a best-attempt at replicating the old ways of working. Consider re-imagining your thinking, workflow and tooling for the new, highly asynchronous model – which will most likely become a hybrid, partially-remote model, or an all-remote model.
Trust your remote engineers to be productive; they want to do a good job, because they love what they do, are good at doing it, and don’t want to fall behind. They want to deliver, but it’s up to you to tell them what to aim for and to give them some autonomy on how they get there. Prioritise these qualities in your hiring, and nurture this culture.
Remote works best where everyone behaves and communicates as if all are remote. E.g. if you have one person joining the standup remotely, conduct it as if all attendees are remote – rather than risk your remote colleague feeling like an adjunct.
Measuring productivity is hard, and it was hard before the pandemic. There isn’t really one model, metric or measure which actually works – not without getting in the way of things. Even defining what being ‘on track’ means is very subjective.
Consider: what if you were to explore a culture where you let your team self-organise? It can be surprisingly productive to set clear high-level expectations of outputs, and let the teams decide their own methods.
Settle on a model for productivity and don’t be afraid to allow a bunch of teams to do things ‘their way’ so long as it is productive.
Robert says:
“A ‘model for productivity’ simply means to establish and be able to articulate the criteria that define success for your business. For example, one participant measured success in hours billed and cash flow. In principle when you evaluate complex things, you can evaluate the output, and audit the process by which the output was achieved. If, having done this, you are reasonably sure that the output is not defective, and are confident in the integrity of the process, you have done a good job. This could be useful if you feel that productivity could be improved. Evaluating productivity means deciding whether your engineering yielded results that met your criteria for success. Evaluating performance equates to evaluating the process. This is challenging when the process is software development – see the argument that software development is not entirely repeatable, and so on, elsewhere in this summary. A least-risk way is to ensure a good process by implementing a system that has worked in similar circumstances elsewhere. Adaptation might be required, and is perfectly acceptable. Then manage the steps in the process, rather than the individuals in it. Then evaluate the output, and iterate.”
Measuring software engineering productivity is hard
Something that you can’t easily put a metric on is how well the team is learning, especially when the team is now remote; a small shortfall in learning today may hurt things later. Use your professional skills to stay in tune with each team member’s progress.
Robert says “unpacking the terminology: distinguish measuring from evaluating, productivity from performance, individual performance from team performance, and measuring from managing.”
“From the discussion, the main challenges with managing remote work were whether individual performance was up to standard, and whether work in general was on track, where ‘on track’ seemed to be an evaluation of team performance. The discussion group seemed to agree that team performance could be inferred from individuals’ performances. If people seemed to be getting on with work and not struggling, then they were productive. If everyone in the team was productive, then work was on track.”
Helpful techniques:
“Lead vs. lag measures: ‘lead’ measures the activity, ‘lag’ measures the outcome. Use it deterministically: state the required outcome and how you will measure it (‘lag measure’) then decide what ‘lead’ measures to monitor to predict periodically whether you will meet the desired outcome. This is all theoretical; how does it apply to software? https://www.franklincovey.com/the-4-disciplines/ “
“DevOps metrics, which indirectly measure engineering performance as a component of the product delivery operation: cycle time (how long a feature takes between entering the backlog and being deployed for users), deploy frequency, mean defect rate (how often deployed code turns out to be defective), and mean time to restore (how quickly defective code is fixed in production). There are benchmarks by industry. https://services.google.com/fh/files/misc/state-of-devops-2019.pdf”
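As a rough illustration of how such lag measures might be computed, here is a minimal Python sketch that derives cycle time, deploy frequency, defect rate and mean time to restore from a small list of deployment records. The record fields and figures are invented for the example; they are not the schema or output of any particular tool.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; field names and dates are illustrative only.
deployments = [
    {"backlog_entry": datetime(2020, 6, 1), "deployed": datetime(2020, 6, 9),
     "defective": False, "restored": None},
    {"backlog_entry": datetime(2020, 6, 3), "deployed": datetime(2020, 6, 12),
     "defective": True, "restored": datetime(2020, 6, 12, 6)},
    {"backlog_entry": datetime(2020, 6, 8), "deployed": datetime(2020, 6, 15),
     "defective": False, "restored": None},
]

# Cycle time: elapsed time between a feature entering the backlog and deployment.
cycle_times = [d["deployed"] - d["backlog_entry"] for d in deployments]
mean_cycle_time = sum(cycle_times, timedelta()) / len(cycle_times)

# Deploy frequency: deployments per week over the observed window
# (clamped to at least one week to avoid dividing by a tiny window).
window = max(d["deployed"] for d in deployments) - min(d["deployed"] for d in deployments)
deploys_per_week = len(deployments) / max(window.days / 7, 1)

# Defect rate: share of deployments that turned out to be defective.
defect_rate = sum(d["defective"] for d in deployments) / len(deployments)

# Mean time to restore: average time taken to fix defective code in production.
restore_times = [d["restored"] - d["deployed"] for d in deployments if d["defective"]]
mttr = sum(restore_times, timedelta()) / len(restore_times) if restore_times else None

print(f"Mean cycle time: {mean_cycle_time}")
print(f"Deploys per week: {deploys_per_week:.1f}")
print(f"Defect rate: {defect_rate:.0%}")
print(f"Mean time to restore: {mttr}")
```

In practice these figures would come from your issue tracker and deployment pipeline, and are most useful tracked as trends over time rather than read as one-off values.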
Even when things are working extremely efficiently, some mistakes will happen and can aid learning and improvement. Allow for this, but do avoid repeating those mistakes.
Measuring individuals is not nearly so insightful as measuring a whole team. Set high level objectives and measures for the team and, to enable this, support the personal learning and improvement of each individual within it.
Motivation + happiness + training delivers outputs that we can recognise as ‘productivity’.
Robert says:
“There is no objective measure of performance in software development. The reason for this? In software, everything we develop, we develop for the first time (to a large extent). Complexity and risk vary across applications. We attempt to standardise by using techniques like coding conventions, and design patterns, but software development lacks the repeatability of manufacturing. Most of the development process defies automation. In this context, perception of engineers’ performance is subjective, based on observation. It is more difficult to form perceptions of performance when workers are remote. The technique is generally accepted, possibly because it’s better than nothing. Nevertheless, remote or not, its value seems doubtful beyond being comforting to the observer. Most who manage software developers experience the feeling that there must be a better way. Hence the development and gradual acceptance of agile frameworks. These are not easy solutions. They appear simple, but are difficult and quite costly to implement because they require significant organisational change. Nevertheless, they deliver value in the right operating environment.
https://agilemanifesto.org/; https://agilemanifesto.org/principles.html“
Managing a remote/distributed software team
If a remote engineer appears to be losing passion for the work, it’s best to acknowledge this early. Gauging this is harder with remote workers. Put time aside to talk regularly, let them know that you’ve noticed and that you care about their welfare and enjoyment of work. Address what you can change, then allow them some space to re-engage, and give plenty of autonomy over how they do this.
When hiring junior engineers into a technical domain that’s new to them it can be helpful to have them office-based for an initial period, for knowledge sharing and getting up to speed. Be highly attentive to the onboarding process.
When hiring senior and lead engineers, assess for the ability to collaborate, lead and share the learning in a distributed, asynchronous, environment.
In a remote setting it’s easier for introverted colleagues to become isolated from the new knowledge and best practice they would otherwise soak up from office-based colleagues. Take steps to support all personality types and learning styles – such as one-to-ones, retrospectives, group learning sessions, and a wiki or Slack channels dedicated to learning.
Walking meetings, following a circular 45 – 60 minute route, can be conducive for one-to-ones; it feels less confrontational than being face-to-face and is a good enabler of two-way feedback. Have discussion points in mind; you can always make notes afterwards. In a remote context, phone earpieces may suffice.
Asynchronous tooling for distributed engineering teams
Productivity drops where poor tooling is in place; this may be from a lack of better options, or through incrementally flawed implementations. It may be time to take a fresh look. Allow for rate-limiting factors. Be experimental and try different techniques.
Let your people choose their preferred tools. Allow different teams to run different tools; this is okay, provided there are no silos.
It’s quite possible that you may want to build your own asynchronous working tools, but be mindful that the vendors and open source projects may be well ahead.
One challenge arising from wholesale adoption of Slack is the volume of chatter across multiple channels. Working asynchronously is difficult if people are awaiting information from another channel. It can be hard to find a steady state of productivity. Engage your team leaders in ongoing discussions about information sharing across the teams.
- In trying to determine which communication tools perform best, it is better to take a view over a longer period of patterns and trends rather than relying on micro-analysis
- Slack, being a responsive tool, is great for a client services environment where quick answers to a changeable scenario are of higher value than the cost of the temporary distraction of the person providing the answers
- Avoid cliques forming on social channels which could exclude or isolate newer or less social team members. It’s quite common to find people talking ‘under the table’ on Slack or similar. Remind people not to write anything they would be embarrassed to hear read out in a court of law
- Imposing rules about ‘acceptable use’ of Slack/similar will likely see those rules being dis-adopted. Instead, build a culture where everyone values effective practices and information sharing
Any question that is asked twice, or which now needs to be known by others, should be recorded in a wiki, recipe or fragment – for easy access by all. Reward this behaviour.
If you suspect someone is falling behind because they are a less forthcoming team member, check in more regularly, bring them into learning groups, and find a task that can be paired or shared with others.
Tuple (for Mac), and the collaborative coding features of VS Code (for all platforms), can be effective tools for pairing – allowing people to co-locate virtually and work hand-in-hand on tasks.
Virtual whiteboarding currently remains tricky; Miro’s Online Whiteboard and Figma are options, as are smart boards used in education. These bring a tactile element for those who take a visual thinking approach – the engineer who instinctively reaches for a pen or marker.
Further reading
Robert suggests Scrum.org’s ‘Evidence Based Management Guide’ as a follow-on from this discussion.