Marc Andreessen famously proclaimed in 2011 that “software is eating the world.” Perhaps it is our own hubris, as humans, to keep believing that software won’t eat us too, and that we will retain our mastery over computers despite our dependence on them. More unsettling still is how hard it is to recognize the ways computers have already influenced us, even as sophisticated algorithms guide physicians in diagnosing patients. In the intimate relationship between programmer and program, the programmer does not wield absolute power: code influences its creator too.

When we read new code, we tend to accept it as concrete, using it as the foundation for a mental model of the codebase that closely mirrors how the code functions. Yet this impairs our ability to judge code impartially, because we are reading the solution to a problem before we understand the problem itself. We then write new code against this overfitted model until we’ve defined enough of the problem to notice the mismatch between problem and solution. By then, our mental model is so deeply ingrained that refactors become rewrites.
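To make this concrete, here is a hypothetical sketch (the names and scenario are invented for illustration): a reader of the first function may absorb “orders are keyed by integer IDs” as a fact about the problem, when it is only a fact about this particular solution.

```python
# Hypothetical example: the solution encodes an assumption that readers
# absorb into their mental model of the problem.

def find_order(orders: dict[int, dict], order_id: int) -> dict:
    """Orders 'are' keyed by integer IDs... or so this code implies."""
    return orders[order_id]

# New code written against that model inherits the assumption for free:
def cancel_order(orders: dict[int, dict], order_id: int) -> None:
    find_order(orders, order_id)["status"] = "cancelled"

# When the real problem surfaces (say, a partner system sends orders
# with string UUIDs), the integer-ID assumption is threaded through
# every caller, and the "refactor" starts to look like a rewrite.
```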

Yet there is a bright side to this story: better code creates better mental models. With better mental models, we write better code. Good code begets more good code.

This shouldn’t be surprising, yet engineers often conclude that the problem with the codebase is best solved by fixing the engineering process rather than the code itself. This leads to more feature specifications, more architecture planning, and more quality assurance testing, all under the guise of reducing risk and increasing stability. Ironically, it’s easier to tack on processes that slow engineering down than to spend time refactoring the code that is also slowing development down. But humans are naturally more attuned to organizational politics than to software architecture, which inevitably means that software carries with it the conditions it was created in.

Like code, each additional bit of process adds to the institutional imperative, and slowly the engineering process develops its own inertia. The process itself often needs tooling to support it, enshrining the bureaucracy in some opaque algorithm. For example, a tool that reports code coverage allows a manager to set a target of 100% coverage, and future managers and developers may fastidiously follow the metric without ever questioning the value of a target so disconnected from business objectives.
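As a hypothetical illustration (the function and test are invented for the example), here is how a 100% coverage gate can be satisfied without testing anything the business cares about:

```python
# Hypothetical sketch: a test written to satisfy a 100% coverage target
# rather than to verify behavior. Every line of parse_discount executes,
# so the coverage tool reports success, but nothing is asserted.

def parse_discount(code: str) -> float:
    """Return the discount fraction for a promo code like 'SAVE10'."""
    if code.startswith("SAVE"):
        return int(code[4:]) / 100
    return 0.0

def test_parse_discount() -> None:
    parse_discount("SAVE10")  # covers the SAVE branch
    parse_discount("HELLO")   # covers the fallback
    # No assertions: the metric reads 100%, yet a bug such as dividing
    # by 10 instead of 100 would sail through unnoticed.
```

The metric is green, but the behavior it was meant to protect goes unexamined.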

This is the phenomenon of programmer paralysis: a programmer afraid to make changes to the code or the process around code.

Programmers often refer to old code as technical debt, a maintenance burden, as if old code were an old bridge that occasionally needs a fresh coat of paint. But old code is a chassis that we’ve kept adding parts to, and we actively need to design around it. Yet it’s the nature of agile development, with its two-week sprints, to shrink the scope of our thinking to incremental changes. Older engineering organizations take this further and seek to reduce risk by keeping the impact of each change small.
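One sketch of what “designing around the chassis” can look like in practice (the legacy function and field names here are invented for illustration): new code talks to a thin adapter instead of the legacy interface, so the old part stays bolted on while new work gets a cleaner surface.

```python
from dataclasses import dataclass

def legacy_lookup(raw_id: str) -> dict:
    """The chassis: a loosely typed function with historical field names
    that many existing callers still depend on. (Stubbed for the example.)"""
    return {"cust_nm": "Ada", "acct_stat": "A", "bal_cents": 1250}

@dataclass
class Customer:
    name: str
    active: bool
    balance: float

def get_customer(customer_id: str) -> Customer:
    """The adapter: new features build on this shape, not on the dict
    that legacy_lookup happens to return."""
    row = legacy_lookup(customer_id)
    return Customer(
        name=row["cust_nm"],
        active=row["acct_stat"] == "A",
        balance=row["bal_cents"] / 100,
    )
```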

Yet we run the risk that our mental model is wrong, and that all our risk reduction has merely nudged us toward some local minimum.

The most valuable engineering process is the process of innovation, which attempts to shift our mental model to better fit the problem we are trying to tackle. Often this requires stepping back from the code and trying new ideas, unconstrained by the limitations of previous implementations. That is why keeping up to date with the developer community is so valuable: seeing many solutions to the same problems makes an engineer better at framing their own. And part of innovation is failure; the relentless pursuit of productivity often means that solutions more likely to succeed are chosen over ones that are potentially more impactful. But after a long day of fighting emails and compilers, it all boils down to the fact that software is hard.