How It Pushes Us to Do Substandard Work
May 26, 2020
Earlier, I pointed to some ways in which computing has not improved as I would have expected 30 years ago, particularly in light of the fact that even a mediocre modern computer is at least 20,000 times faster than a computer in 1990 – and maybe more than a million times faster, depending on what you want to do. The issue is not whether computers can do more "stuff" than they could do 30 years ago; rather, it is the pleasure (or frustration) we get from working with computers, and the power we have to express ideas.
There's no question that certain things have improved. Faster machines mean that we can use higher-level languages (e.g., Java instead of assembler). More memory allows us to relax a bit (maybe too much) about memory management. Better IDEs help us keep a large codebase organized with less frustration. Source code control helps groups of people work together more effectively. Improved compilers give us better error messages and do their job in seconds instead of minutes or hours. The internet allows us to help each other and share solutions. All of these things make life as a programmer more pleasant and productive, but they fall short of what I expected, in 1990, to see today.
The cop-out answer to the question of why computing isn't any better is that it's a hard problem. It's true that certain things, like visible surface determination, are hard problems, and we're unlikely to find new algorithms that are dramatically faster. The cost of these problems grows so quickly with the size of the input that doubling the dataset can raise the computational burden by orders of magnitude. That is, a computer that's 20,000 times faster might be able to work with a dataset that's twice as large, but not one that's ten or a hundred times as large.
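To make that arithmetic concrete, here is a back-of-the-envelope sketch. Assume, purely for illustration, that the running time of such a problem grows exponentially with the input size n:

$$ T(n) = c \cdot 2^{n}, \qquad T(n + \Delta) = 20{,}000 \, T(n) \;\Longrightarrow\; \Delta = \log_2 20{,}000 \approx 14.3 $$

Under that assumption, a machine that's 20,000 times faster moves the feasible input size from n to roughly n + 14 – an additive gain, not a multiplicative one. The exact numbers depend on how the cost actually grows, but the moral is the same: problems in this class don't yield to faster hardware.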
The problem of how to improve the experience of programming, along with the outcomes, is not simply technical. It's a human problem, and human problems are the hardest to solve. The sources of the problem are spread out over every level, from how industry-wide decisions are made to the choices made by individual programmers as they type.
It's a common observation that web development (to take one example) has become an ugly nightmare. I'm far from an expert on web development, but the more I see of it, the more it makes me shudder. How did we get into such an awful place? The short answer is that it grew up organically. One analogy is with the way some hospitals have developed over time. They're like a rats' warren. Stuff gets tacked on because the money was available at a particular time, and maybe things were built to work well with whatever generation of X-ray machine or MRI machine was the latest and greatest. Then the hospital needed a larger dining area, then a special wing for heart patients, then the birth rate fell and the lying-in ward got converted to cancer treatment. Now the parking deck is converted into an area for orthopedic surgery, etc.
Part of the reason things have developed the way they have in the browser world is that many (maybe the overwhelming majority) of the people doing front-end development aren't programmers at all. That's not a good thing or a bad thing, and it's not a criticism, but they're not programmers; they're "content managers." They aren't equipped to understand and work with the lower-level aspects of programming, nor do they need to be, nor should they be expected to be. They work in markup with the occasional bit of poorly understood JavaScript voodoo, or they may work entirely in something like WordPress. They know enough about the infrastructure to complain when a desired feature is missing, but not enough to understand the ramifications of implementing it.
Browsers are a big business, and these content managers (and the interests they work for) are the customers who can effectively demand grease for their "squeaky wheel." The customer wants a red jacket, so he's given one, no matter how stupid it looks, and never mind that last week he got a green tie, and the week before that he started using rusty C-clamps as cuff-links. As Nathan Myhrvold (former Microsoft CTO) said, "Software sucks because users demand it to" (Charles C. Mann, "Why Software Is So Bad," MIT Technology Review, July 1, 2002).
A different take on who is responsible is given by David Platt in Why Software Sucks. Platt's argument boils down to the observation that programmers are not typical users, and they don't know what users want or understand users' needs. It has been almost 20 years since Myhrvold's remarks, but you can draw your own conclusions from the fact that someone at the top of Microsoft blamed users for bad software instead of the people who produce it.
Even a person who thinks that the state of web development isn't so bad has to admit that page sizes have grown to a crazy extent. Individual pages of websites now comprise several megabytes of data! Some of this may be better content, but a great deal of it is due to the way libraries (and advertising) are layered on top of each other. For example, I wanted to display a single square-root sign on one of the pages of this site, and the MathJax library makes it easy, but MathJax is a 60 megabyte library! That would almost fill the hard drive on my old Mac IIcx, and I regularly ran LaTeX on that machine, along with many other programs. Even on a good day, a modern browser consumes more than a gigabyte of RAM. That's more than ten times the amount of storage that Macintosh had in its hard drive. Some of this bloat may be there for good reason, but history and market forces have pushed us into territory where we're in danger of running afoul of Brooks's Law on every project, before it even starts.
Another problem we've had is the expansion of what counts as a computer. This trend began with cars, and now extends to doorbells, garage door openers, and (yes) toasters. Making a smart toaster may or may not be a silly exercise, but as long as the toaster sits quietly on the counter, it doesn't hurt anything. However, if you hook it up to the internet, then it needs to talk to all the other gadgets. Now, on the way home, you want to be able to program your cell phone to tell the toaster to have a couple of crispy ones ready for your arrival, with a continuously updated ETA based on the possible need to get gas, as reported by the car, and the fact that it may be necessary to stop at the store for peanut butter and milk based on what the refrigerator has to say.
The internet-of-things problem is relatively new, and it may shake out quickly, but it has been a distraction from issues more pressing than freshly made toast. Up through the early 2000s, we were making progress on portability across platforms, but then toasters invited themselves to the party, and we're now faced with finding solutions for what's sometimes called pervasive (or ubiquitous) computing. We expect interoperability of every trivial gadget.
Over the past couple of decades, there have been so many shiny things with perceived money-making potential that the culture and economics of software development have taken on a frontier mindset. We pushed our way west in the hope of cheap land; in the meantime, we're living in sod huts and waiting for civilization to arrive. Schools? Libraries? A courthouse? Nope, but we have lots of plague blankets to pass around.
Between the internet proper and the internet of things, our programming resources are spread thin. Somebody is out there right now trying to get MySQL to run on a toaster to track a household's toast preferences – and that person's employer is probably hoping to harvest the data so it can be sold to bread-makers. Economic forces are at work, via consumer preferences, and there's no resisting them, however short-sighted the resulting allocation of resources may be. As individuals, all we can do is let the forces beyond our control play out, while trying to improve what we produce.
The book Bullshit Jobs: A Theory (2019), by David Graeber, discusses software development only peripherally, but it does express an interesting idea in a brief form. The author interviews "Pablo," a "software developer," who says (p. 219 of the paperback edition):
Pablo: Where two decades ago, companies dismissed open source software and developed core technologies in-house, nowadays companies rely heavily on open source and employ software developers almost entirely to apply duct tape on core technologies they get for free.
In the end, you can see people doing the nongratifying duct-taping work during office hours and then doing gratifying work on core technologies during the night.
This leads to an interesting vicious circle: given that people choose to work on core technologies for free, no company is investing in those technologies. The underinvestment means that the core technologies are often unfinished, lacking in quality, have lots of rough edges, bugs, etc....
The final paragraph above is not entirely true. Linux, for example, seems pretty nicely finished to me (speaking as a user, not a contributor), and some companies (too few) do pay their employees to work on open-source projects. But there is a kernel of truth to what Pablo says.
Programming is a democratic world – relatively speaking – and that's great, but it can also be a problem. A quick look through GitHub shows that every major problem area has multiple people and groups trying to be "the ones." In theory, this kind of competition should lead to better solutions; in practice, people focus too much on leapfrogging each other instead of improving what's already been done.
The attraction of "shiny things" was mentioned above in connection with making money, but it's also true that returning to something that was written 15 years ago and improving it isn't seen as being as cool as hacking together something new. That's too bad, since something that's been in use for 15 years is likely to be in use for another 15 years.
These kinds of problems with how software development occurs have been observed many times, whether we're talking about browsers, web development in general, or almost any large software project. That's why there have been fads for Agile, Scrum, Extreme Programming, etc., each with its associated gurus and silly buzzwords. It's not exactly breaking news that it's hard to get a large number of people to work together on a complicated task. In essence, this is a problem that civilizations have been trying to solve since chieftains first started knocking heads.
The problem of how to manage large software projects the "right" way is too big for me to have anything novel and useful to say, and I doubt very much whether anybody has novel and useful things to say on that topic. In any case, I think that's the wrong place to look if the goal is to sustainably improve the development experience, together with the resulting software. The individual programmer, and what he or she produces, is the right place to look.