I've spent a lot of time over the past couple of weeks refactoring large sections of code. To some people this might be boring or tedious, but to me it's possibly the best part of being a programmer, or at least tied with the other best part, which is writing new code. I love poking around in conceptual musty back rooms, throwing up my hands in disgust, cursing a little, and then tearing it all apart and putting it back together again.
In case anyone reading this isn't familiar with the term, it's basically the same process that any other discipline producing written text would call editing -- going back over what's been written and adjusting it for accuracy and style. Except that unlike other written output, program code is functional. It's a set of instructions being interpreted by a machine that is completely stupid, and completely unwilling to treat anything you write in any way other than literally. This has some advantages. A naive system will, at least, always come to the same conclusion every time it reads the instructions, and every machine will interpret your words in the same way. But it also means that you have to be explicit about everything. Every detail has to be included, every time you need it.
I'm willing to bet that the majority of people who write for human consumption aren't really aware of how lucky they are to have an audience that isn't completely literal. A human reading a story or a set of instructions brings an amazing amount of conceptual understanding to their side of the discussion, and interprets confusing or contradictory things properly using their own judgment. If you take a story, change a character from male to female, and miss one of the pronouns, a human reader might be a little confused, but will quickly realize what's going on and continue reading -- maybe a little irritated, but without a severely damaged understanding of what happened. A computer presented with the same situation will crash, and refuse to read past the confusing word.
If that were the entire process of programming, I would hate it. And it's still possible to write code in that style, turning out one huge, run-on statement describing every possible part of the instructions in order -- but it's incredibly difficult to manage. Not because the instructions are hard, but because humans aren't wired that way. We operate in exactly the opposite way from computers: jumping around from section to section in a process, assuming things all over the place, and generally being messy, flexible, and good at abstraction. Which is great for us, but terrible for writing high-quality instructions for machines that can't do any of those things.
The history of advances in computer programming can be neatly summarized as a series of attempts to make computers think more like humans do. Not making them sentient, but making them able to follow abstractions in any kind of reasonable form. For example, entire programming languages have been born and died over a concept that breaks down to, "This thing is like this other thing. It acts differently when it does this one thing; everything else is the same." It took decades to get that, and in some ways it's still a clumsy metaphor. But with it, computers are finally capable of understanding instructions in the abstract.
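In today's mainstream languages that concept shows up as inheritance. Here's a minimal sketch in Python (the class names are mine, purely for illustration):

```python
class Animal:
    """The original thing: all the shared behavior lives here."""
    def speak(self):
        return "..."

    def describe(self):
        return f"An animal that says {self.speak()}"

class Dog(Animal):
    """This thing is like an Animal. It acts differently when it
    speaks; everything else is the same."""
    def speak(self):
        return "woof"

print(Dog().describe())  # An animal that says woof
```

Dog only has to describe how it differs; everything it doesn't mention is borrowed, unchanged, from Animal.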
That's the part that I love. Because with proper abstraction, you can express anything. It doesn't matter that you have to be pedantic and describe every tiny little detail, because those details can be wrapped up inside a nice, neat abstraction, and then ignored. There are so many layers of abstraction now that it's very rare to get anywhere close to the bottom, unless you work in very specific fields (usually related to building programming languages). That's the part that's creative, too. Because with the individual instructions, you can prove that they're correct. You can ask the computer as you're writing, "Do you understand all of this?" You can give it samples of expected inputs and outputs and verify not only that it understands how to follow your instructions, but that the instructions you've given do what you want them to.
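That question-and-answer loop is exactly what automated tests are. A minimal sketch, again in Python, with a made-up function and hand-picked samples:

```python
def word_count(text):
    """The instructions under test: count the words in a string."""
    return len(text.split())

# Samples of expected inputs and outputs. If any of these fail,
# the instructions don't do what we wanted, even though the
# computer followed them perfectly.
assert word_count("hello world") == 2
assert word_count("") == 0
assert word_count("  spaced   out  ") == 2
print("All samples check out.")
```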
Not so when creating abstractions. You can verify that the instructions make sense, and do what you want, but the most useful part of an abstraction is that it lets you forget about its insides entirely once you're done creating it. That's something that can't be verified systematically. It's much more art than science, because it's all about how easy it is for humans to express ideas on top of the abstraction. The computer couldn't care less which of a nearly infinite number of possible abstractions you choose, but only a couple make any sense for the humans who have to create new instructions with them.
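To make that indifference concrete, here are two hypothetical Python wrappers around the same instructions. The machine treats them identically; the humans who have to build on them do not:

```python
# Two abstractions over the same underlying instructions. The
# computer is indifferent between them; human readers are not.

def p(a, b):  # terse and opaque, but the machine doesn't mind
    return [b(x) for x in a]

def apply_to_each(items, transform):
    """Return a new list with transform applied to every item."""
    return [transform(item) for item in items]

# Identical behavior, very different to express ideas on top of:
print(p([1, 2, 3], lambda n: n * 10))              # [10, 20, 30]
print(apply_to_each([1, 2, 3], lambda n: n * 10))  # [10, 20, 30]
```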
I've been thinking about this not only because I've been applying it at work, but because I've been interviewing more, and running into more of the kinds of frustrating things I complained about a couple of posts ago. Because when I say that humans are better at abstract thinking, I now have to qualify it: SOME humans are good at abstract thinking. Based on interviews I've done, stories in the news, and other interactions, the average human seems to be pretty bad at it.
I'm also starting to wonder if this is part of the possible split I've mentioned before. Some of the divide must be education-based, but it also seems likely that there are certain personalities that just aren't suited to abstract thinking. With computer programming as a driving component of more and more new industry, abstract thinking is in no way optional if you want to work in automation. So are the people who don't think in the abstract destined to drift lower and lower on the tech curve? I have to think yes, although the pace of this effect seems uncertain. It probably depends on how much of it is biological and how much is environmental.