The Beauty of Software

And what is done, and what is not done?

“And what is good, Phaedrus, and what is not good? Need we ask anyone to tell us these things?” Robert Pirsig, Zen and the Art of Motorcycle Maintenance

I am going to diverge a little bit from the “simplicity” theme of this blog and talk about how I know when I have completed writing some code. In short, the code appears beautiful to me when it is complete. How then, to describe my experience of beauty to you?

Let me point in the rough direction of what I mean by “beauty”. It is sort of like Quality, a very high degree of excellence in something. It is sort of like the Greek Arete, here meaning “excellence of any kind”. It is sort of like your experience when you see a work of art that touches you, or a landscape vista that momentarily overwhelms you with beauty. You yourself know when you experience something beautiful, and yet somehow it is difficult to describe that beauty.

I can, however, point to aspects of the beauty I see in software code when it is complete. These aspects are not the experience, of course, they are more like distorted shadows cast by the experienced beauty.

Several aspects are related to the anticipated experience of the next person to work with my code. Am I being as kindly as possible to the next developer? Is the code flow laid out in such a way that it will make sense to the next developer? Is the code commented and documented in a meaningful yet terse manner? Are there unit tests with nearly 100% coverage of the code? In short, would I feel revulsion or gratitude if this code landed on my head? There’s a saying in programming that captures a mindset I find useful: “Code as if whoever maintains your code is a violent psychopath who knows where you live.”

There are other aspects of software “beauty” as well. Is there anything missing, the presence of which would make a difference? Is there any “fat” or cruft in the software that is really not necessary? Did I follow the Don’t Repeat Yourself principle? Did I create orthogonal software where algorithms/functions/operations are independent, with clean, well-defined contractual boundaries?
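To make those last two questions a bit more concrete, here is a tiny Python sketch. The record format and the function names are invented for illustration, not lifted from any real project of mine:

    # A rough sketch of Don't Repeat Yourself plus orthogonality.
    # The record format and function names are invented for this example.

    def parse_record(line):
        """One well-defined job: turn a 'name,price' line into a (name, price) pair."""
        name, price = line.split(",")
        return name.strip(), float(price)

    def apply_discount(price, rate):
        """A separate, independent job: the pricing policy lives here and nowhere else."""
        return round(price * (1.0 - rate), 2)

    # Because each function has one job behind a clean boundary, neither needs to
    # know how the other works -- that independence is the "orthogonal" part, and
    # keeping the pricing policy in exactly one place is the "don't repeat yourself" part.
    for line in ["widget, 10.00", "gadget, 24.50"]:
        name, price = parse_record(line)
        print(name, apply_discount(price, 0.15))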

Good heavens, as I write this, I see the list of aspects of software beauty could go on nearly forever. Let me stop there and simply say that I know that I am done with a bit of software code when I look over the code and feel a visceral “click” telling me this code is indeed complete.

Programming: Managing Two Point Six Billion Switches

Part the First: The fabric of a microprocessor.

There exists this lovely huge menagerie called “programming”. This thing covers an unspeakably large multitude of concepts, like languages, compilers, interpreters, how a geek understands a program, how a computer understands a program, performance, parallelism and on and on and on. I am probably not aware of all of the things that are a part of “programming”. I sense I may be certifiably nuts to even attempt to cover All Of It, and so what? It’s a worthy goal.

Glinda the Good Witch says she has always found it best to start at the beginning. Excellent advice. Let’s start with the transistors in a microprocessor and relate transistors to the binary machine language of ones and zeroes that microprocessors can execute.

A transistor is simply an on/off switch. Much smaller than your dining room light on/off switch, to be sure, but it does the same thing. Switch on, light on; switch off, light off. Starts off deceptively easy, eh?

The Intel 8-core Core i7 Haswell-E (translates loosely as “8 microprocessors lumped together into one ginormous, powerful microprocessor”) has 2.6 billion transistors. No worries, I can’t count that high, either. Let’s build up to that number. Let’s say you have a house with 26 switches controlling lights. All of the lights are independent – each light is controlled by its very own switch, so each of the 26 lights may be on or off. Given the distinct on or off state of each light, your house with 26 switches gives you 2 to the 26th power, or 67,108,864, unique combinations of your 26 lights. Each unique combination could be assigned a meaning. For example, all 13 upstairs lights could be on and all 13 downstairs lights could be off. You could say this means “everyone is going to bed right now.” You could extend your possible combinations of lights with 99 very friendly neighbors, each of their houses having 26 independently controlled lights. Now we’re up to 2,600 light switches (26 in each of 100 houses), or far too many combinations of light switches to count. If you added one hundred million really friendly neighbors, each of their houses having 26 independently controlled lights, you’d be up to 2.6 billion light switches. A rather unmanageable number of unique combinations, to be sure.
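If you’d like to check that arithmetic, a few lines of Python will do it. The house counts below are just the ones from the paragraph above:

    # Checking the light-switch arithmetic above.
    switches_per_house = 26

    # Each switch is independently on or off, so the combinations double with
    # every switch you add: 2 * 2 * ... * 2, twenty-six times.
    print(2 ** switches_per_house)            # 67108864 combinations for one house

    # 100 houses of 26 switches each:
    print(100 * switches_per_house)           # 2600 switches
    # ...and 2 ** 2600 distinct combinations, a number roughly 783 digits long.

    # To reach the Haswell-E's 2.6 billion transistors you need 100 million houses:
    print(100_000_000 * switches_per_house)   # 2600000000 switches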

Because they’re human, microprocessor designers don’t deal with all 2.6 billion transistors in a lump. They break these up into groups of functional blocks of transistors and assign meanings to the on/off transistor state of each individual functional block. A block might be an arithmetic unit, a memory cache or three, a memory management unit, an instruction processing unit, and so on. It’s the last one, the instruction processing unit, that relates most directly to programming. Remember programming? This is a blog entry about programming.

A geek-readable program is made of English-ish instructions. These instructions get translated into a binary machine instruction language of ones and zeroes. The instructions are made up of, let’s say, 64 ones and zeroes. The instruction processing unit receives each set of 64 ones and zeroes, and each one or zero turns one of 64 instruction processing unit transistors on or off. Like your house, each combination of on/off transistors means something. For example “0100010101000101010001010100010101000101010001010100010101000101” might mean “get data from there, add it to data gotten from here and cough up the result, buster”. In reality, those 64 on or off instruction processing unit transistors turn another set of transistors on or off, and so on and so forth until the addition actually happens in the arithmetic unit. Details, details.
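If it helps, here is the same idea in a few lines of Python. The bit layout and the meaning of the pattern are entirely made up for illustration; a real Intel instruction format is far messier:

    # A toy illustration of "a combination of 64 ones and zeroes means something."
    # The encoding below is invented for this example -- it is NOT a real format.
    instruction = "0100010101000101010001010100010101000101010001010100010101000101"

    # Pretend the first 8 bits name the operation and the rest name data locations.
    opcode = instruction[:8]
    operand_bits = instruction[8:]

    TOY_OPCODES = {"01000101": "ADD"}   # hypothetical: this pattern means "add"

    print(TOY_OPCODES.get(opcode, "UNKNOWN"), "using operand bits:", operand_bits)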

Ok, that was a lot of stuff. Let’s end this part of programming with: computer programs somehow get translated into machine binary language. The machine binary language (ones and zeroes) turns microprocessor transistors on or off in a combination that (hopefully) makes the microprocessor do what I want it to do.

Left Brain vs. Right Brain Programming?

Serial and parallel programming styles.

Until the past decade or so, most software programming was serial. That is, the code was meant to execute one statement at a time. That was fine; serial code performance kept increasing as long as microprocessor clock rates kept increasing.

Microprocessor clock rates stopped increasing a few years ago, and so people turned to parallel programming to extract more performance out of their application programs. One central idea in parallel programming is that independent data and instruction streams may be divided up and executed simultaneously. Most of the underlying computer processors (CPUs, GPUs and FPGAs) support this style of programming. What I find puzzling is that the code development environments do not seem (to me, at least) to fully support a parallel programming style.
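Here is a minimal Python sketch of that central idea. The work being done and the data chunks are made up; the point is only that the chunks are independent, so they give the same answer whether they run one at a time or all at once:

    # Independent chunks of data, executed serially and then in parallel.
    # The crunch() function is a stand-in for real, expensive work.
    from concurrent.futures import ProcessPoolExecutor

    def crunch(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        # Four chunks of data -- no chunk depends on the result of another.
        chunks = [range(0, 1_000_000), range(1_000_000, 2_000_000),
                  range(2_000_000, 3_000_000), range(3_000_000, 4_000_000)]

        # Serial style: one chunk at a time.
        serial_results = [crunch(c) for c in chunks]

        # Parallel style: the same chunks handed to separate processes at once.
        with ProcessPoolExecutor() as pool:
            parallel_results = list(pool.map(crunch, chunks))

        print(serial_results == parallel_results)   # True -- same answer either way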

I’m speaking from personal experience here. I know I absolutely have to draw out my parallel code on a large piece of paper and only then can I implement pieces of the code in a traditional text-based development environment. Much to my surprise, I’ve seen papers in cognitive neuroscience concluding our human brains use the speech center to work things out in a serial fashion and use the visual center to work things out in a parallel fashion. It sure seems like a useful parallel programming environment would fully support coding using visualization instead of coding using text characters from a computer language. To be fair, this may be confirmation bias on my part. Or perhaps a programming environment using only imagery is a bit too far off in the future. I do not know, as I tend to have many more questions than answers. And I’d love to be able to create parallel programs from pictures.

 

Pay No Attention to the Server Behind the Curtain

Computer servers, hidden scaled complexity that works.

Servers? Desktops? They’re both computers, right? Well, yes, in the same sense that an elephant and a mouse are both mammals.

You’re familiar with a desktop or a laptop computer. What you may not know is there are uncounted ranks and ranks of computer servers behind the scenes providing you with, well, services. Servers are the things you reach over the network when you’re browsing to a web site or looking at a map online or sending an email. To even get to these servers, you unknowingly use other servers just to convert something like “techdecrypted.wordpress.com” into a network address that other servers can use to route your request to the right server. There’s this whole infrastructure of servers upon servers upon servers that all manage to work together to get you where you want to go on the internet.
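That first name-to-address step is easy to see for yourself. A couple of lines of Python ask the same DNS infrastructure your browser quietly relies on (the numeric address you get back can vary over time):

    # The "convert a name into a network address" step, done explicitly.
    # Behind this one call sits a whole chain of DNS servers.
    import socket

    hostname = "techdecrypted.wordpress.com"
    address = socket.gethostbyname(hostname)   # asks the DNS infrastructure

    print(hostname, "->", address)   # prints a numeric IP address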

There’s not just one server waiting for you to contact it over the internet, either. For example, one estimate from 2012 stated Facebook had 180,000 servers. That’s 180,000 computers working together just to do Facebook things for you and everyone else. That estimate does not count all of the other servers working for you between Facebook and your desktop computer. When I think of it, I find myself astonished at how well our internet ecosystem actually works.

I can speak from experience about one server building that supplied high-performance computing services to a University. They had 5,500 servers on the middle floor of a three-story building. The bottom floor was a refrigerated air intake for the entire middle floor; the top floor was the hot air exhaust from the entire middle floor. What really blew me away was that the air refrigeration units and internal building fans had not one, but two backup electrical generators. That seemed excessively paranoid until I was told that if the cold air stopped being supplied to the middle floor with its 5,500 servers for longer than 30 seconds, the microprocessors in all of these servers would be destroyed by their own waste heat. Just turning off all 5,500 servers immediately would not work since all of the existing heat has to go somewhere. Ohhhh. Maybe add a third backup electrical generator?

So, yes, servers on the internet are computers like your desktop computer, but on a scale difficult to fully grasp. And…hidden behind the curtain.

Computer Hardware vs. Software

Maybe a computer is a hardware and software system?

Once upon a time, there were three main types of computer people: the scientists, who used the Fortran programming language; the business people, who used the Cobol programming language; and the computer hardware designers. I was in the latter group.

The computer hardware designers used various extremely low-level assembly languages (and later the C programming language) to make their hardware designs work. Cobol and Fortran are high-level, rather abstract computer languages that require several layers of software infrastructure between them and the bare metal computer. These layers of system software infrastructure were created by the hardware designers at first. My point here is the computer hardware designers had to write the software to make their creations work for the scientists and business people.

Unsurprisingly, I came up through the ranks of the computer profession thinking that hardware and software development were two sides of the same coin. I viewed a computer as a system of hardware and software with a very fuzzy and malleable dividing line between the two.

I woke up one day and realized that somehow computer hardware and software had gotten separated and the two groups often viewed each other with distrust and even disdain. This often resulted in much finger-pointing and blaming when things at the hardware/software boundary did not work.

I tried sharing my view with people that a computer is a system that requires thoughtful design tradeoffs between its hardware and software aspects. This view was not well-received. The hardware/software distinction became even fuzzier (in my mind, at least) when hardware developers started using HDLs (Hardware Description Languages) to create their hardware instead of logic schematics. I’ve never been able to convince a hardware person that designing with an HDL looks a lot like software programming.

I returned to University and earned a degree in Computer Science in 1996, thinking that I would then be validated on both sides of the hardware/software coin. Much to my amusement, this software validation caused the hardware developers to distrust my views.

No worries, it is what it is. Hardware and software developers still ply their trade in separate groups, and I still hold firm to the idea that a computer is a system that has two overlapping aspects to it. I remain amused, though, that all (most?) of my profession makes an arbitrary distinction between computer hardware and software.

1024 Chickens

Re-purposing GPUs to solve a certain class of compute problems.

Seymour Cray once said “If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?” Seymour was comparing using two supercomputers to using a large cluster of networked microprocessors for computation. Since Seymour was the father of supercomputing, you can guess which he would choose. I’d answer him “it depends.”

So far in this blog, I’ve talked about things with which I have fairly extensive experience. I’m going to go out on a limb here and talk about something I’ve never actually programmed (yet): using GPUs as computer processors. Today, these devices are certainly used as processors in the same sense that microprocessors, FPGAs and embedded microcontrollers are used as processors. I did not want to leave GPUs out of the discussion, even though I have only a theoretical understanding of them.

GPU stands for “Graphics Processing Unit.” For many years this device was the thing in your computer that created the images on your computer screen. It started out as an application-specific device to assist a microprocessor with displaying graphics. You see, once upon a time, a computer display was text-only. No windows, graphics, images, mouse pointer, nothing. Text. Humble beginnings, no? A microprocessor acting alone could not do its usual general-purpose thing and do graphical work at the same time. So GPUs were created to offload the graphics work from the microprocessor, which is why your computer desktop has images and icons and windows and whatnot.

When microprocessor performance hit a brick wall a few years ago, some clever people realized the GPU could be used to solve a certain class of processing problems. A single GPU consists of thousands of moderately powerful graphics instruction processors, each executing the same set of program instructions on different sets of data. Look at your computer screen and imagine it is divided into many independent regions of graphics data. All of the independent regions of graphics data have common image processing instructions performed on them, like shading, blending, interpolation and the like. The data is different, the instructions are the same. Some application programs have the same characteristics, in that they have to do the same thing over an enormous number of independent chunks of data. Massively parallel execution, as long as each moderately powerful graphics instruction processor in a GPU is executing the same program on different independent sets of data. So maybe 1024 chickens are better than two strong oxen for certain types of problems. Not surprising, really, as a hammer is great for certain problems while a socket wrench is great for other types of problems. This is why we have microprocessors, FPGAs, embedded microcontrollers and GPUs used as processors. It always depends on the problem you’re trying to solve.
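This is not GPU code, but here is a CPU-side Python sketch of that “same instructions, different data” idea using NumPy. The screen-sized array and the brightness operation are invented for illustration:

    # Same instruction, different data: a CPU sketch of the GPU style using NumPy.
    import numpy as np

    # Pretend this is a screen full of pixel brightness values between 0.0 and 1.0.
    image = np.random.default_rng(0).random((1080, 1920))

    # One operation -- "brighten by 20%, but never exceed 1.0" -- applied to every
    # pixel at once. A GPU spreads exactly this kind of work across thousands of
    # little processors, each running the same instructions on its own pixels.
    brightened = np.clip(image * 1.2, 0.0, 1.0)

    print(image.shape, "->", brightened.shape)   # (1080, 1920) -> (1080, 1920)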

The Invisible Computer

The secret lives of embedded processors.

Processors called embedded microcontrollers are truly everywhere, and they’re hidden from your view. There are probably many of these just in your car, controlling your anti-lock brakes, radio, fuel injector gas/air mixture, your console display, and so on. I’ve seen these things in sewing machines, microwave ovens, CD/DVD players, smart thermostats, home weather stations, stereos, printers, lawn sprinkler controllers, security systems, mobile phones… you get the idea. I’ve heard estimates that the embedded computer market is several hundred times larger than the desktop and server computer markets combined.

Now, before I go on, I want to say I am not completely convinced this massive proliferation of embedded processors is a good idea. To be fair, a very large part of my career has been spent designing embedded systems and programming these devices. However, I got cranky back in 1984 when I realized my car’s gas pedal was no longer directly mechanically connected to my engine. Instead, my gas pedal moved a variable sensor that in turn suggested to the fuel injector controller that I want to go at a certain speed, and this controller metered the fuel/air mixture to the engine appropriately. So maybe I have secret Luddite leanings. Regardless, I accept these beasties are here to stay, simply because they are so inexpensive and so very useful. Great fun to program, too, but that’s just because I’m an unrepentant geek.

In The Heart of the Beast, I pointed out the microprocessor in your desktop computer is so useful to you because it can do so many things. On the other hand, an embedded microcontroller is programmed to do one thing really, really well. Your car’s anti-lock brake processor can’t run a spreadsheet program. It’s a good thing, too, as safety dictates you want that little processor to focus solely on monitoring your brakes. That’s why these devices are invisible to you: they do one thing well, and your sewing machine, car radio, smart thermostat and so on just work (usually) like you expect them to work. You know how you press the buttons on your car radio or smart thermostat and things happen? You’re actually programming these embedded devices. It’s just that the set of programming commands is extremely limited by the specific function of each device. There is no point in having a radio tuning button on your sewing machine.
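To show the shape such a single-purpose program takes, here is a desktop-Python sketch of a thermostat-style control loop. It is not firmware for any real device; read_temperature() and set_heater() are stand-ins for the hardware access a real microcontroller would do:

    # The general shape of a "do one thing really, really well" embedded program:
    # an endless loop that reads a sensor, makes a decision, and drives an output.
    import random
    import time

    SETPOINT = 20.0   # degrees C, the "program" the user entered via the buttons

    def read_temperature():
        """Stand-in for reading a real temperature sensor."""
        return 18.0 + random.random() * 4.0

    def set_heater(on):
        """Stand-in for switching a real relay on or off."""
        print("heater", "ON" if on else "OFF")

    for _ in range(5):                      # real firmware would loop forever
        set_heater(read_temperature() < SETPOINT)
        time.sleep(0.1)                     # real firmware would sleep to save power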

Many (most?) of these embedded microcontrollers are inexpensive for three reasons. First, they are designed by the device manufacturer to be low cost; second, they have to do only one thing very well; and third, they are sold in huge quantities. A desktop computer is general purpose, flexible and has much lower sales volume, so its microprocessor costs a lot more. I did a faucet head lawn sprinkler controller once for a company where the microcontroller device cost the company about US$0.23 each in quantities of millions. Amazing, really.

As an aside, there’s an interesting middle ground that has appeared somewhat recently: smart mobile phones. Old mobile phones had an embedded processor or three in them, and these executed all the functions required to do phone things. Smart mobile phones do the usual phone things and can also load other, non-traditional phone functions. Mine even has a spreadsheet, which I still find a bit startling. Is a smart mobile phone an embedded processor or a general purpose microprocessor? I’m going to answer “yes” and walk away from that question.

So. Embedded microcontrollers are cheap, function-oriented and everywhere in your world. Oh, and invisible.