Programming: Managing Two Point Six Billion Switches

Part the First: The fabric of a microprocessor.

There exists this lovely huge menagerie called “programming”. It covers an unspeakably large multitude of concepts: languages, compilers, interpreters, how a geek understands a program, how a computer understands a program, performance, parallelism and on and on and on. I am probably not aware of all of the things that are a part of “programming”. I may be certifiably nuts to even attempt to cover All Of It, but so what? It’s a worthy goal.

Glinda the Good Witch says she has always found it best to start at the beginning. Excellent advice. Let’s start with the transistors in a microprocessor and relate transistors to the binary machine language of ones and zeroes that microprocessors can execute.

A transistor is simply an on/off switch. Much smaller than your dining room light on/off switch, to be sure, but it does the same thing. Switch on, light on; switch off, light off. Starts off deceptively easy, eh?

The Intel 8-core Core i7 Haswell-E (translates loosely as “8 microprocessors lumped together into one ginormous, powerful microprocessor”) has 2.6 billion transistors. No worries, I can’t count that high, either. Let’s build up to that number. Let’s say you have a house with 26 switches controlling lights. All of the lights are independent – each light is controlled by its very own switch, so each of the 26 lights may be on or off. Given the distinct on or off state of each light, your house with 26 switches gives you 2^26, or 67,108,864, unique combinations of your 26 lights. Each unique combination could be assigned a meaning. For example, all 13 upstairs lights could be on and all 13 downstairs lights could be off. You could say this means “everyone is going to bed right now.” You could extend your possible combinations of lights with 99 very friendly neighbors, each of their houses having 26 independently controlled lights. Now we’re up to 2,600 light switches (26 in each of 100 houses), or far too many combinations of light switches to count. If you rounded up one hundred million really friendly neighbors, each of their houses having 26 independently controlled lights, you’d be up to 2.6 billion light switches. A rather unmanageable number of unique combinations, to be sure.
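If you’d like to check my arithmetic (always a good idea), a few lines of Python make the switch counting concrete. Every independent on/off switch doubles the number of combinations, so n switches give 2^n of them. This is just a sketch of the numbers above, nothing more:

    # Each independent on/off switch doubles the combinations: n switches -> 2**n.
    switches_per_house = 26

    print(2 ** switches_per_house)             # one house: 67108864 combinations

    total_switches = 100 * switches_per_house  # 100 houses: 2,600 switches
    print(len(str(2 ** total_switches)))       # 2**2600 has 783 digits -- "too many to count"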

Because they’re human, microprocessor designers don’t deal with all 2.6 billion transistors in a lump. They break them up into functional blocks of transistors and assign meanings to the on/off transistor states within each functional block. A block might be an arithmetic unit, a memory cache or three, a memory management unit, an instruction processing unit, and so on. It’s the last one, the instruction processing unit, that relates most directly to programming. Remember programming? This is a blog entry about programming.

A geek-readable program is made of English-ish instructions. These instructions get translated into a binary machine instruction language of ones and zeroes. Each instruction is made up of, let’s say, 64 ones and zeroes. The instruction processing unit receives each set of 64 ones and zeroes, and each one or zero turns one of 64 instruction processing unit transistors on or off. Like your house, each combination of on/off transistors means something. For example, “0100010101000101010001010100010101000101010001010100010101000101” might mean “get data from there, add it to data gotten from here and cough up the result, buster”. In reality, those 64 on or off instruction processing unit transistors turn another set of transistors on or off, and so on and so forth until the addition actually happens in the arithmetic unit. Details, details.
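Real instruction encodings (x86, ARM and friends) are far messier than this, but a toy decoder shows the flavor of the idea. The field layout below – an 8-bit operation code, two source addresses and a destination address – is entirely made up for illustration:

    # A toy, made-up 64-bit instruction format -- NOT a real x86 or ARM encoding.
    # Layout: 8-bit opcode | 16-bit source A | 16-bit source B | 24-bit destination
    instruction = "0100010101000101010001010100010101000101010001010100010101000101"

    opcode = int(instruction[0:8], 2)    # which operation: add, load, store...
    src_a  = int(instruction[8:24], 2)   # "get data from there"
    src_b  = int(instruction[24:40], 2)  # "add it to data gotten from here"
    dest   = int(instruction[40:64], 2)  # where the coughed-up result lands

    print(opcode, src_a, src_b, dest)

Each slice of ones and zeroes is a pattern of transistor on/off states; the decoder just gives the patterns names.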

OK, that was a lot of stuff. Let’s end this part of programming with: computer programs somehow get translated into machine binary language. The machine binary language (ones and zeroes) turns microprocessor transistors on or off in a combination that (hopefully) makes the microprocessor do what I want it to do.

Left Brain vs. Right Brain Programming?

Serial and parallel programming style.

Until the past decade or so, most software programming was serial. That is, the code was meant to execute one statement at a time. That was fine; serial code performance kept increasing as long as microprocessor clock rates kept increasing.

Microprocessor clock rates stopped increasing a few years ago, and so people turned to parallel programming to extract more performance out of their application programs. One central idea in parallel programming is that independent data and instruction streams may be divided up and executed simultaneously. Much of the underlying computer hardware (CPUs, GPUs and FPGAs) supports this style of programming. What I find puzzling is that the code development environments do not seem (to me, at least) to fully support a parallel programming style.
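Here’s a minimal sketch of that central idea using Python’s standard concurrent.futures module. The sum-of-squares work function is just a stand-in for any self-contained chunk of computation, and the four-way split is arbitrary:

    from concurrent.futures import ProcessPoolExecutor

    def sum_of_squares(chunk):
        # A stand-in for any independent piece of work.
        return sum(n * n for n in chunk)

    if __name__ == "__main__":
        numbers = list(range(1_000_000))

        # Serial style: one core, one statement at a time.
        serial_result = sum_of_squares(numbers)

        # Parallel style: divide the independent data into four chunks and
        # let separate processes (ideally on separate cores) run simultaneously.
        chunks = [numbers[i::4] for i in range(4)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            parallel_result = sum(pool.map(sum_of_squares, chunks))

        assert serial_result == parallel_result

Both versions compute the same answer; the parallel version just spreads the work out. The bookkeeping – chunking, pooling, collecting – is exactly the part I end up drawing on paper first.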

I’m speaking from personal experience here. I know I absolutely have to draw out my parallel code on a large piece of paper, and only then can I implement pieces of the code in a traditional text-based development environment. Much to my surprise, I’ve seen papers in cognitive neuroscience concluding our human brains use the speech center to work things out in a serial fashion and use the visual center to work things out in a parallel fashion. It sure seems like a useful parallel programming environment would fully support coding using visualization instead of coding using text characters from a computer language. To be fair, this may be confirmation bias on my part. Or perhaps a programming environment using only imagery is a bit too far off in the future. I do not know, as I tend to have many more questions than answers. And I’d love to be able to create parallel programs from pictures.

The Heart of the Beast

How do computers do so many different things?

Computers run spreadsheets, web browsers, games, take pictures, do email and a host of other (hopefully) useful things. Computer servers, desktops and mobile devices do the many things they do because of a chip at their heart that executes instructions. These instructions are collectively known as a program. (More on programs in a later post.) This chip is called a microprocessor or a CPU (Central Processing Unit), and it executes billions of instructions per second. Each instruction is rather simple, so it takes a rather horrifying number of executed instructions to, say, open a web page in your browser. Good thing the microprocessor is really, really fast.

There are many, many ways to describe the guts of a microprocessor. A computational instruction execution engine, an uncountable number of transistors, a set of defined functional units, and a serious power hog and heat generator are a few of the ways of looking at one. For now, let’s look at a microprocessor as a set of functional units.

Let’s warp the idea of a household kitchen for a moment and view it through the lens of it being a set of functional units. You’ve got your refrigerator, freezer, sink, faucet, blender, oven, microwave, dishwasher, cupboards, toaster and so on. Each “functional unit” in your kitchen does one thing really well. A faucet is good at producing water and a toaster is great at toasting bagels. A faucet is maybe not so good at toasting bagels, and I sincerely hope your toaster does not produce water. All of these functional unit things together make up your kitchen. A microprocessor is similar in that it has math units for computation, caches for temporary storage of instructions and data, instruction execution units to execute programs, memory management units to keep memory straight (wish my brain had one), and so on. Fewer functional units in total than your kitchen, actually. A bit smaller than your kitchen, too.

The good news is that many years of experience, experimentation and observation have paid off: we have microprocessors today that can do many, many things reasonably well. A microprocessor is more of a generic efficiency apartment kitchen than an Uno’s Pizzeria and Grill kitchen. Nothing wrong with that; I don’t need an Uno’s kitchen to make dinner tonight.

The bad news is that the functional units in a microprocessor are fixed at what they are. Let’s say I needed another freezer in my generic efficiency apartment kitchen; it’s not so easy to add one. Same thing with a microprocessor. A microprocessor typically has, say, four floating point math functional units to, well, do math. If my program only needs four at a time for execution, all is well. But let’s say I have a weather prediction program I want to execute and I want to predict the weather tomorrow. A weather prediction program has a lot of math in it, as you may imagine. If I want the program to complete before tomorrow, I’d really like to have, say, a thousand floating point math functional units all running at the same time. It’s not so easy to add more math functional units to a microprocessor.
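A back-of-the-envelope calculation shows why the count of math units matters so much. Every number below is invented for illustration – the operation count and the per-unit speed are stand-ins, not measurements of any real forecast or chip:

    # Back-of-the-envelope: how long does a pile of floating point math take
    # with 4 math units versus 1,000? All numbers are purely illustrative.
    total_operations = 4_000_000_000_000      # pretend the forecast needs 4 trillion ops
    ops_per_unit_per_second = 1_000_000_000   # pretend each unit does 1 billion ops/s

    for units in (4, 1000):
        seconds = total_operations / (units * ops_per_unit_per_second)
        print(f"{units:>4} units: {seconds:,.0f} seconds")
    # 4 units: 1,000 seconds; 1000 units: 4 seconds

Same math, same program; the only thing that changed is how many units can run at once.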

A microprocessor is a good general purpose instruction execution engine that has a fixed number of functional units to do the work it needs to do. In later posts, I will touch on the nature of programs, alternative ways and means to compute stuff, programs that translate geek-readable text into microprocessor instructions and whatever else might appear in our wandering.