Innovation in the computer industry — Personal recollections (part 1)
I’m just finishing up the excellent book by Walter Isaacson, who wrote the 2011 biography of Steve Jobs. The book is entitled “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution.” The basic thesis of the book is that innovation is not really about the lone genius inventing in a garage. Rather, it’s collaboration among a variety of people with varying skill sets, dispositions, and intellects, along with the right environment, that sparks innovation.
However, what has enthralled me in reading this tome is that my career in the computer and software engineering business paralleled much of this development of computers, networking and social media. Though I can hardly take any real credit for the advances described in the book, I had a front row seat to much of what Isaacson describes. Also, I heartily agree with his conclusions as to the recipe for successful collaboration and innovation. Sadly, my most innovative and enjoyable experiences in this industry occurred early in my career when collaboration was more valuable than protection of intellectual property.
So, for the balance of this post and a few posts to come, I thought I’d recount some of the parallel experiences I had. If you don’t care, stop reading now, but I’d still recommend the book if you’re interested in how technology has developed over the past 60 or so years.
Awakening to computers – 1967-1978
I was a geek in high school, but not a scientific or technical geek. I was a band geek. Music was my life and I was planning to make it my career. My experience with computers was limited to “Don’t fold, spindle or mutilate” on what I was later to find out were punch cards. Our high school dedicated a large room to a mainframe computer that was used to generate schedules and report cards and to keep academic records. There was also a home-brew computer in one of the music practice rooms: you toggled switches on the front panel to input programs and data, and read the results from a series of lights on the same panel. Frankly, I didn’t have any interest in that computer at the time, but I remember it fondly, because around 1980 I was working at ARCO Oil and Gas in Dallas and we had a minicomputer that was not working. I/O was completely non-responsive, and to debug the problem I needed to enter a diagnostic via the switches on the front panel and read the result from the lights. I’ll never forget the kick I got from manipulating the computer at such a primal level. I suspect that experience planted the seed that eventually grew into operating system design and development.
After receiving my Bachelor of Music degree and spending several years performing and teaching, I went to graduate school to study music theory and conducting. However, just before I matriculated, I worked at Radio Shack when the TRS-80 came out. I talked about that experience in Wow, was it only 35 years ago???.
Grad School – 1978-1980
As part of my Master of Music degree at what is now the University of North Texas, I had to take a couple of courses in an unrelated minor. A good friend of mine suggested that I take a couple of computer programming courses and BANG! I was hooked. I figured that if I changed my major, I could do something that I enjoyed and also get paid well to do it (unlike music). This was in 1979.
While at North Texas, we had an IBM 360 Model 50, a mainframe computer developed in the ’60s. It was huge and old, with lots of flashing lights and switches. It had 1 MB of RAM, which lived in an adjacent box the size of a couple of commercial refrigerators set back-to-back. We used 80-column punch cards to submit batch jobs to the computer, getting a couple of runs in per session in the computer room. We had maybe 25 keypunch machines, which were always busy. One competed to get access to a keypunch, then competed again to get jobs run. The documentation for the computer and its attendant software was lined up along one wall, probably 20-30 feet of it.
We also had a lab where the department had built a number of rudimentary personal computers (though no one called them that until 1981), running Motorola 6800 microprocessors. We used this lab for assembly language and systems programming courses. It was cool to make the system dance to our commands. Much more fun than the mainframe! Finally, we had a timesharing machine used for creating typeset papers with a rudimentary markup language. I have to say, while trying to “program” a paper, I definitely saw the value of what was to become WYSIWYG (“what you see is what you get,” which is how MS Word works).
Oilfield Automation – 1980-1981
After taking several courses and becoming a Teaching Fellow, I left North Texas without my Master’s and went to work for ARCO Oil and Gas in Dallas, TX. Though most of my peers who started around the same time ended up as COBOL programmers on the IT side, I asked to be assigned to maintain, and later design and implement, software managing various processes in the oil fields on the North Slope of Alaska at Prudhoe Bay.
We used MODCOMP minicomputers, which were state-of-the-art for industrial control, but the operating software was remarkably primitive, even in 1980. We programmed in FORTRAN. To create or modify a file, one needed to add the file and its data to the OS image and reboot. Though we could use a dumb terminal to interface with the computer, it was at a very low level. The advantage of these computers was not ease of use; rather, it was their speed and predictability of execution, which are crucial in real-time environments.
Funny story: each MODCOMP computer sat in a small, refrigerator-sized cabinet and was largely hand assembled, using some integrated circuits as well as a variety of discrete components (e.g., transistors, capacitors, etc.). Several circuit boards contained the CPU, memory and other computer functions, as well as power supplies. There were thousands of components. ARCO needed to ship a computer to our operations center in Anchorage, and normally that would have been done via the Flying Tigers air freight service. For some reason, Flying Tigers couldn’t take the shipment, so some bright corporate bulb decided to ship it over land, not realizing that the primary road across Canada into Alaska was about 600 miles of gravel, not asphalt. When the computer arrived, it was unpacked and opened up, only to find all of the components on the floor of the cabinet. They had all shaken loose during shipment.
What happened to the computer? It was shipped back to MODCOMP in Ft. Lauderdale (presumably via air), where it was painstakingly put back together by hand. We ended up using it for development in Dallas because it never was quite right after that incident, sorta like a boxer who has taken a few too many hits to the noggin. Occasionally, 2+2 didn’t equal 4, which is troubling in an industrial control setting. However, it worked well enough for development.
We also had a series of IBM 370 Model 3090 mainframe computers, linked together in a primitive network along with a series of remote departmental computers (IBM’s version of the minicomputer). We used monster-sized 3270 terminals in place of keypunches, though running programs was still a batch process, coded and submitted via the terminals.
One time, I needed to submit some actual punch cards. I had a file from the MODCOMP that I needed to get into my home directory on the mainframe. The file’s data was on a set of punch cards, so I created a JOB card (the first card of the deck, indicating that the rest of the cards were all part of this particular job). I took the deck to the computer room and placed it in a bin to be entered. When I got up to my cubicle, the system was down. 20-30 minutes later, it came back and I looked: no file. So, I went down and resubmitted my deck, came back, and the system was down again; again, the file wasn’t there when it came back. So, I went downstairs to the computer room one more time, and there was a sign on my deck to see an operator. It turns out I’d double punched the 2nd column of the JOB card. IBM had a Documented Known Error (or DKE) that a double punch in the second column of a JOB card would bring down the computer. IBM hadn’t fixed it because “no one uses punch cards any longer.” What’s worse, the primitive networking couldn’t handle one computer crashing, so all the mainframes AND the departmental computers crashed each time, requiring a complicated reboot sequence. I was told that it cost ARCO $1M for every minute the computers were down, or somewhere between $40M and $60M for my mis-punch.
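For readers who never handled cards, the fatal “double punch” is easy to picture with a little modern code. The sketch below is purely illustrative: a card is modeled as 80 columns, each column a set of punched row positions (rows 12, 11, and 0-9), and a column whose punches match no known character is invalid, which is exactly what a stray second strike produces. The `PUNCH_CODES` table is a tiny, hypothetical subset of the real keypunch code, included only to keep the example self-contained.

```python
# Illustrative sketch: model a punch card as a list of columns, where each
# column is the set of punched row positions (rows 12, 11, and 0-9).
# PUNCH_CODES is a tiny, hypothetical subset of the keypunch character code.
PUNCH_CODES = {
    frozenset({0, 1}): "/",    # "/" is the 0-1 punch combination
    frozenset({1}): "1",       # digits are single punches in rows 0-9
    frozenset({2}): "2",
    frozenset({12, 1}): "A",   # letters combine a zone punch with a digit punch
    frozenset({11, 1}): "J",
}

def decode_column(punches):
    """Return the character for a column's punch set, or None if unrecognized."""
    return PUNCH_CODES.get(frozenset(punches))

def find_bad_columns(card):
    """Return (column_number, punches) for every non-blank column that fails
    to decode, e.g. one where a second character was punched over the first."""
    return [(col, punches)
            for col, punches in enumerate(card, start=1)
            if punches and decode_column(punches) is None]

# A JOB card began with "//", i.e. two "/" characters. Simulate column 2
# receiving a stray second punch on top of its "/":
card = [frozenset({0, 1}),       # column 1: "/"
        frozenset({0, 1, 2})]    # column 2: "/" overlaid with a "2" punch
print(find_bad_columns(card))    # reports column 2 as invalid
```

A real card reader, of course, had no such validation path for this case in the OS, which is why the garbled column took the whole system down rather than just rejecting my deck.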
My time working on oil field automation was fantastic. We were using technology in ways that would be basic today, but were state-of-the-art at the time. So much so that the operators on the North Slope didn’t trust the digital version, preferring the analog dials and switches on the wall. ARCO controlled the processes from computers residing in Anchorage, “networked” by an 800-mile series of microwave links from Anchorage up to the North Slope, with a satellite link as backup. I also got to visit the operation on the North Slope, which was one of the most exciting business trips I’ve ever been on. It was a remarkable (and dangerous) operation. Certainly beat writing accounting programs!
Next: What do submarines, air traffic control and operating systems have in common?