
The First Personal Computer Wasn’t a Phone

The following excerpt comes from my newest book, Measuring History: How One Unsung Company Quietly Changed the World.

If there’s one theme that runs deep within the founding DNA of National Instruments, it is this: “Nothing beats dumb luck.”

Long-time CEO Jim Truchard is credited with imparting the belief during his four decades at the helm, and the principle certainly caught on, especially with early-generation employees and executives. “It’s basically a pleasing and modest way of saying we’ll succeed with hard work and luck, but to be great, we also need good timing,” explains Steve Rogers, who was hired in 1984 and serves as one of LabVIEW’s chief architects. “We were lucky to grow when we did.”

Almost everyone I spoke to internally for this book repeated the refrain: “Nothing beats dumb luck. This would have never worked had computers not become affordable and accessible to everyone. We were in the right place at the right time.”

What was that place and time? For those who grew up with smartphones and touchscreens in their hands, it’s important to understand that the “personal computer” (or PC) revolution of the 1980s, 1990s, and early 2000s unfolded in a very different era. Society had no idea that bulky desktop and laptop computers would eventually merge with cellphones into pocket-sized supercomputers, so the machines of that era were known simply as “personal computers.”

They were personal because, unlike the wall-sized mainframes that groups of researchers and scientists punched numbers into—or the still-too-large-for-a-desktop “minicomputers” that Truchard, Kodosky, and Nowlin made their first interface for—these smaller computers could not only be used by everyday individuals, they could actually be afforded by them.

Like many people at the time, however, Kodosky was skeptical about just how far the PC would go. “We all intuitively felt that most any computer could be a viable instrument controller and that we should support them all,” he says. “But the PC was a perfect example of the Innovator’s Dilemma. It was small and underpowered, but on a much steeper growth curve because of its lower cost and greater accessibility.”

Truchard understood this “Innovator’s Dilemma” better than most, well before Harvard professor and bestselling author Clayton Christensen coined the term. The dilemma is this: businesses must regularly decide between catering to customers’ current needs by doing what they’ve always done (in this case, selling complex and expensive boxed instruments used for testing) and adopting new innovations and technologies that might disrupt their own business in order to answer future needs (faster, more accurate, and more affordable testing using PCs). As Henry Ford famously put it, “If I had asked people what they wanted, they would have said faster horses.”

On top of that, the PC was the first computer to have “shrink-wrapped” or otherwise ready-to-run software that didn’t have to be customized or updated for each use. In other words, you just inserted a floppy disk with the desired software program on it, and the program or “application” would work for anyone who had a compatible computer and the disk itself. Viva la digital!

But the computer as we know it today is actually rooted in innovations from the 1830s. In that decade, an Englishman named Charles Babbage began designing mechanical “calculating” or “analytical” engines to help solve complex mathematical equations. Now known as the “father of computing,” Babbage designed a giant metal contraption that looked more like a sideways music box than the electronic devices and screens we use today. Nevertheless, his work became the basis for how we think about computers, and his name was even used to sell software in popular American malls during the 1980s (via the Babbage’s retail chain).

In the late 1930s and early 1940s, two Americans named John Atanasoff and Clifford Berry built the first electronic digital computer at Iowa State College. The US military soon funded much bigger first-generation machines, most famously ENIAC at the University of Pennsylvania. That massive machine weighed 30 tons, equal in weight to roughly fifteen full-sized cars today. But despite its gargantuan size and then-impressive accomplishments, ENIAC could do little more than basic arithmetic and was less powerful than a common calculator is today. Furthermore, it had no operating system, could only perform one task at a time, and reportedly dimmed the lights in nearby sections of Philadelphia every time it was turned on.

“Second generation” computers were first developed after World War II, initially in US military projects, and replaced lightbulb-like vacuum tubes with transistors, which are smaller and more reliable semiconductors. In 1951, the first commercial computer went on sale: the UNIVAC, or Universal Automatic Computer. International Business Machines (IBM) entered the market two years later. During this “wild west” period of computing, more than 100 different programming languages appeared, and computers gained memory, operating systems, and storage media that relied on tapes and disks.

Then, in the late 1950s, a miracle happened. Jack Kilby and Robert Noyce (aka “the mayor of Silicon Valley”) independently created the integrated circuit, which is more commonly called a “computer chip” or “microchip.” These are the rectangular, bug-like pieces of plastic and silicon that replaced individually wired transistors and are used in virtually every piece of electronic equipment manufactured today, including computers, mobile phones, home appliances, cars—everything. Their invention, later popularized by Intel, made computers both cheap and fast, and it continues to make the technology smaller and better with every passing year, if not month. Like Truchard and Kodosky, both Kilby and Noyce were inducted into the National Inventors Hall of Fame.

The integrated circuit also ushered in the “third generation” of computing, the one we still find ourselves in today. Although computers remained large and mostly out of reach throughout the 1960s and 1970s, things began to change around 1980. Apple, IBM, and others—along with Microsoft operating systems—released important “microcomputers,” or “personal computers,” capable of running many different programs for a variety of interests. These machines were affordable, connected to screens or “monitors,” and could run a lot of promising software. But their text-based operating systems were off-putting to all but the geekiest, most curious, and often younger users.

In 1984, things would get much easier for those wanting to tap into the power of affordable personal computing. In January of that year, Apple released the Macintosh, the first mainstream computer to adopt a somewhat controversial graphical user interface (or GUI) and a “mouse” pointing device instead of a keyboard-only command-line interface for navigating, updating, and changing computer files and programs. First pioneered by Xerox, this is the visual style of computing we still use today, whether on a desktop computer, laptop, or smartphone.

While early PCs running Microsoft DOS (Disk Operating System) would proliferate for several more years, eventually nearly all computers would embrace full-color GUIs and “mice” after the breakthrough release of Microsoft Windows 3.0 in 1990. Throughout this transition, many other revolutionary innovations appeared, including local area networks (LANs), which allowed individual PCs within the same or nearby buildings to talk to each other, and wide area networks (WANs) such as “the internet” and its “world wide web,” which united computers and people from all around the world.

All of these innovations, along with increasingly smaller “laptop” computers, consolidated programming languages, and freely available software such as Linux (itself modeled on UNIX), culminated many years later in the touch-based, smartphone-based, app-based, and speech-based computing we know and love today, the kind that largely took root at the start of the 2010s.

Of course, when most people want to get “real work done,” they still reach for laptops, full-size keyboards, “mice,” or desktop computers attached to full-size monitors for maximum screen space. The same is true for engineers, scientists, and researchers as it is for artists, business users, and consumers.

But as modern computing was taking shape over the last half century, competing and influential ideologies emerged in the 1980s and 1990s that are still being developed and sometimes argued over today. In one corner, you have the “closed” Apples of the world, who believe that if you let them control the majority of the computing experience—all the way from hardware to software and all the peripherals in between—you will have a better user experience. While manufacturers like Apple aren’t usually entirely closed and do open some of their systems to many third-party applications, peripherals, and hardware makers, they are notably more closed than IBM was during the PC’s formative years in the 1980s.

Which brings us to the other corner of computing ideology, one largely defined by being “open” to the largest possible number of manufacturers, hardware makers, software companies, and peripheral makers. It is the PC approach that we so often talk about, and the one the “open market” is much more inclined to adopt. Again, the boundary lines have increasingly blurred, but there was a time not long ago when “PC compatible” was a very important selling point.

In 1985, Intel released the 80386 microprocessor (the “386” for short), a chip that could be used by a wide range of computers, including those made by IBM and Compaq, as well as Austin’s very own Dell (which was first called PC’s Limited). Thanks to increasingly affordable “PC clone” machines (some eventually selling for under $1,000), the “open” PC ecosystem would go on to dominate both business and consumer computing, whereas Apple would remain an influential and profitable—albeit niche—manufacturer of both computers and (later) smartphones and watches.

All in all, “open” PC systems are more expandable, affordable, interchangeable, and inviting to tinkerers than the closed, more limited ecosystem that Apple propagates. They’re also more susceptible to security problems, although that’s partly a function of popularity: far more hackers have access to affordable PCs than to expensive Apple computers.

Early on, National Instruments internally struggled with this dichotomy. On the one hand, like any growing company, they wanted to fish in the largest pond with the greatest number of catches. That approach had fueled their success in the PC industry with their GPIB boards for the IBM PC, the company’s first big commercial hit.

But on the other hand, PC command-line prompts were admittedly more difficult to use compared to the early Macintosh GUI, which was far more inviting to someone who didn’t major in computer science. So for National Instruments, the introduction of the promising and powerful Macintosh presented both an innovation opportunity and a business conundrum.

The GUI future had arrived, but the people cutting the checks to use a Mac for science and engineering might be few and far between, if not several years out. As Kodosky said, it was a dilemma. The PC was small and underpowered but immensely popular. The Macintosh presented a powerful opening, albeit to a smaller market (compared to the PC) that the similarly small National Instruments wasn’t quite sure of. The stakes are clear in hindsight; back then, they were still taking shape. There was a lot of uncertainty, alongside unbridled enthusiasm, about which way the world might turn. It was exciting for everyone, to be sure, and it was the first time computing was becoming mainstream not just for consumers, but for highly specialized businesses.

In that sense, the personal computing revolution had fully arrived, regardless of whether you flew an Apple, IBM, Microsoft, or (later) Android flag. People were using computers on a daily basis, not only to complete their work projects and school assignments, but to live their lives and connect with a greater number of people.

Kodosky admits that the company’s timing was undoubtedly “serendipitous.” But success was also dependent on a good idea and bending over backwards (as Kodosky had done years before over Christmas break) to support paying customers who depended on the company’s test and measurement products. “Thanks to an idea that resonated, hard work, and dumb luck in the form of the PC revolution, we were able to succeed in ways I never thought possible,” Kodosky says.

At the same time, Truchard was looking for the next big thing, especially as growth for GPIB on IBM’s PC slowed. “It took us a whole year to grow our other business to compensate for the drop in IBM orders,” he remembers. “I wasn’t the best engineer; my strongest role was an undying curiosity to find out what was going on in the market and defining a new way of solving technical problems for engineers.”

To do this, Truchard took to observing not only the external world, but also how his own engineers worked. “I call it management by walking around,” he says, often showing up at an employee’s cubicle at random and asking how their work was going. He would continue this management style for decades—well before it was cool to be an approachable CEO, and even after headquarters grew to 2,700 employees.

According to most, Truchard was never a combative or argumentative CEO. He wasn’t looking for someone to tell him “yes” all of the time or for everyone beneath him to accept whatever came down from the top. He simply wanted someone to propose a better idea or a solution to a vexing problem.

In 1983, that someone would be Kodosky, the company’s chief software engineer. That same year, Truchard took Kodosky to the world’s largest PC conference in San Francisco. “I wanted to show him how much action was on the PC,” Truchard says. After attending the conference, Kodosky knew he wanted to create a new program for engineers, but he wasn’t quite sure what it would look like. “I started thinking about it that year and jotting down ideas in a notebook,” he says. “But it wasn’t until the following year that I had the full-fledged idea.”

That idea, if not eureka moment, was entirely inspired by the release of the Macintosh computer. But it required a little help from a friend. “I’m a bit of a contradiction,” Kodosky says. “I sometimes adopt the latest technology quite early, such as graphical user interfaces, but mostly I’m a late adopter.” That same year, Kodosky and his wife visited his sister. “My brother-in-law greeted me at the door and said, ‘Come see what I bought.’” He then sat Kodosky down in front of this little tan box with a black-and-white monitor. He handed him a “mouse” and showed him how to “click” on the menu bar and open MacPaint.

“He handed me the manual, but I never opened it,” Kodosky remembers. “I quickly discovered how to use the machine and its programs simply by clicking around and trying things. This was incredible and unprecedented—with only a mouse you could visually interact and manipulate things on screen for the first time ever.”

Three hours later, Kodosky said goodbye and barely said hello to his sister. “I was smitten!” he exclaims. “Although I was a UNIX brat and PC critic at the time, I immediately went out and bought a Macintosh. I knew this was the future of computing and immediately considered it for the solution we were seeking.”

That was certainly the case, but for Truchard, the Macintosh wasn’t the future of business, at least not immediately. After all, most of the money was being spent on IBM PCs and later PC clones that would dominate over 90 percent of the market. “I had hoped Jeff could develop something for the PC,” Truchard says. “But Jeff was adamant that we release it for Mac first.” That was because the PC didn’t have the necessary graphics capability at the time, whereas the Macintosh did. When that became obvious, both men agreed that the Macintosh was the best place to start.

This was problematic not just because the Macintosh represented a smaller piece of the pie, but also because it was then viewed as “a toy” that shouldn’t be taken seriously. After all, would engineers be interested in buying said “toy” to get their work done, even if it was easier to use?

Both Truchard and Kodosky bet yes. And that was a big bet for a little company to make. They would start their next big thing on Mac, while rightly forecasting that the PC would become their eventual meal ticket (which it later did, in a big, big way). On top of that, they were making a big bet on software, which wasn’t as fashionable then as it is now. While there were certainly some software makers back then, many companies had their investments tied up in hardware.

If it weren’t for the PC, though—and by that I mean the broad definition that includes all personal computers, be they made by IBM, Apple, Dell, or others—there probably wouldn’t have been a National Instruments. Certainly not the influential and beloved engineering company it is known as today.

Of course, without the PC, we wouldn’t have the five largest tech companies that account for a whopping 18 percent of the total stock market today either, namely Amazon, Google, Apple, Microsoft, and Facebook. The world would be a very different place, arguably a much more closed and insular one than it is today. There certainly wouldn’t have been modern virtual instruments that are used to make most of the products we use every day better. That alternative might not be a dystopian world without the PC revolution, but it certainly would be a much more stunted and less exciting one.

Might quantum computing someday displace the still-ongoing PC revolution to become the next and fourth generation of computing? Will qubits replace the binary ones and zeros still being used as the building blocks of computing today? Perhaps. If not, some other innovation likely will. But for National Instruments and so many others like them—not to mention everyone living today—the PC was the window into a whole other world.

And the world was about to become a whole lot more colorful.