'Lo and Behold'
The Greatest Revolution in Communication
When was the last time you squinted through the pages of a bulky Oxford English Dictionary to look up the meaning of a word? Can you recall the last time you used a fold-out map to find your way to a place? Or can you perhaps remember the last time you bought an airplane ticket at a kiosk in an airport?
Chances are, the resources available on the internet today have completely replaced all those things in the last few years. The internet has, in fact, quietly replaced a very long list of household items over the last two decades. Today the internet is a seemingly indispensable part of life, and it brings new possibilities to new places every day. But where did all of this begin? What are the origins of this ubiquitous technology that we seem to take for granted today?
Unlike many technological innovations, the invention of the internet cannot really be credited to a single person. The internet as we know it is a product of decades of incremental innovation. And guess what? It’s still being invented. The internet we use ten years from now may be premised on something entirely different from what it is today.
As unlikely as it may sound, some scholars suggest that the internet was a by-product of military research during the Cold War. Other scholars, however, disagree. While acknowledging that the capabilities of the internet had clear military benefits, they argue that it was the product of a more organic process.
As scientific knowledge grew, libraries grew with it, creating the need for more advanced systems to manage these vast stores of information. There was a growing need for better organization and more efficient retrieval of information, and physical libraries alone could not meet it.
The earliest computers took up entire rooms and required teams of technicians to operate them. If you’ve watched the film ‘Hidden Figures’ (and if you haven’t, I highly recommend that you do), you would’ve noticed that such computers were used to perform complex mathematical calculations of times and distances for space missions. With computers consuming large amounts of space, capital, and energy, scientists sought to optimize their computing power and resource use. They began by connecting large mainframes to multiple smaller machines. These smaller machines could simultaneously send commands to the large mainframe computers, where multiple commands could be processed at once. Scientists also envisioned the possibility of institutions in different parts of the country remotely sharing their scientific research with one another in real time.
Multiple ideas that emerged around the world in the two decades after World War II formed the theoretical foundations of the internet. The fact that the development of these ideas coincided with the Cold War has led to the enduring belief that the internet emerged as a consequence of the war. While this is not entirely accurate, it is difficult to ignore the context of the Cold War and the role it played in expediting the development of the technology that led to the emergence of the internet.
In the late 1950s, the Soviet Union launched Sputnik I, the world’s first man-made satellite. While the world watched in awe, America watched with evident unease. It was not just about the race to space; the Americans had begun to wonder just how far ahead their ideological adversaries were in aerospace technology. With renewed rigor, the US government stepped up investment in advanced fields of science and technology. This included an expansion of higher education in these fields, the provision of grants for corporations, and the establishment of institutions dedicated to the advancement of aviation, aerospace, and defense. Among them were the Advanced Research Projects Agency (ARPA) and the National Aeronautics and Space Administration (NASA), both set up in 1958.
In 1962, at the height of the Cold War between the US and the USSR, the US military was conducting aerial surveillance of Cuba when it found something very disconcerting.
Unbeknownst to the Americans, their Soviet-allied neighbor had gradually become a strategic base for Soviet nuclear weapons. Concerns about an imminent attack skyrocketed, and tensions between the two nations heightened as the military forces of both countries went on high alert.
Fortunately, after a tense 13-day stand-off now known as the Cuban Missile Crisis, the USA and the USSR reached an agreement, and the USSR withdrew its missiles from Cuba.
However, concerns over potential attacks lingered. A number of questions hung in the air. What if there was an attack that destroyed a communication center? If communication networks went down, how would they coordinate their forces in different parts of the country to launch a retaliatory strike? Were existing communication systems like the telephone and the radio far too vulnerable? Worryingly, it did seem that way.
In traditional radio and telephone systems, networks relied on central transmitters to distribute information to every point on the network. If a central transmitter failed or was damaged, the entire network became vulnerable: the architecture of older networks depended so heavily on central infrastructure that all the risk was concentrated at the center. This was a real problem.
The premise for the solution was simple: a decentralized network architecture, or a network in which messages could take multiple routes. If one route hit a snag because of an attack or a technical problem, the message could take a diversion through other nodes in the network to reach its destination. This principle, one destination and multiple routes, forms the basis for how modern computer networks function even today. The possibility of multiple routes served as insurance against any attack or disruption. The threat of an attack thus had a latent effect: it led to the invention of a radical new technology for communication.
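The rerouting idea can be sketched in a few lines of code. The toy network below is entirely hypothetical, and real routers use far more sophisticated protocols; this is only a minimal illustration of the principle that a message can detour around a failed node.

```python
from collections import deque

def find_route(network, source, destination, failed=frozenset()):
    """Breadth-first search for any working path, skipping failed nodes."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for neighbour in network.get(node, []):
            if neighbour not in visited and neighbour not in failed:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no surviving route

# A toy mesh: every node has several neighbours, so no single node
# plays the role of a central transmitter.
network = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(find_route(network, "A", "E"))                # ['A', 'B', 'D', 'E']
print(find_route(network, "A", "E", failed={"B"}))  # detours: ['A', 'C', 'D', 'E']
```

Knocking out node "B" does not sever the link between "A" and "E"; the search simply finds the surviving route through "C". Only when every route is cut does delivery fail.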
Necessity was indeed the mother of invention, as per the old adage.
In the same year as the Missile Crisis, a series of memos by ARPA’s J.C.R. Licklider propounded the concept of an ‘Intergalactic Computer Network,’ in what was perhaps the first documented description of the idea of communication through networks. That same year, Paul Baran of the RAND Corporation envisaged a decentralized communication network that could continue to operate even after a nuclear strike (Ridley, 2016). The theoretical foundations of the internet had thus been laid, leading the way for a series of experiments that attempted to bring these ideas to fruition.
One such experiment was conducted in 1965 by Thomas Merrill and Larry Roberts (Banks, 2008, 181). By connecting a computer in Massachusetts to another in California using a low-speed dial-up telephone line, they created the first, and perhaps smallest, Wide Area Network (WAN). The experiment showed that time-shared computers could work well together, but it also showed that circuit-switching, which ties up a dedicated line for the whole duration of a connection, was not the most effective approach. It confirmed the suggestion made by computer scientist Leonard Kleinrock that the novel method of packet-switching, in which a message is broken into small, individually addressed pieces, would be far more effective. Soon after, scientists and engineers began mapping out the architecture of a new network called ‘ARPANET’: a network of computers in different parts of the United States that would communicate with one another using packet-switching. After the structure and specifications were decided, a Request for Quotation (RFQ) was released for ARPA’s network hardware. Massachusetts-based hardware company Bolt, Beranek and Newman (BBN) won the contract and built the key components of the network, the Interface Message Processors (IMPs).
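The core idea of packet-switching can be shown in miniature. The sketch below is a deliberately simplified model, not how ARPANET actually framed its packets: a message is cut into numbered pieces that can travel independently, possibly over different routes and out of order, and the receiver uses the sequence numbers to put the message back together.

```python
import random

def packetize(message, size=4):
    """Split a message into (sequence number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Restore the message even if packets arrived out of order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("HELLO ARPANET")
random.shuffle(packets)  # packets may take different routes and arrive in any order
assert reassemble(packets) == "HELLO ARPANET"
```

Because no single end-to-end circuit is reserved, the line is free for other traffic between packets, which is exactly the efficiency gain Kleinrock argued for.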
In 1969, Interface Message Processors (IMPs) were used to send the first message over the internet, or rather, a part of it (Burkeman, 2009). The message was sent from an ARPA computer at UCLA to another at the Stanford Research Institute (SRI). The message was simple: the word ‘login.’ However, only the first two letters went through, and the system crashed shortly after. ‘Lo’ was not the intended message, but it was oddly befitting, as it indeed marked the start of something amazing.
Communication between computers was possible, it just needed a lot more work.
In 1972, Ray Tomlinson of BBN created the first program for sending network mail, or email. He also set the convention of separating a username and a host name with an ‘@’ (D'Orazio, 2016). About a year later, Larry Roberts wrote the first mail-management software, which allowed users to send direct replies to mails and to file and delete messages.
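Tomlinson's convention survives unchanged: everything before the ‘@’ names the user, everything after it names the machine. A minimal sketch, using an invented address for illustration:

```python
# A hypothetical address in Tomlinson's user@host form; the host name is made up.
address = "tomlinson@bbn-tenexa"
user, host = address.split("@", 1)
print(user)  # tomlinson
print(host)  # bbn-tenexa
```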
In the following years, more nodes were added to the ARPANET, including some in Europe. Nearly all the nodes in the network belonged to educational institutions. Messages could be transmitted across all computers in this network, and the next logical step was to expand it into a truly global network. With regard to expansion, however, the network still had some significant limitations. ARPANET had been using the Network Control Protocol (NCP) as the transmission protocol for host-to-host communication. A key feature of the NCP was that it was designed to ensure that every message in the network reached its destination; the failure of even one message to arrive could stall and disrupt all activity on the network. Besides that, the NCP only worked with certain types of network hardware, such as IMPs. The success of packet-switching led to the quick emergence of many different experimental networks outside of ARPANET, and it was clear that the rigidity of the NCP would not be conducive to this rapid, ongoing expansion of computers and networks with varying designs and configurations. There was a need for a more accommodating, universal protocol. So, from 1973, Vinton Cerf and Robert Kahn undertook the task of creating one. What they eventually developed was the Transmission Control Protocol and Internet Protocol (TCP/IP), the universal suite of protocols used on the internet to this day.
The TCP/IP suite of protocols consisted of multiple layers, including a transport layer (TCP) and an internet layer (IP). It was designed with considerable prudence, as it embodied one of the key features underpinning the internet as we know it today: an open network architecture. In essence, this allows heterogeneous computers and devices with disparate operating systems to act as peers in a network, regardless of function. The TCP/IP protocols were designed with the foresight that multiple packet-switching networks with varying designs would emerge, and that there needed to be a way for these networks to communicate with one another.
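The layering can be sketched as a series of envelopes. The model below is a toy: the header fields are drastically simplified stand-ins for the real TCP and IP headers, and the addresses and ports are invented. What it shows is the structural point that each layer wraps the one above it and reads only its own header, which is why any network that can route IP packets can carry any application's data.

```python
def tcp_segment(payload, src_port, dst_port, seq):
    """Transport layer: a greatly simplified TCP-style segment."""
    return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "data": payload}

def ip_packet(segment, src_ip, dst_ip):
    """Internet layer: wraps any segment with the addressing routers need."""
    return {"src_ip": src_ip, "dst_ip": dst_ip, "data": segment}

# Routers along the way inspect only the IP header; only the destination
# host unwraps the transport layer and hands the data to an application.
segment = tcp_segment("hello", src_port=50000, dst_port=80, seq=1)
packet = ip_packet(segment, "10.0.0.1", "192.0.2.7")
assert packet["data"]["data"] == "hello"
```

This separation of concerns is the "open architecture" in practice: the internet layer makes no assumptions about what it is carrying, so wildly different machines and networks can interoperate.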
TCP/IP was the universal language that allowed this communication across many different networks or ‘inter-network’ communication—the essence of what the internet is today.