This is an excerpt from A Brief History of the Future: The Origins of the Internet by John Naughton. First published by Weidenfeld & Nicolson in 1999. Subsequent revised editions published by Phoenix. © John Naughton 1999.
Chapter 10: Casting the Net
"Network. Anything reticulated or decussated at equal distances, with
interstices between the intersections."
Samuel Johnson, A Dictionary of the English Language, 1755
The strange thing about the ARPANET was that it worked more or less as
advertised from the word go. BBN delivered IMPs at the rate of
one a month and the network grew apace. The University of
California at Santa Barbara got the third one in November 1969 and Utah
took delivery of the fourth early in December. IMP Number 5 was
delivered to Bolt Beranek and Newman itself early in 1970 and the first
cross-country circuit -- a 50 kilobit per second line from Len
Kleinrock’s machine in Los Angeles to BBN’s in Boston -- was
established. This meant not only that the Net now spanned the
continent, but also that BBN was able to monitor it remotely.
By the Summer of 1970, the graduate students on the Network
Working Group had worked out a provisional version of the Network
Control Program (NCP) -- the protocol which enabled basic
communications between host computers -- and IMPs 6, 7, 8 and 9 had
been installed at (respectively) MIT, RAND, System Development
Corporation and Harvard. Towards the end of the Summer,
AT&T (whose engineers were presumably still baffled by the
strange uses the researchers had discovered for telephone lines)
replaced the UCLA-BBN link with a new one between BBN and RAND. A
second cross-country link connected the University of Utah and MIT. By
the end of 1971 the system consisted of 15 nodes (linking 23
hosts). In August 1972 a third cross-country line was
added. By the end of the year ARPANET had 37 nodes. The
system was beginning to spread its wings -- or, if you were of a
suspicious turn of mind, its tentacles.
Researchers were also beginning to get a realistic idea of what
you could do with it. They could log in to remote machines, for
example, and transfer files. Later in 1970 they
also saw the first stirrings of electronic mail. And, as always,
there were some restless kids who wanted to do some wacky things with
their new toy. In January 1971, for example, two Harvard
students, Bob Metcalfe and Danny Cohen, used a PDP-10 minicomputer to
simulate an aeroplane landing on the flight deck of an aircraft carrier,
with the images generated on a graphics processor located down the
Charles River at MIT. The graphics were processed on the MIT
machine and the results (in this case the view of the carrier’s flight
deck) were then shipped back over the Net to the PDP-10 at Harvard,
which displayed them. It was the kind of stunt which makes
non-technical people wonder what students are smoking, but in fact the
experiment had a serious purpose because it showed that the Net could
move significant amounts of data around (graphics files tend to be
large) at a rate which approximated to what engineers call ‘real
time’. Metcalfe and Cohen wrote an RFC describing their
achievement under the modest header “Historic Moments in Networking”.
The emerging ARPANET was a relatively closed and homogeneous
system. Access to it was confined to a small elite working in
Pentagon-funded computing laboratories. And although it wove
together a number of varied and incompatible mainframe computers (the
hosts), the subnetwork of IMPs which actually ran the network was
composed of identical units controlled, updated and debugged from a
single Network Control Center located in Bolt Beranek and Newman’s
offices in Boston. The subnetwork showed its military colours in
that it was designed to be exceedingly efficient and reliable in its
performance and behaviour. Indeed, one of the reasons the
designers chose the Honeywell 516 as the basis for the IMPs was that it
was a machine capable of being toughened for military use -- which
meant that it could also (in theory) be secured against the curiosity
of graduate students. (This was before the folks at ARPA or BBN
realised how crucial those students would be in getting the damn thing
to work.)
The modern Internet is significantly different from the system
that BBN built. In the first place, it is composed of an
unimaginable variety of machines of all ages, makes and sizes, running
a plethora of operating systems and communications software.
Secondly, it is in fact a network of networks: the components of the
modern Internet are themselves wide-area networks of various
sorts. There is no Network Control Centre probing the nodes,
installing software updates from afar and generally keeping an eye on
things. And yet, in the midst of all this headless, chaotic
variety, there is order. The system works. Packets get from
one end of the world to the other with astonishing speed and
reliability.
If you have any doubts, try this. I have on my laptop a
lovely little program called PING. Its purpose is to send out
test packets to a destination anywhere on the Net in order to test how
reliably they reach their destination and how long they take in
transit. Now let’s PING a node in San Francisco: it’s
www.kpix.com, the site which provides those live pictures of the Bay
Area which first captured my attention. After 22 pings, I call a
halt and the program summarises the results in a table:
www.kpix.com       204.31.82.101
Sent = 22          Received = 20        Packet loss = 9.09%
Min rtt = 441      Max rtt = 650        Avg rtt = 522
The first row indicates that the site address has been
translated into the underlying Internet address -- the set of four
numbers which uniquely identifies the computer to be PINGed. The
second row shows that 20 of the 22 packets (i.e. 90.91 per cent)
reached their destination. The third row reveals that the minimum
time for the round trip was 441 milliseconds (thousandths of a second),
the maximum was 650 and the average worked out at 522. That is to
say, it took, on average, just over half a second for a packet to
travel from my study in Cambridge, UK to KPIX’s machine in San
Francisco and back again.
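For readers who like to see where such a summary comes from, here is a minimal sketch in Python (mine, not anything from the book or from PING itself) showing how the loss percentage and the round-trip figures are derived from the raw timings. The sample values are hypothetical stand-ins for what a real run would collect.

```python
# A minimal sketch of how PING-style summary statistics are derived.
# The rtt_samples values are hypothetical; a real run would record one
# round-trip time (in milliseconds) per echo reply actually received.

sent = 22
rtt_samples = [441, 522, 650, 498, 533, 471, 509, 560, 487, 512,
               530, 479, 541, 502, 495, 551, 468, 523, 538, 514]  # 20 replies

received = len(rtt_samples)
loss_pct = 100.0 * (sent - received) / sent

print(f"Sent = {sent}   Received = {received}   Packet loss = {loss_pct:.2f}%")
print(f"Min rtt = {min(rtt_samples)}   Max rtt = {max(rtt_samples)}   "
      f"Avg rtt = {sum(rtt_samples) // received}")
```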
Looking back at the ARPANET from the vantage point of the
contemporary Net is a bit like comparing a baby chimpanzee with a human
child. The resemblances are striking and the evolutionary link
is obvious. It is said that something like 98 per cent of the
genetic material of both species is indistinguishable. Yet we are
very different from chimps.
So it is with the two networks. The Internet has inherited
many of its dominant characteristics from its pioneering ancestor, but
it is significantly different in one respect: its ability to
diversify. ARPANET could never have expanded the way the Internet
has: its design required too much control, too much standardisation for
it to match the variegated diversity of the online world. For it
to metamorphose into the network which now girdles the globe, something
had to change.
-oOo-
In a way, the ARPANET was a ‘proof of concept system’. It took a
set of ideas many people thought impracticable, and created a working
network out of them. By 1972 it was clear that the project was,
in technical terms, a runaway success. But as many inventors
know, designing a better mousetrap is no guarantee that the world will
beat a path to your door. Even geniuses have to blow their own
trumpets sometimes.
Within BBN, one man -- Robert Kahn -- understood this better
than most. He was born in Brooklyn, New York, in 1938, got a
Bachelor’s degree in electrical engineering from City College of New
York and then went to Princeton where he picked up a Master’s and a
doctorate. In 1964 Kahn went to MIT as an assistant professor,
and two years later took temporary leave of absence (as many MIT
engineering faculty members did) to work at BBN. He never made it
back to MIT, because when the company decided to bid for the ARPANET
contract, Kahn was persuaded to take on the job of overall system
design and was hooked for life.
Kahn was a systems engineer, not a hacker. While others
were concerned with the individual components which made up the
network, he was the guy who knew that systems were more (and often
less) than the sum of their parts – that the whole network would behave
in ways that could not be predicted from a study of its individual
components. His professional instincts told him, for example,
that the routing algorithms – the procedures which governed the way the
IMPs processed packets – would be critical. “It was my
contention”, he recalled, “that we had to worry about congestion and
deadlocks”.
"What do you do when the network just fills up? Things might
come to a grinding halt. I was busy at work designing mechanisms
that would prevent that from happening or, if it did happen, that would
get you out of it. And the prevailing feeling of my colleagues
was it's like molecules in a room; don't worry about the fact that you
won't be able to breathe because the molecules will end up in a
corner. Somehow there will be enough statistical randomness that
packets will just keep flowing. It won't be able to somehow block
itself up."
Determined to prove the optimists wrong, Kahn and a colleague
named Dave Walden went out to California early in 1970 to test the
fledgling Net to destruction. “The very first thing that we did”,
he remembers,
"was run deadlock tests. And the network locked up in twelve
packets. I had devised these tests to prove that the network
could deadlock. There was no way to convince anybody else,
particularly the person writing the software, that the network was
going to deadlock - except by doing it."
From the outset Kahn had been of the view that the network
project should be a large scale experiment, not something cooked up in
the rarefied atmosphere of an individual lab or a number of
geographically-proximate institutes. He felt that limited systems
might not scale up and wanted a continent-wide network, using
long-distance phone lines, from the word go. And his view
prevailed, which is why from early in the project the ARPANET spanned
the continental United States.
In the middle of 1971 Kahn turned his mind to the problem of
communicating this astonishing technical achievement to the movers and
shakers of the US political, military, business and telecommunications
communities. After a meeting at MIT of some of the main
researchers involved on the project, it was decided that the thing to
do was to organise a large-scale, high-profile, live demonstration of
what the network could do. Casting round for a suitable venue, he
hit on the first International Conference on Computer Communication,
scheduled to be held in the Washington Hilton in October 1972, and
negotiated with the organisers an agreement that ARPA could mount a
huge, live exhibition of the network in action.
The goal of the exhibition was to be the most persuasive
demo ever staged -- “to force”, in Kahn’s words, “the utility of the
network to occur to the end users”. It was to be the event that
made the world take notice of packet switching, because up to that
point the technology had been more or less invisible outside of the
elite circle of ARPA-funded labs. “A lot of people were sceptical
in the early days”, said Kahn.
"I mean, breaking messages into packets, reassembling them at the end,
relying on a mysterious set of algorithms, routing algorithms, to
deliver packets. I'm sure there were people who distrusted
airplanes in the early days. 'How are you going to ensure that
they are going to stay up?' Perhaps this was the same kind of
thing."
Kahn and his team put an enormous effort into mounting the
demo. They toured the country bullying, cajoling, tempting
researchers and computer equipment manufacturers into
participating. On the days before the conference opened, hackers
gathered from all over the country to begin the nightmarish task of
assembling and testing all the kit in the designated hall of the hotel.
The atmosphere was slightly hysterical as the clock ticked away and
the computer equipment behaved with the recalcitrance it customarily
displays whenever a live demo approaches. Everyone was “hacking away and hollering
and screaming” recalls Vint Cerf, one of the graduate students
involved. Kahn himself observed later that if someone had dropped
a bomb on the Washington Hilton during the demo it would have wiped out
the whole of the U.S. networking community in a single strike.
Anyone who has ever relied on a computer network for a critical
presentation knows what a high-risk gamble this was. And yet it
paid off: the system worked flawlessly and was seen by thousands of
visitors. The Hilton demo was the watershed event that made
powerful and influential people in the computing, communications and
defense industries suddenly realise that packet switching was not some
toy dreamed up by off-the-wall hackers with no telecoms experience, but
an operational, immensely powerful, tangible technology.
-oOo-
So the ARPANET worked. The question for Kahn (and indeed for the
agency) was: what next? Answer: the world. But how?
The ARPANET model was not infinitely extensible for the simple reason
that extending it would have required everyone to conform to the
requirements of the US Department of Defense. And other people
had other ideas. Indeed, even as the ARPANET was being built,
other packet-switched networks had begun to take shape. The
French, for example, had begun work on their Cyclades network under the
direction of a computer scientist called Louis Pouzin. And Donald
Davies’s team at the British National Physical Laboratory were pressing
ahead with their own packet-switched network.
Within the United States also, people had constructed
alternative systems. In 1969, ARPA had funded a project based at
the University of Hawaii, an institution with seven campuses spread
over four islands. Linking them via land-lines was not a feasible
proposition, so Norman Abramson and his colleagues Frank Kuo and
Richard Binder devised a radio-based system called ALOHA. The
basic idea was to use simple radio transmitters (akin to those used by
taxi-cabs) sharing a common radio frequency. As with ARPANET,
each station transmitted packets whenever it needed to. The
problem was that because the stations all shared the same frequency
sometimes the packets ‘collided’ with one another (when two or more
stations happened to transmit at the same time) with the result that
packets often got garbled. Abramson and his colleagues got round
this by designing a simple protocol: if a transmitting station failed
to receive an acknowledgement of a packet, it assumed that it had been
lost in transmission, waited for a random period and then
re-transmitted the packet.
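The rule is simple enough to sketch in a few lines of Python. What follows is an illustrative simulation of the retransmission idea only, not the original ALOHA code: the 70 per cent delivery rate and the back-off timings are invented for the example.

```python
import random
import time

def channel_delivers() -> bool:
    """Stand-in for the shared radio channel: a packet (or its
    acknowledgement) gets through only if no collision or interference
    occurs. Here that is modelled as an invented 70% chance of success."""
    return random.random() < 0.7

def aloha_send(packet: str, max_backoff: float = 0.5) -> int:
    """Transmit a packet ALOHA-style: send it, wait for an acknowledgement,
    and if none arrives, pause for a random period and try again.
    Returns the number of attempts needed."""
    attempts = 0
    while True:
        attempts += 1
        acknowledged = channel_delivers()           # send and hope for an ack
        if acknowledged:
            return attempts
        time.sleep(random.uniform(0, max_backoff))  # random wait, then retry

if __name__ == "__main__":
    tries = aloha_send("position report from campus 3")
    print(f"packet delivered after {tries} attempt(s)")
```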
ARPA was interested in this idea of radio links between
computers -- for obvious reasons. There was a clear military
application -- an ALOHA-type system based on radio transmitters fitted
in moving vehicles like tanks could have the kind of resilience that
centralised battlefield communications systems lacked. But the
limited range of the radios would still pose a problem, necessitating
relay stations every few miles -- which re-introduced a level of
vulnerability into the system. This led to the idea of using
satellites as the relay stations -- and in the end to the construction
of SATNET, a wide-area network based on satellites. In
time, SATNET linked sites in the US with sites in Britain,
Norway, Germany and Italy. Another attraction of these systems was
that they offered far greater flexibility in the use of shared (not to
mention scarce and expensive) transmission capacity.
As these other packet-switching networks developed, Bob Kahn
(who had now moved from BBN to ARPA) became increasingly preoccupied
with the idea of devising a way in which they and ARPANET could all be
interlinked. This was easier said than done. For one thing,
the networks differed from one another in important respects -- the
size of a packet, for example. More importantly, they differed
greatly in their reliability. In the ARPANET, the destination IMP
(as distinct from the host computer to which it was interfaced) was
responsible for reassembling all the parts of a message when it
arrived. IMPs made sure that all the packets got through by means
of an elaborate system of hop-by-hop acknowledgements. They also
ensured that different messages were kept in order. The basic
protocol of the ARPANET -- the Network Control Program -- was therefore
built on the assumption that the network was reliable.
This did not hold for the non-ARPA networks, where the governing
assumption was, if anything, exactly the opposite. There is
always interference in radio transmissions, for example, so the ALOHA
system had to be constructed on the assumption that the network was
inherently unreliable, that one simply couldn’t count on a packet
getting through. If no acknowledgement was received, a
transmitting host would assume it had got lost and dispatch an
identical packet; and it would keep doing this until it received
acknowledgement of receipt. The non-ARPA systems also differed in
other fundamental respects, from what they regarded as the standard
size of a packet to the rates at which they transmitted
data. Clearly linking or ‘internetting’ such a
motley bunch was going to be difficult.
To help solve this problem, Kahn turned to a young man who had
been with the ARPANET project from the beginning. His name was
Vinton (‘Vint’) Cerf.
The partnership between Kahn and Cerf is one of the epic
pairings of modern technology, but to their contemporaries they must
have seemed an odd couple. Kahn is solid and genial and very
middle-American, the model for everyone's favourite uncle, a man who even
as a young engineer radiated a kind of relaxed authority. Cerf is
his polar opposite -- neat, wiry, bearded, cosmopolitan, coiled like a
spring. He was born in California, the son of an aerospace
executive, and went to school with Steve Crocker, the guy who composed
the first RFC. His contemporaries remember him as a wiry, intense
child whose social skills were honed (not to say necessitated) by his
profound deafness. They also remember his dress style -- even as
a teenager he wore a suit and tie and carried a briefcase. “I
wasn't so interested in differentiating myself from my parents”, he
once told a reporter, “but I wanted to differentiate myself from the
rest of my friends just to sort of stick out”. Coming from a
highly athletic family, he also stood out as a precocious bookworm who
had taught himself computer programming by the end of tenth grade and
calculus at the age of 13. His obsession with computers was such
that Crocker once claimed Cerf masterminded a weekend break-in to the
UCLA computer centre simply so that they could use the machines.
Cerf spent the years 1961-’65 studying mathematics at Stanford,
worked for IBM for a while and then followed Crocker to UCLA where he
wound up as a doctoral student in Len Kleinrock’s lab, the first node
on the ARPANET. In this environment Cerf’s neat, dapper style set
him apart from the prevailing mass of untidy hackers. He was one
of the founder members of the Network Working Group, and his name
figures prominently in the RFC archive from the beginning. He and
Kahn had first worked together in early 1970 when Kahn and Dave Walden
conducted the tests designed to push the ARPANET to the point of
collapse in order to see where its limits lay. “We struck up a
very productive collaboration”, recalled Cerf. "He would ask for
software to do something, I would program it overnight, and we would do
the tests.... There were many times when we would crash the
network trying to stress it, where it exhibited behavior that Bob Kahn
had expected, but that others didn't think could happen. One such
behavior was reassembly lock-up. Unless you were careful about how you
allocated memory, you could have a bunch of partially assembled
messages but no room left to reassemble them, in which case it locked
up. People didn't believe it could happen statistically, but it did."
The other reason Kahn wanted Cerf for the internetworking
project was because he had been one of the students who had devised the
original Network Control Program for the ARPANET. Having got him
on board, Kahn then set up a meeting of the various researchers
involved in the different networks in the US and Europe. Among
those present at the first meeting were Donald Davies and Roger
Scantlebury from the British National Physical Laboratory, Remi Despres
from France, Larry Roberts and Barry Wessler from BBN, Gesualdo LeMoli
from Italy, Kjell Samuelson from the Royal Swedish Institute, Peter
Kirstein from University College, London, and Louis Pouzin from the
French Cyclades project.
“There were a lot of other people”, Cerf recalls, "at least
thirty, all of whom had come to this conference because of a serious
academic or business interest in networking. At the conference we
formed the International Network Working Group or INWG. Stephen
Crocker, …, didn’t think he had time to organize the INWG, so he
proposed that I do it."
Having become Chairman of the new group, Cerf took up an
Assistant Professorship in Computer Science at Stanford and he and Kahn
embarked on a quest for a method of creating seamless connections
between different networks. The two men batted the technical
issues back and forth between them for some months, and then in the
Spring of 1973, sitting in the lobby of a San Francisco hotel during a
break in a conference he was attending, Cerf had a truly great
idea. Instead of trying to reconfigure the networks to conform to
some overall specification, why not leave them as they were and simply
use computers to act as ‘gateways’ between different systems? To
each network, the gateway would look like one of its standard
nodes. But in fact what the gateway would be doing was simply
taking packets from one network and handing them on to the other.
In digital terms, this was an idea as profound in its
implications as the discovery of the structure of the DNA molecule in
1953, and for much the same reasons. James Watson and Francis Crick
uncovered a structure which explained how genetic material reproduced
itself; Cerf’s gateway concept provided a means by which an ‘internet’
could grow indefinitely because networks of almost any kind could be
added willy-nilly to it. All that was required to connect a new
network to the ‘network of networks’ was a computer which could
interface between the newcomer and one network which was already
connected.
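In outline, such a gateway does very little, as the following toy sketch suggests. Everything in it (the Network class, the packet format, the size limits) is invented for illustration; it is meant only to show a machine that joins two networks and hands packets from one to the other.

```python
# A toy illustration of the gateway idea: a machine attached to two
# different networks which takes packets arriving from one and re-emits
# them on the other. The Network class, packet format and size limits are
# all invented for the example.

class Network:
    def __init__(self, name: str, max_packet_bits: int):
        self.name = name
        self.max_packet_bits = max_packet_bits   # networks disagree on this

    def deliver(self, packet: dict) -> None:
        print(f"[{self.name}] delivered {packet['payload']!r} to {packet['dest']}")

class Gateway:
    """To each network it joins, the gateway looks like an ordinary node;
    all it actually does is hand packets from one network to the other,
    re-wrapped to suit the receiving network's conventions."""

    def __init__(self, net_a: Network, net_b: Network):
        self.nets = {net_a.name: net_a, net_b.name: net_b}

    def forward(self, packet: dict, to_net: str) -> None:
        destination_net = self.nets[to_net]
        rewrapped = dict(packet, max_bits=destination_net.max_packet_bits)
        destination_net.deliver(rewrapped)

arpanet = Network("ARPANET", max_packet_bits=1000)      # invented figures
alohanet = Network("ALOHAnet", max_packet_bits=600)
gateway = Gateway(arpanet, alohanet)
gateway.forward({"dest": "host-in-hawaii", "payload": "hello from UCLA"},
                to_net="ALOHAnet")
```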
In some ways, the idea of using gateways was analogous to Wesley
Clark’s 1967 brainwave of using IMPs as intermediaries between host
computers and the network. But thereafter the analogy breaks down
because ARPANET hosts were absolved of responsibility for ensuring the
safe arrival of their messages. All they had to do was to get the
packets to the nearest IMP; from then on the sub-network of IMPs
assumed responsibility for getting the packets through to their
destinations. The Network Control Protocol on which the network
ran was based on this model. But it was clear to Cerf and Kahn
that this would not work for an ‘internet’. The gateways could
not be expected to take responsibility for end-to-end transmission in a
variegated system. That job had to be devolved to the
hosts.
And that required a new protocol.
-oOo-
In a remarkable burst of creative collaboration, Cerf and Kahn laid the
foundations for the ‘network of networks’ during the Summer and Autumn
of 1973. In traditional ARPANET fashion, Cerf used his graduate
students at Stanford as sounding-boards and research assistants, and
frequently flew to Washington where he and Kahn burned much midnight
oil on the design. In September they took their ideas to a
meeting of the INWG at Sussex University in Brighton, UK and refined
them in the light of discussions with researchers from Donald Davies’s
and Louis Pouzin’s labs. Then they returned to Washington and
hammered out a draft of the scientific paper which was to make them
household names in the computing business. It was a joint
production, written, Cerf recalled, with “one of us typing and the
other one breathing down his neck, composing as we’d go along, almost
like two hands on a pen”. By December, the paper was finished and
they tossed a coin to see who would be the lead author. Cerf won
-- which is why he has ever since been popularly known as “the father
of the Internet”.
“A Protocol for Packet Network Intercommunication” by Vinton G.
Cerf and Robert E. Kahn was published in a prominent engineering
journal in May 1974. It put forward two central ideas.
One was the notion of a gateway between networks which
would understand the end-to-end protocol used by the hosts that were
communicating across the multiple networks. The other was that
packets would be encapsulated by the transmitting host in electronic
envelopes (christened ‘datagrams’) and sent to the gateway as
end-to-end packets called ‘transmission-control-protocol’ or TCP
messages. In other words, whereas the ARPANET dealt only in
packets, an internet would deal with packets enclosed in virtual
envelopes.
The gateway, in the Cerf-Kahn scheme, would read only the
envelopes: the contents would be read only by the receiving host.
If a sending host did not receive confirmation of receipt of a message,
it would retransmit it -- and keep doing so until the message got
through. The gateways -- unlike the IMPs of the ARPANET -- would
not engage in retransmission. “We focused on end-to-end
reliability”, Cerf said. The motto was “don’t rely on anything
inside those nets. The only thing that we ask the net to do is to
take this chunk of bits and get it across the network. That’s all
we ask. Just take this datagram and do your best to deliver it”.
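That philosophy (hosts take responsibility, the network merely does its best) can be caricatured in a few lines of Python. The unreliable_net function below is an invented stand-in for all the networks and gateways along the path, and the sketch shows only the end-to-end retransmission idea, not any real protocol.

```python
import random

def unreliable_net(datagram: dict) -> bool:
    """Stand-in for every network and gateway along the path: they simply
    do their best to deliver the datagram, and sometimes fail. Returns
    True if an acknowledgement makes it back to the sender (an invented
    80% of the time)."""
    return random.random() < 0.8

def send_reliably(payload: bytes, dest: str, max_tries: int = 10) -> int:
    """End-to-end reliability lives in the sending host, not the network:
    wrap the data in an envelope, send it, and keep retransmitting until
    an acknowledgement arrives."""
    datagram = {"dest": dest, "seq": 1, "payload": payload}   # the 'envelope'
    for attempt in range(1, max_tries + 1):
        if unreliable_net(datagram):
            return attempt          # acknowledged: we are done
        # no acknowledgement: assume the datagram was lost and send it again
    raise TimeoutError("destination unreachable after repeated attempts")

if __name__ == "__main__":
    tries = send_reliably(b"chunk of bits", dest="10.2.0.52")   # invented address
    print(f"delivered after {tries} attempt(s)")
```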
The TCP idea was the electronic equivalent of the
containerisation revolution which transformed the transport of
international freight. The basic idea in the freight case was
agreement on a standard size of container which would fit ships’ holds,
articulated trucks and rail wagons. Almost anything could be
shipped inside a container, and special cranes and handling equipment
were created for transferring containers from one transport mode to
another. In this analogy, the transport modes (sea, road, rail)
correspond to different computer networks; the containers correspond to
the TCP envelopes; and the dockside and trackside cranes correspond to
the Cerf-Kahn gateways. And just as the crane doesn’t care what’s
inside a container, the computer gateway is unconcerned about the
contents of the envelope. Its responsibility is to transfer it
safely onto the next leg of its journey through Cyberspace.
-oOo-
In July 1975, the ARPANET was transferred by DARPA to the Pentagon’s
Defense Communications Agency as a going concern. Since DARPA's
prime mission was to foster advanced research, not to run a
network on a day-to-day basis, it had been trying to divest itself of
the Net for some time. Having done so, it could concentrate on
the next major research task in the area -- which it defined as the
‘internetting’ project.
The first detailed specification of TCP had been
published as a Request for Comments (RFC 675) in December 1974. Trial
implementations of it were begun at three sites -- Stanford, BBN and
University College, London -- so the first efforts at developing the
Internet protocols were international from the very beginning.
The earliest demonstrations of TCP in action involved the
linking of the ARPANET to packet radio and satellite networks.
The first live demo took place in July 1977. A researcher drove a
van on the San Francisco Bayshore Freeway with a packet radio system
running on an LSI-11 computer. The packets were routed over the
ARPANET to a satellite station, flashed over the Atlantic to Norway and
thence via land-line to University College, London. From London
they travelled through SATNET across the Atlantic and back into the
ARPANET, which then routed them to a computer at the University of
Southern California. “What we were simulating”, recalls Cerf,
"was someone in a mobile battlefield environment going across a
continental network, then across an intercontinental satellite network,
and then back into a wireline network to a major computing resource in
national headquarters. Since the Defense Department was paying for
this, we were looking for demonstrations that would translate to
militarily interesting scenarios. So the packets were traveling 94,000
miles round trip, as opposed to what would have been an 800-mile round
trip directly on the ARPANET. We didn't lose a bit!"
-oOo-
The Cerf-Kahn TCP proposal was a great conceptual breakthrough but in
itself it was not enough to enable reliable communications between
wildly different networks. In fact it took six years of intensive
discussion and experimentation to develop the TCP concept into the
suite of inter-related protocols which now governs the Internet.
These discussions are chronicled in the archive of Internet
Experiment Notes – the Internet equivalent of ARPANET’s ‘Request for
Comment’ papers. The record suggests that the evolution of the
TCP protocol into its present form was driven by two factors – the
intrinsic limitations of the original Cerf-Kahn concept, and the
practical experience of researchers at the Xerox Palo Alto Research
Center (PARC), the lab which Bob Taylor had set up after he left ARPA
and which invented much of the computing technology we use today – from
graphical user interfaces like those of Microsoft Windows and the Apple
Macintosh, to Ethernet local area networking and laser printing.
The PARC people were deep into networking for the simple reason
that they couldn’t avoid it. Having decided years earlier that
computers should have graphic displays (rather than just displaying
characters on a screen) they had faced the problem of how the contents
of a screen could be printed. This they solved by inventing the
laser printer, a wonderful machine which could translate a pattern of
dots on a screen into an equivalent pattern on paper. But this in
turn raised a new problem – how to transmit the screen pattern to the
printer. At a resolution of 600 dots per inch, for example, it
takes something like 33 million bits to describe a single A4
page! The PARC guys were then placed in the absurd situation of
having a computer which could refresh a screen display in one second, a
printer which could print the page in two seconds, and a cable between
the two which took nearly 15 minutes to transfer the screen data for
that page from one to the other.
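The arithmetic is easy to check. The sketch below (mine, not the book's) assumes one bit per dot and standard A4 dimensions, which gives a figure in the mid-thirty millions, of the same order as the 33 million bits quoted; the 'implied speed' is simply what a fifteen-minute transfer of such a page works out at.

```python
# Back-of-the-envelope check of the PARC numbers. Assumptions: one bit per
# dot, A4 = 8.27 x 11.69 inches, and the 15-minute transfer quoted above.

dpi = 600
width_in, height_in = 8.27, 11.69          # A4 paper in inches
bits_per_page = round(dpi * width_in) * round(dpi * height_in)
print(f"bits per A4 page at {dpi} dpi: {bits_per_page / 1e6:.1f} million")

old_transfer_seconds = 15 * 60             # the 'nearly 15 minutes' above
print(f"implied speed of the old cable: "
      f"{bits_per_page / old_transfer_seconds / 1000:.0f} kilobits per second")
```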
The Ethernet local area networking system was PARC’s solution to
the transmission problem. It was invented in 1973 by a group led
by Bob Metcalfe – the Harvard graduate student who used to test the
ARPANET by shipping computer-game graphics across it – and inspired by
the ALOHA packet radio system. Like the Hawaiian system, Ethernet
used packets to transfer data from one machine to another, and it
adapted the same approach to the problem of packet collision: each
device on the network listened until the system was quiet, and then
dispatched a packet. If the network was busy, the device waited
for a random number of milliseconds before trying again. Using
these principles, Metcalfe & Co designed a system that could ship
data down a coaxial cable at a rate of 2.67 million bits per second –
which meant that the time to transmit an A4 page from computer to
printer came down from 15 minutes to about 12 seconds and local area
networking was born.
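The listen-then-back-off rule, and the transfer-time figure, can both be sketched in a few lines. The channel_busy function and its 30 per cent figure are invented for the illustration; only the 2.67 million bits per second and the roughly 33-million-bit page come from the text above.

```python
import random
import time

def channel_busy() -> bool:
    """Stand-in for sensing the shared coaxial cable; an invented 30%
    chance of finding another station already transmitting."""
    return random.random() < 0.3

def ethernet_send(frame: bytes) -> None:
    """The rule of thumb described above: listen until the cable is quiet;
    if it is busy, wait a random number of milliseconds and try again."""
    while channel_busy():
        time.sleep(random.randint(1, 10) / 1000)   # random back-off in ms
    # cable is quiet: transmit (represented here by a print statement)
    print(f"transmitted {len(frame)} bytes")

# Sanity check of the transfer-time figure quoted above:
bits_per_page = 33_000_000       # the ~33 million bits of an A4 page
ethernet_bps = 2_670_000         # 2.67 million bits per second
print(f"page transfer time: {bits_per_page / ethernet_bps:.1f} seconds")

ethernet_send(b"\x00" * 1500)
```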
With this kind of performance, it was not surprising that PARC
rapidly became the most networked lab in the U.S. Having a fast
networking technology meant that one could rethink all kinds of basic
computing assumptions. You could, for example, think about
distributed processing – where your computer subcontracted some of its
calculations, say, to a more powerful machine somewhere else on the
network. And in their quest to explore the proposition that “the
network is the computer”, PARC researchers attached all kinds of
devices to their Ethernets – from fast printers and ‘intelligent’
peripherals to slow plotters and dumb printers.
More significantly, as the lab’s local area networks
proliferated, the researchers had to build gateways between them to
ensure that networked resources would be available to everyone.
And this in turn meant that they had to address the question of
protocols. Their conclusion, reached sometime in 1977, was that a
single, one-type-fits-all protocol would not work for truly
heterogeneous internetworking. Their solution was something
called the PARC Universal Packet (forever afterwards known as Pup)
which sprang from their need for a rich set of layered protocols,
allowing different levels of service for different applications.
Thus, you could have simple but unreliable datagrams (very useful in
some situations), but could also have a higher level of functionality
which provided complete error control (but perhaps lower performance).
As it happens, some people on the INWG were moving towards the
same conclusion – that a monolithic TCP protocol which attempted to do
everything required for internetworking was an unattainable
dream. In July 1977, Cerf invited the PARC team, led by John
Shoch, to participate in the Group’s discussions. They came with
the authority of people who had not only been thinking about
internetworking but actually doing it for real. In the end, TCP
went through four separate iterations, culminating in a decision
to split it into two new protocols: a new Transmission Control Protocol
(TCP) and an Internet Protocol (IP).
The new TCP handled the breaking up of messages into packets,
inserting them in envelopes (to form what were called ‘datagrams’),
reassembling messages in the correct order at the receiving end,
detecting errors and re-transmitting anything that got lost.
IP was the protocol which described how to locate a specific
computer out of millions of interconnected computers, and defined
standards for transmitting messages from one computer to another. IP
handled the naming, addressing and routing of packets, and shifted the
responsibility for error-free transmission from the communication links
(gateways) to the host computers.
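The division of labour can be pictured as two layers of wrapping, as in the simplified sketch below. The field names, port numbers and addresses are illustrative inventions, not the real TCP or IP header layouts.

```python
# A much-simplified picture of the TCP/IP split described above. The field
# names, ports and addresses are illustrative only.

def tcp_segment(payload: bytes, seq: int, src_port: int, dst_port: int) -> dict:
    """TCP's job: break data into numbered pieces so the receiving host can
    reassemble them in order, spot gaps and retransmit what is missing."""
    return {"src_port": src_port, "dst_port": dst_port,
            "seq": seq, "payload": payload}

def ip_datagram(segment: dict, src_addr: str, dst_addr: str) -> dict:
    """IP's job: name and address the two hosts so routers can work out
    where to send the packet; it promises nothing about reliability."""
    return {"src": src_addr, "dst": dst_addr, "data": segment}

message = b"packets enclosed in virtual envelopes"
chunks = [message[i:i + 16] for i in range(0, len(message), 16)]
for seq, chunk in enumerate(chunks, start=1):
    packet = ip_datagram(tcp_segment(chunk, seq, 1024, 80),
                         src_addr="128.16.9.3", dst_addr="10.3.0.62")
    print(packet)
```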
The evolving suite (which eventually came to encompass upwards
of a hundred detailed protocols) came to be known by the generic term
‘TCP/IP’. Other parts of the terminology evolved too: the term
‘gateway’, for example, eventually came to be reserved for computers
which provided bridges between different electronic mail systems, while
the machines Cerf-Kahn called gateways came to be called
‘routers’. The model for the Internet which they conceived
therefore became that of an extensible set of networks linked by
routers.
Military interest kept the internetting research going through
the late 1970s and into the early 1980s. By that time there were
so many military sites on the network, and they were using it so
intensively for day-to-day business, that the Pentagon began to worry
about the security aspects of a network in which military and
scientific traffic travelled under the same protocols. Pressure
built up to split the ARPANET into two networks – one (MILNET)
exclusively for military use, the other for the original civilian
crowd. But because users of both networks would still want to
communicate with one another, there would need to be a gateway between
the two – which meant that there suddenly was an urgent practical need
to implement the new internetworking protocols.
In 1982 it was decided that all nodes connected to the ARPANET
would switch from the old NCP to TCP/IP. Since some sites were
reluctant to accept the disruption this would cause, a certain
amount of pressure had to be applied. In the middle of 1982, NCP
was ‘turned off’ for a day -- which meant that only sites which had
converted to the new protocol could communicate. “This was used”,
Cerf said, “to convince people that we were serious”. Some sites
remained unconvinced so in the middle of the Autumn NCP was disabled
for two days. After that, the ARPANET community seems to have
been persuaded that the inevitable really was inevitable and on January
1, 1983, NCP was consigned to the dustbin of history. The future
belonged to TCP/IP.
Cerf recalls the years 1983-’85 as “a consolidation
period”. The great breakthrough came in 1985 when -- partly as a
result of DARPA pressure -- TCP/IP was built into the version of the
Unix operating system developed at the University of California at
Berkeley. It was eventually incorporated into the version of Unix
adopted by workstation manufacturers like Sun – which meant that TCP/IP
had finally made it to the heart of the operating system which drove
most of the computers on which the Internet would eventually run.
The Net’s digital DNA had finally been slotted into place.