Many sceptics have predicted the Imminent Death of the Net, variously attributed to government censorship, the "about to be totally accepted by everybody" OSI protocols, the total and irretrievable collapse of the Internet backbone, the exhaustion of the IP address space, the unveiling of private networks Much Better(tm) than the Internet (e.g. MSN), and the obvious fact that ordinary people wouldn't be able to use it without a computer science degree. In spite of all this, the Internet has not only failed to roll over and die, but has become something far greater than its proponents ever expected.

How has it done so? The primary reason is the fundamentally solid technical foundation and engineering instilled in it from its early days. The Internet was not designed, it was engineered, and in fact re-engineered at key times. The critical factor is that the Internet infrastructure can take advantage of new technology without disruption: as new technologies become available, they are used to regenerate older portions.

The Internet now faces the grand challenge of maintaining this record of growth. What technology will allow the Internet to sustain the combined load of all the applications being mooted? What should network planners be installing to allow their organisations to take advantage of the killer applications being designed and deployed now and in the future?
In April 1995 the NSFnet backbone was decommissioned to allow commercialisation of the Internet core to take place.
The following graph shows the traffic figures and extrapolations from that time:
At present we are well off the scale of those extrapolations, an indication of the exponential nature of the growth.
A number of factors are driving this growth:
It is impossible to predict the true requirements of the Internet infrastructure, because the introduction of faster and faster links makes new applications possible. This is a market expanding into a vacuum; it has not yet met any limiting factor except the cost of bandwidth. The challenge facing the organisations responsible for the core of the Internet is knowing what technologies need to be in place to cope with this growth.
A key concept is the emergence of the Intranet, or an Internet within an enterprise. These Intranets use the widely available tools and applications originally designed for the Internet (such as routers and web browsers) to provide easily accessible data to all levels of an organisation.
cisco Systems itself is a prime example: a large amount of data is available via internal web sites, and nearly all the data that needs to be accessed, such as documentation, project information, and database records, is obtainable via web pages. This trend will grow rapidly as more gateways become available to proprietary database systems and other former islands of information. Once organisations start to internally rearrange their data so that business can be performed via the web, it is a simple step to open this up and allow external business to be conducted via the same interface.
This brings about the requirement for a stable, common enterprise network that can interoperate with many different hosts, legacy systems, and desktop systems. The network needs to be not only stable but fast. Most organisations cannot accept new technology that is costly to install, so existing infrastructure must be incorporated seamlessly.
The use of twisted pair Ethernet (10BaseT) has been very widespread, and continues to grow. The deployment of Fast Ethernet (100BaseT) has been made practical by Ethernet switches, which provide traffic isolation; and since 100BaseT uses the same packet framing as 10BaseT, it is simple to integrate the two. The fact that 100BaseT can run over the same twisted pair means that organisations have a protected investment and a smooth upgrade path to faster networks.
This has led to the use of 100BaseT as a backbone interconnect, with 10BaseT to the desktop and 100BaseT to servers and as a backbone within a local area such as a building. The use of virtual LANs within switches allows workgroups to be kept separate, and protocols such as ISL (Inter-Switch Link) across Fast Ethernet can be used to route between virtual LANs.
The next step will be the introduction of 100BaseT to the desktop; the fact that the same wiring is used is a huge advantage. Fast Ethernet has the bandwidth to bring a new range of applications, such as full-motion video, to the desktop. Since each 100BaseT link is a point-to-point link between the end station and the switch, it can operate full duplex without suffering the collisions of a shared hub in a collapsed backbone, allowing full use of the 100 Mbits/sec in both directions. Fast Ethernet will spell doom for more complex and expensive backbone networks such as FDDI.
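As a back-of-envelope illustration (the 24-station workgroup size is an assumption of mine, purely for the sake of the arithmetic), compare the per-station bandwidth of a shared 10BaseT segment with that of switched, full-duplex Fast Ethernet:

```python
# Back-of-envelope comparison: a shared 10BaseT segment vs. switched full-duplex 100BaseT.
# The 24-station workgroup size is an arbitrary assumption, purely for illustration.

stations = 24

shared_10baset = 10_000_000              # bits/sec shared by the whole collision domain
per_station_shared = shared_10baset / stations

switched_100baset = 100_000_000          # bits/sec per switch port, in each direction
per_station_switched = 2 * switched_100baset    # full duplex: 100 Mbit/s each way

print(f"Shared 10BaseT:    ~{per_station_shared / 1e6:.2f} Mbit/s per station (best case)")
print(f"Switched 100BaseT: {per_station_switched / 1e6:.0f} Mbit/s per station (100 each way)")
```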
Once the desktop obtains 100 Mbits/sec, the backbone interconnects will be stressed if they remain at 100 Mbits/sec. In this arena it was expected that ATM would start to make a major impact. For a number of reasons this has not eventuated as expected, and ATM is still a technology with low penetration of the enterprise backbone market. A driving factor was the integration of voice, data and video, but cost and complexity have been major inhibitors. The 1996 Global IT Survey by IDC indicates that only about 5% of users are considering any use of ATM.
The emergence of the next generation of Ethernet, known as Gigabit Ethernet, will likely place a few more nails in the coffin of ATM as a local enterprise backbone. The first Gigabit Ethernet solutions will arrive over the next 12 months, and it is expected that this technology (in conjunction with Fast Ethernet) will become the backbone interconnect of choice over FDDI and ATM.
Within the next twelve to eighteen months, it is expected that OC-48 technology (2.4 Gbits/sec) will be deployed within the core of the Internet. This places a large demand on the switching and routing equipment used within that network. For example, an OC-3 link from Stockholm to Sydney would require about 6 Mbytes of buffering (using the bandwidth × round-trip-time product) and packet switching times of the order of 4 microseconds. OC-48 has 16 times the bandwidth of OC-3, with a corresponding increase in these requirements.
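Those figures can be sanity-checked with a little arithmetic. In the sketch below, the ~300 ms Stockholm-Sydney round-trip time and the ~84 bytes on the wire per minimum packet (frame plus preamble and inter-frame gap) are my assumptions:

```python
# Rough check of the buffering and switching figures quoted above.
# Assumptions (mine, not from the original text): Stockholm-Sydney RTT ~ 300 ms,
# and a minimum packet occupies ~84 bytes on the wire (64-byte frame plus
# preamble and inter-frame gap).

oc3_rate  = 155.52e6      # bits/sec
oc48_rate = 2.4e9         # bits/sec
rtt       = 0.300         # seconds, assumed round-trip time

# Bandwidth-delay product: the data "in flight" that must be buffered.
buffering_bytes = oc3_rate * rtt / 8
print(f"OC-3 buffering:   ~{buffering_bytes / 1e6:.1f} Mbytes")                      # ~5.8 Mbytes

# Time available to switch one minimum-sized packet.
min_packet_bits = 84 * 8
print(f"OC-3 switch time:  ~{min_packet_bits / oc3_rate * 1e6:.1f} microseconds")    # ~4.3 us
print(f"OC-48 switch time: ~{min_packet_bits / oc48_rate * 1e9:.0f} nanoseconds")    # ~280 ns
```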
I don't expect many organisations will have to deal with the issues facing the Internet core providers, or the vendors working with those providers to deliver working solutions. The problems facing ordinary mortals are usually related to the lack of digital capability within the current telephone network. It is interesting that Telstra is seeing major problems with users tying up phone circuits with modem calls; if Telstra had had the forethought to provide a reasonably priced digital network using technology such as Frame Relay or ISDN, these problems would not exist. I suspect there are few sympathisers, just people installing POTS lines and running Multilink PPP over multiple modems to try to get decent bandwidth without paying extraordinary sums of money.
The availability of reasonably priced digital capacity is key to developing the local Internet infrastructure successfully, and telecommunication authorities must be willing to provide these capabilities.
The ones who are getting worried are the backbone providers, who are considering what it means to have their end users connected via a 6 Mbit link instead of a 28.8 Kbit modem.
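A trivial calculation shows the scale of their concern, assuming (optimistically) that both kinds of link are driven flat out:

```python
# How much more peak load does one broadband user represent than one modem user?
# Assumes both links are fully utilised, which is a deliberate simplification.

broadband_link = 6_000_000    # bits/sec
modem_link     = 28_800       # bits/sec

print(f"One 6 Mbit/s user ~= {broadband_link / modem_link:.0f} modem users at peak")   # ~208
```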
It is clear that the Internet will survive for a long time to come, even if it evolves into something quite different from what we are used to.
The first observation is that, in spite of the Chicken Littles, the sky hasn't fallen, at least not in the manner predicted. The concern (and a valid concern) raised several years ago was the rapid exhaustion of the IP address space. An ancillary concern was the explosion of IP routes, and whether the backbone routers could cope with the exponential addition of new routes.
In the time-honoured Internet tradition, the problem was examined from an engineering perspective, and several actions were set in place to alleviate the problems:
Another factor has been the development of Network Address Translation (NAT) applications, where hosts behind a firewall or network gateway have their (presumably not officially allocated) IP address translated into an official address from a small pool available to the gateway. In this way, a large number of hosts can share a relatively small number of IP addresses. The situations where NAT is most useful are:
Both CIDR and NAT have provided breathing space for the development and deployment of the next generation of IP, IPv6. It will take some time for IPv6 to penetrate the Internet mainstream, but it has some compelling technical advantages over the current version of IP that may lead to rapid acceptance.
The deployment of IPv6 will be of little interest to most users of the Internet; depending on need, IPv6 may start to operate on the Internet backbones over the next two years, with the current IP protocol remaining in use at the edges for some time to come.
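To give a feel for one of those compelling advantages, the difference in raw address space alone is enormous; simple arithmetic, nothing more:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4:  {ipv4_addresses:.2e} addresses")                 # ~4.29e+09
print(f"IPv6:  {ipv6_addresses:.2e} addresses")                 # ~3.40e+38
print(f"Ratio: {ipv6_addresses // ipv4_addresses:.2e}")         # ~7.92e+28
```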
Another clear trend is one of increasing specialisation. To handle the performance of the networks being deployed, routers and switches need custom hardware and software quite distinct from that of hosts. This trend will undoubtedly continue as technologies such as Gigabit Ethernet become common, and higher-end switches are expected to fuse routing and switching to provide seamless network operation. Certainly, as the demand for higher and higher speeds continues, the technology demands on interconnecting devices will be major.
Another infrastructure change is the high level of aggregation made possible by channelised high speed connections such as OC-3, allowing many distinct circuits to be carried over a single physical connection. This has been in place for some time with the ISDN Primary Rate Interface, but the use of higher speed channelised connections has so far been limited.
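To put rough numbers on that aggregation (the channelisation figures below are the standard SONET/PDH hierarchy, quoted from memory rather than from any particular product):

```python
# How many individual circuits fit inside one channelised OC-3?
# Standard hierarchy: OC-3 = 3 x DS3, DS3 = 28 x DS1, DS1 = one ISDN PRI (23B+D).

ds1_per_ds3 = 28
ds3_per_oc3 = 3
b_channels_per_pri = 23          # North American PRI; a European E1 PRI carries 30

ds1_per_oc3 = ds1_per_ds3 * ds3_per_oc3
print(f"DS1/PRI circuits per OC-3: {ds1_per_oc3}")                       # 84
print(f"64 kbit/s B channels:      {ds1_per_oc3 * b_channels_per_pri}")  # 1932
```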
The integration of other services such as voice and video has been touted for a long time, without a great deal of realisation. The fundamental problem is one of perspective: the telecommunications industry sees data as just another service to be provided alongside voice and video, while the Internetworking world sees a common data networking platform as something that could itself support services such as voice and video. Until this dichotomy is resolved there will always be a mismatch in protocol layering, for example trying to provide voice/video quality of service using ATM cells while also running protocols over those cells that have their own ideas about quality of service.
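The ATM "cell tax" gives a concrete feel for the cost of this layering mismatch. A small sketch, assuming standard AAL5 framing (an 8-byte trailer, padding to a multiple of 48 bytes, and 53-byte cells carrying 48 bytes of payload each):

```python
import math

def atm_cells_for(packet_bytes: int) -> int:
    """Number of 53-byte ATM cells needed to carry a packet over AAL5."""
    aal5_pdu = packet_bytes + 8            # 8-byte AAL5 trailer
    return math.ceil(aal5_pdu / 48)        # padded up to a whole number of 48-byte payloads

for size in (64, 576, 1500):               # typical IP packet sizes
    cells = atm_cells_for(size)
    wire_bytes = cells * 53
    print(f"{size:5d}-byte packet -> {cells:3d} cells, {wire_bytes:5d} bytes on the wire "
          f"({size / wire_bytes:5.1%} efficient)")
```

For the 64-byte packets of the switching table below, barely 60% of the ATM bandwidth carries useful data.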
The table below gives some indication of the raw switching problem, showing the time available to switch a minimum-sized (64-byte) packet at various media speeds.
Packet Switching Times

| Media | Speed | Switching Time |
|---|---|---|
| 10BaseT | 10 Mbits/sec | 67 microseconds |
| 100BaseT | 100 Mbits/sec | 6.7 microseconds |
| OC-3 | 155 Mbits/sec | 4.3 microseconds |
| Gigabit Ethernet | 1 Gbit/sec | 670 nanoseconds |
| OC-48 | 2.4 Gbits/sec | 280 nanoseconds |
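The figures in the table follow from a single back-of-envelope formula: the time to receive a minimum-sized packet at line rate. The table appears to assume about 84 bytes on the wire (the 64-byte frame plus preamble and inter-frame gap); that reading is mine, as is the sketch below:

```python
# Reproduce the switching-time table: time = bits on the wire / line rate.
# Assumes ~84 bytes per minimum packet (64-byte frame + preamble + inter-frame gap).

wire_bits = 84 * 8

media = {
    "10BaseT":          10e6,
    "100BaseT":         100e6,
    "OC-3":             155.52e6,
    "Gigabit Ethernet": 1e9,
    "OC-48":            2.4e9,
}

for name, rate in media.items():
    t = wire_bits / rate
    print(f"{name:18s} {t * 1e6:8.2f} microseconds")
```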
Given the above times, it is hard to see how current mechanisms will scale to perform the task at hand.
Techniques will be described showing how new routing technology can be used to reduce routing overhead considerably and speed up the switching of packets. Without these techniques, the Internet will not scale to cope with the expected demand.
Routing protocols have developed greatly over the last few years, with the advent of link-state protocols, border protocols, and route aggregation at demarcation routers. Routing protocols have evolved with the Internet, and periodically new generations of routing protocols must be introduced that are capable of scaling a thousand-fold. As the Internet grows, routing protocols will have design goals of network stability and policy management as well as fast convergence. Particular attention will be paid to controlling the dampening of the system so that route flaps or changes do not cause major network reconvergence or disruption.
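Route aggregation is the simplest of these mechanisms to illustrate. A minimal sketch (the prefixes are invented purely for the example) showing four contiguous customer routes collapsing into a single advertisement:

```python
import ipaddress

# Four contiguous customer prefixes behind one demarcation router
# (addresses invented purely for illustration).
routes = [ipaddress.ip_network(p) for p in
          ("10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24")]

# Aggregate them into the smallest covering set of prefixes.
aggregated = list(ipaddress.collapse_addresses(routes))

print(aggregated)   # [IPv4Network('10.1.0.0/22')] -- one route advertised instead of four
```

A backbone router that receives the single /22 needs one forwarding entry instead of four, which is exactly the effect CIDR relies on to keep routing tables manageable.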
Nevertheless, the future growth of the Internet will increasingly highlight the necessity to migrate to new switching techniques; the core of the Internet will change radically as very high speed backbone lines come into play, and the advent of new user access devices such as cable modems or xDSL will open up the Internet to a much wider audience. This in turn will fuel new opportunities for applications. Underlying all this, the Internet infrastructure will require continual engineering to allow it to scale as needed.
One thing is sure: we live in interesting times.