In This Issue: December 2000 Volume 3, Number 4
In This Issue

From the Editor
The Trouble with NAT
The Social Life of Routers
New Frontiers for Research Networks
Book Review
Call for Papers
Fragments

Numerous technologies have been developed to protect or isolate corporate networks from the Internet at large. These solutions incorporate security either end-to-end (IP Security, or IPSec) or at the Internet/intranet border (firewalls). A third class of systems allows a range of IP addresses to be used internally in a corporate network while conserving public addresses through the use of a single public address. This latter class of device is called a Network Address Translator (NAT), and while many Internet engineers consider NATs to be "evil," they are nonetheless very popular. Combining IPSec, NATs, and firewalls can be quite challenging, however. In our first article, Lisa Phifer explains the problem and offers some solutions.

Successful network design is the result of many factors. In addition to the basic building blocks of routers, switches, and circuits, network planners must carefully consider how these elements are interconnected to form an overall system with as few single points of failure as possible. In our second article, Valdis Krebs looks at how lessons learned from social network analysis can be applied to the design of computer networks.
The current Internet grew out of several government-funded research ef-
forts that began in the late 1960s. Today, basic technology development
as well as research into new uses of computer networks continues in
many research “testbeds” all over the world. Bob Aiken describes the
past, present and future state of network research and research
networks.
The online subscription system for this journal will be up and running in
January at www.cisco.com/ipj. In addition to offering a subscription
form, the system will allow you to select delivery options, update your
mailing and e-mail address, and much more. Please visit our Web site
and give it a try. If you encounter any difficulties, please send your com-
ments to ipj@cisco.com.
Those who are implementing virtual private networks often ask whether it is possible to safely combine IP Security (IPSec) and Network Address Translation (NAT). Unfortunately, this is not a question with a simple "yes" or "no" answer. IPSec and NAT can be employed together in some configurations, but not in others. This article explores the issues and limitations associated with combining NAT and "NAT-sensitive" protocols like IPSec. It examines configurations
that do not work, and explains why. It illustrates methods for using
NAT and IPSec together, and discusses an emerging protocol that may
someday prove more IPSec friendly.
This article builds upon “IP Security and NAT: Oil and Water?”[1] and
“Realm-Specific IP for VPNs and Beyond”[2], works previously pub-
lished by ISP-Planet.
A: 10.0.0.0 … 10.255.255.255
B: 172.16.0.0 … 172.31.255.255
C: 192.168.0.0 … 192.168.255.255
These addresses were allocated for use by private networks that either
do not require external access or require limited access to outside ser-
vices. Enterprises can freely use these addresses to avoid obtaining
registered public addresses. But because the same private addresses can be used by many organizations, each within its own realm, they are not routable over a common infrastructure. When communication between a privately addressed host and a public network (like the Internet) is needed, address translation is required. This is where NAT comes in.
NAT routers (or NATificators) sit on the border between private and
public networks, converting private addresses in each IP packet into le-
gally registered public ones. They also provide transparent packet
forwarding between addressing realms. The packet sender and receiver
(should) remain unaware that NAT is taking place. Today, NAT is com-
monly supported by WAN access routers and firewalls—devices situated
at the network edge.
Figure 3: NAPT (a NAPT device on the border between the private network, 192.168.0.1, and the public Internet, 206.245.160.1)
In some cases, static NAT, dynamic NAT, NAPT, and even bidirec-
tional NAT or NAPT may be used together. For example, an enterprise
may locate public Web servers outside of the firewall, on a DMZ, while
placing a mail server and clients on the private inside network, behind a
NAT-ing firewall. Furthermore, suppose there are applications within
the private network that periodically connect to the Internet for long pe-
riods of time. In this case:
• Web servers can be reached from the Internet without NAT, because
they live in public address space.
• Simple Mail Transfer Protocol (SMTP) sent to the private mail server
from the Internet requires incoming translation. Because this server
must be continuously accessible through a public address associated
with its Domain Name System (DNS) entry, the mail server requires
static mapping (either a limited-purpose virtual server table or static
NAT).
• For most clients, public address sharing is usually practical through
dynamically acquired addresses (either dynamic NAT with a cor-
rectly sized address pool, or NAPT).
• Applications that hold onto dynamically acquired addresses for long
periods could exhaust a dynamic NAT address pool and block ac-
cess by other clients. To prevent this, long-running applications may
use NAPT because it enables higher concurrency (thousands of port
mappings per IP address).
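The exhaustion risk in the last bullet can be sketched with a toy dynamic NAT pool. The pool size and addresses below are illustrative, and a real implementation would also age out idle leases:

```python
# Toy dynamic NAT pool: long-lived sessions hold addresses until
# released, so a small pool can block further clients entirely.
pool = ["206.245.160.10", "206.245.160.11"]   # illustrative 2-address pool
leases = {}                                    # inside IP -> public IP

def acquire(inside_ip):
    """Lease a public address for an inside host, if one is free."""
    if inside_ip in leases:
        return leases[inside_ip]
    if not pool:
        raise RuntimeError("dynamic NAT address pool exhausted")
    leases[inside_ip] = pool.pop(0)
    return leases[inside_ip]

acquire("192.168.0.10")      # first long-running client
acquire("192.168.0.11")      # second long-running client
# A third client now fails, which is why long-running applications
# are better served by NAPT's per-port sharing.
```

NAPT avoids this failure mode by multiplexing thousands of port mappings onto a single shared address.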
NAT-Sensitive Protocols
Our need to conserve IPv4 addresses has prompted many to overlook
the inherent limitations of NAT, recognized in RFC 1631 but deemed
acceptable for a short-term solution.
[Figure: IPSec AH through NAT. The IPSec sender (192.168.0.2) sits behind a NAT device between the private network (192.168.0.1) and the public network (206.245.160.1); the IPSec receiver is at 207.29.194.84. The sender computes a MAC, f(x) = y, over the packet; after translation the receiver computes f(x) ≠ y, and the packet is discarded due to authentication failure.]

[Figure: IPSec ESP in transport mode through NAT, same topology. The MAC now verifies, f(x) = y, but the TCP checksum no longer matches, f(p) ≠ a, and the packet is discarded due to TCP checksum failure.]
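The authentication failure in the first figure can be sketched with a keyed hash. This is a simplification, not real AH processing (a real Integrity Check Value covers the whole IP header with mutable fields zeroed), and the key is illustrative:

```python
import hmac, hashlib

KEY = b"illustrative-shared-secret"

def packet_mac(src_ip, payload):
    # Simplified AH: the keyed hash covers the source address, so any
    # translation of that address invalidates the MAC.
    return hmac.new(KEY, src_ip.encode() + payload, hashlib.sha256).digest()

payload = b"application data"
sent = packet_mac("192.168.0.2", payload)       # computed by the sender

# NAT rewrites the source address to its public address in transit.
recomputed = packet_mac("206.245.160.1", payload)

# The receiver's verification fails: f(x) != y, packet discarded.
print(hmac.compare_digest(sent, recomputed))    # False
```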
If we stick to ESP in tunnel mode or turn off checksums, there’s still an-
other obstacle: the Internet Key Exchange (IKE)[7]. IPSec-based Virtual
Private Networks (VPNs) use IKE to automate security association
setup and authenticate endpoints. The most basic and common method
of authentication in use today is preshared key. Unfortunately, this
method depends upon the source IP address of the packet. If NAT is in-
serted between endpoints, the outer source IP address will be translated
into the address of the NAT router, and no longer identify the originat-
ing security gateway. To avoid this problem, it is possible to use another
IKE “main mode” and “quick mode” identifier (for example, user ID or
fully qualified domain name).
A further problem may occur after a Security Association (SA) has been
up for awhile. When the SA expires, one security gateway will send a re-
key request to the other. If the SA was initiated from the well-known
IKE port UDP/500, that port is used as the destination for the rekey re-
quest. If more than one security gateway lies behind a NAPT router,
how can the incoming rekey be directed to the right private IP address?
Rekeys can be made to work by “floating” the IKE port so that each
gateway is addressable through a unique port number, allowing incom-
ing requests to be demultiplexed by the NAPT router.
Figure 7: Combining IPSec and NAT (IPSec SG1 at 207.28.194.84 sends an IKE rekey for SG3, Src: 207.28.194.84:500, Dst: 206.245.160.1:61002, to a NAPT router on the border between the private network, 192.168.0.1, and the public network, 206.245.160.1; IPSec SG3 is at 192.168.0.3)

NAPT Table
Inside IP : Port      Outside Port
192.168.0.2 : 500     61001
192.168.0.3 : 500     61002
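The demultiplexing shown in Figure 7 amounts to a reverse lookup on the floated outside port. A minimal sketch:

```python
# Reverse view of Figure 7's NAPT table: the floated outside port
# uniquely identifies each security gateway on the inside.
napt_table = {
    61001: ("192.168.0.2", 500),
    61002: ("192.168.0.3", 500),
}

def demux_rekey(outside_port):
    """Direct an incoming IKE rekey to the right private gateway."""
    return napt_table[outside_port]

print(demux_rekey(61002))   # ('192.168.0.3', 500), i.e. SG3
```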
What Is RSIP?
RSIP[16] leases public IP addresses and ports to RSIP hosts located in pri-
vate addressing realms. Unlike NAT, RSIP does not operate in stealth
mode and does not translate addresses on the fly. Instead, RSIP allows
hosts to directly participate concurrently in several addressing realms.
Although RSIP does require host awareness, it avoids violating the end-
to-end nature of the Internet. With RSIP, IP payload flows from source
to destination without modifications that cripple IPSec AH and many
other NAT-sensitive protocols.
RSIP gateways are multihomed devices that straddle two or more ad-
dressing realms, just as NAT-capable firewalls and routers do today.
When an RSIP-savvy host wants to communicate beyond its own pri-
vate network, it registers with an RSIP gateway. The RSIP gateway
allocates a unique public IP address (or a shared public IP address and a
unique set of TCP/UDP ports) and binds the private address of the RSIP
host to this public address. The RSIP host uses this public source ad-
dress to send packets to public destinations until its lease expires or is
renewed.
But the RSIP host cannot send a publicly addressed packet as-is; it must
first get the packet to the RSIP gateway. To do this, the host wraps the
original packet inside a privately addressed outer packet. This “encapsu-
lation” can be accomplished using any standard tunneling protocol: IP-
in-IP, the Generic Routing Encapsulation (GRE), or the Layer 2 Tunnel-
ing Protocol (L2TP). Upon receipt, the RSIP gateway strips off the outer
packet and forwards the original packet across the public network, to-
ward its final destination.
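The lease-then-tunnel flow described above can be sketched end to end. The addresses are illustrative, and the "encapsulation" here is a bare nesting of dictionaries standing in for IP-in-IP, GRE, or L2TP:

```python
# Illustrative addresses: RSIP host 10.0.0.5 with leased public
# address 206.245.160.7; gateway inside address 10.0.0.1;
# public destination 207.29.194.84.

def rsip_host_send(payload):
    # 1. The host builds the real packet using its leased PUBLIC source.
    inner = {"src": "206.245.160.7", "dst": "207.29.194.84", "data": payload}
    # 2. It tunnels that packet to the gateway inside a privately
    #    addressed outer packet.
    return {"src": "10.0.0.5", "dst": "10.0.0.1", "payload": inner}

def rsip_gateway_forward(outer):
    # 3. The gateway strips the outer packet and forwards the inner
    #    packet unmodified, so end-to-end checks (e.g., AH) still pass.
    return outer["payload"]

pkt = rsip_gateway_forward(rsip_host_send(b"hello"))
print(pkt["src"], "->", pkt["dst"])   # 206.245.160.7 -> 207.29.194.84
```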
[Figure: RSIP Gateway]
For simplicity, we talk about RSIP linking one private network to the public Internet, but RSIP can also be used to relay traffic between several privately addressed networks. An RSIP host can lease several different addresses as needed to reach different destination networks.
We’ve also focused on outgoing traffic, but an RSIP host can ask the
RSIP gateway to “listen” and relay incoming packets addressed to a
public IP and port.
A similar problem occurs during association setup with the IKE. IKE
packets usually carry the well-known source port UDP/500. Using dif-
ferent source ports is the preferred solution, but if several RSIP hosts use
the same RSIP gateway to relay IKE from port UDP/500, another dis-
criminator is needed. Again, there is a convenient answer: every IKE
packet carries the initiator cookie supplied in the first packet of an IKE
session. The RSIP gateway can route IKE responses to the correct RSIP
host using the tuple (initiator cookie, destination port [IKE], destination
IP address). But rekeys may still be an issue.
Conclusion
Although NAT can be combined with IPSec and other NAT-sensitive
protocols in certain scenarios, NAT tampers with end-to-end message
integrity. RSIP—or whatever RSIP evolves into—may someday prove to
be a better address-sharing solution for protocols that are adversely im-
pacted by NAT. If RSIP fails to mature, another solution may be
developed to broaden use of NAT with IPSec. Alternatives now under
discussion within the IETF include UDP encapsulation and changes to
IKE itself[14][15].
We often forget that computer networks are put in place to support human networks—person-to-person exchanges of information, knowledge, ideas, opinions, insights, and advice. This article looks at a technology that was developed to map and
measure human networks—social network analysis—and applies some
of its principles and algorithms to designing computer networks. And as
we see more peer-to-peer (P2P) models of computer-based networks, the
P2P metrics in human network analysis become even more applicable.
Activity
Figure 1 shows a simple social network. A link between a pair of nodes
depicts a bidirectional information flow or knowledge exchange be-
tween two individuals. Social network researchers measure network
activity for a node by using the concept of degrees—the number of di-
rect connections a node has.
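The degree count described above can be computed directly from an edge list. The ties below are illustrative, chosen to match the article's description rather than copied exactly from Figure 1:

```python
from collections import defaultdict

# Illustrative ties: Diane's dense clique, with Heather bridging
# Fernando and Garth to Ike and Jane.
edges = [
    ("Diane", "Andre"), ("Diane", "Beverly"), ("Diane", "Carol"),
    ("Diane", "Ed"), ("Diane", "Fernando"), ("Diane", "Garth"),
    ("Fernando", "Heather"), ("Garth", "Heather"),
    ("Heather", "Ike"), ("Ike", "Jane"),
]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1      # an undirected tie counts toward both endpoints
    degree[b] += 1

print(max(degree, key=degree.get))   # Diane, the most active node
```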
In this human network, Diane has the most direct connections in the
network, making hers the most active node in the network with the
highest degree count. Common wisdom in personal networks is “the
more connections, the better.” This is not always so. What really mat-
ters is where those connections lead to—and how they connect the
otherwise unconnected![5] Here Diane has connections only to others in
her immediate cluster—her clique. She connects only those who are al-
ready connected to each other—does she have too many redundant
links?
Figure 1: Human Network (a social network whose nodes include Andre, Beverly, Carol, Diane, Ed, Fernando, Garth, Heather, Ike, and Jane)
Betweenness
While Diane has many direct ties, Heather has few direct connections—
fewer than the average in the network. Yet, in many ways, she has one of
the best locations in the network—she is a boundary spanner and plays
the role of broker. She is between two important constituencies, in a role
similar to that of a border router. The good news is that she plays a
powerful role in the network, the bad news is that she is a single point of
failure. Without her, Ike and Jane would be cut off from information
and knowledge in Diane’s cluster.
Closeness
Fernando and Garth have fewer connections than Diane, yet the pattern of their ties allows them to access all the nodes in the network more quickly than anyone else. They have the shortest paths to all others—they are close to everyone else. Maximizing closeness between all routers improves updating and minimizes hop counts. Maximizing the
closeness of only one or a few routers leads to counterproductive re-
sults, as we will examine below.
Network Centralization
Individual network centralities provide insight into the individual’s loca-
tion in the network. The relationship between the centralities of all
nodes can reveal much about the overall network structure. A very cen-
tralized network is dominated by one or a few very central nodes. If
these nodes are removed or damaged, the network quickly fragments
into unconnected subnetworks. Highly central nodes can become criti-
cal points of failure. A network with a low centralization score is not
dominated by one or a few nodes—such a network has no single points
of failure. It is resilient in the face of many local failures. Many nodes or
links can fail while allowing the remaining nodes to still reach each
other over new paths.
Our social network algorithms can assist in measuring and meeting all
three goals.
• Reducing the hop count means minimizing the average path length throughout the network—maximize the closeness of all nodes to each other.
• Reducing the available paths leads to minimizing the number of geo-
desics throughout the network.
• Increasing the number of failures a network can withstand focuses
on minimizing the centralization of the whole network.
Star Topology
The Star topology, shown in Figure 2, has many advantages—but one
glaring fault. The advantages include ease of management and configu-
ration for the network administrators. For the Star, the three competing
goals delineate as follows:
• Reducing hop count: The short average path length (1.75) through-
out the network meets this goal well. Any router can reach any other
router in two steps or less.
• Reducing available paths: The minimum number of possible paths (56) needed to reach all other nodes will not overload the routing tables, nor cause delays during routing table updates. It takes only seven bidirectional links to create the available paths.
Figure 2: Routers in Star Topology (Routers A through H)

Network Measures
14 paths of length 1
42 paths of length 2
56 geodesics in network
Physical Links: 7
Average Path Length: 1.750
Longest Path: 2 hops
Network Centralization: 1.000 (maximum)
Ring Topology
The Ring topology, shown in Figure 3, is an improvement over the Star.
It has some of the same advantages, but does not eliminate all of the
drawbacks of the Star. The advantages include ease of management and
configuration for the network administrators—adding another router is
very simple. Unlike the Star topology, the Ring provides some redun-
dancy and, therefore, eliminates the single point of failure—all nodes
have an alternate path through which they can be reached. Yet it is still
vulnerable to both link and router failures. For the Ring, the three com-
peting goals delineate as follows:
• Reducing hop count: The average path length of 2.5 is quite long for
a small network of eight nodes. Some routers (that is, A and E) re-
quire four steps to reach each other! Many ring physical layers hide
this complexity from the IP layers in order to make those hops invisi-
ble to routing protocols.
Figure 3: Routers in Ring Topology (Routers A through H in a ring)

Network Measures
16 paths of length 1
16 paths of length 2
16 paths of length 3
16 paths of length 4
64 geodesics in network
Physical Links: 8
Average Path Length: 2.500
Longest Path: 4 hops
Network Centralization: 0.000 (minimum)
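The measures quoted for the Star and Ring can be reproduced by counting every geodesic (shortest path, with multiplicity) between ordered pairs of routers; note that the figures average path length over geodesics, not over node pairs. A sketch in Python:

```python
from collections import deque

def geodesic_stats(adj):
    """Count all geodesics (shortest paths, with multiplicity) between
    ordered node pairs; return (count, average length, longest path)."""
    total = weighted = longest = 0
    for src in adj:
        dist = {src: 0}
        sigma = {src: 1}          # number of shortest paths from src
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    sigma[w] = 0
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
        for w in adj:
            if w != src:
                total += sigma[w]
                weighted += sigma[w] * dist[w]
                longest = max(longest, dist[w])
    return total, weighted / total, longest

nodes = "ABCDEFGH"

# Star: Router A as hub, B through H as spokes.
star = {n: set() for n in nodes}
for n in "BCDEFGH":
    star["A"].add(n); star[n].add("A")
print(geodesic_stats(star))   # (56, 1.75, 2)

# Ring: A-B-C-D-E-F-G-H-A.
ring = {n: set() for n in nodes}
for i, n in enumerate(nodes):
    m = nodes[(i + 1) % 8]
    ring[n].add(m); ring[m].add(n)
print(geodesic_stats(ring))   # (64, 2.5, 4)
```

The ring's antipodal pairs (such as A and E) have two equally short routes, which is why it has 64 geodesics across only 56 ordered pairs.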
The disadvantages of the Full Mesh topology all focus on one glaring
fault—there are too many physical links. If the routers are far apart, the
link costs can quickly become prohibitively expensive because adding
routers creates a geometrical explosion in links required—soon the rout-
ers do not have enough ports to support this topology. Administering
the system and keeping an up-to-date topology map becomes more and
more complex as routers are added. The network in Figure 4 has 28 two-way links (8 × 7 / 2). Double the routers to 16 in a full mesh topology, and the link count increases to 120, a factor greater than 4.
Figure 4: Routers in Full Mesh Topology (Routers A through H, fully interconnected)

Network Measures
Physical Links: 28
Average Path Length: 1.000
Longest Path: 1 hop
Network Centralization: 0.000 (minimum)
Figure 5: Routers in Partial Mesh Topology (Routers A through H)

Network Measures
24 paths of length 1
48 paths of length 2
72 geodesics in network
Physical Links: 12
Average Path Length: 1.667
Longest Path: 2 hops
Network Centralization: 0.000 (minimum)
NSFnet Backbone
The NSFnet Backbone network, shown in Figure 6, connected the su-
percomputing centers in the USA in 1989. It is a partial mesh design that
functions as a real-life example to test our social network algorithms.
Figure 6: NSFnet in 1989 (regional networks: NWnet, MERITnet, NYSERnet, WESTnet, PSCnet, JVNCnet, BARRnet, USAN, MIDnet, NCSAnet, SURAnet, SDSCnet, SESQUInet)

[Table: Scenario | Number of Geodesics in the Network | Network Centralization | Average Path Length (hops) | Longest Path (hops)]
The most damaging was link failure 4—the link failure between NCSA
and PSC. This link is between two of the most central nodes in the net-
work. If the flows between nodes are distributed somewhat evenly, then
this link is one of the most traveled in the network.
The least damaging is node failure 3—the node failure at JVNC. In fact,
this failure improved most metrics! By removing this node from the net-
work, the number of network paths drops significantly, network
centralization decreases, path length decreases slightly, and the longest
path is still four hops.
The original NSFnet topology design is very efficient. I tried two differ-
ent strategies to improve the network. The first strategy involved moving
existing links to connect different pairs of routers. No obviously better
topology was found by rearranging links among the routers. I was not
able to find a better design that reduced both the number of geodesics
and the average path length without significantly increasing the number
of physical links in the network.
Because the NSFnet nodes had a maximum limit of three direct neigh-
bors, I started connecting the nodes of Degree = 2. Options 1 through 3
show the various combinations and their effect on the total network.
The improvements are minimal, yet each option offers specific strengths.
[Table: Scenario | Number of Geodesics in the Network | Network Centralization | Average Path Length (hops) | Longest Path (hops)]
Conclusion
In the real world we may not have the flexibility to experiment with our
network model as we have with these examples. There will be more
constraints. The information flows in your organization may require
that specific pairs of routers have direct links—even if those connections
would not be recommended by the algorithms we have been examin-
ing. Yet, when we have our “must-have” connections in place, we can
experiment with the placement of the remaining connections using these
social network metrics to indicate when we are getting close to a robust,
yet efficient topology.
References
[1] Krebs V., “Visualizing Human Networks,” Release 1.0, Esther Dyson’s
Monthly Report, February 1996.
[2] Watts D., Strogatz S., “Collective Dynamics of Small World Networks,”
Nature, 4 June 1998.
[6] Retana, A., Slice, D., White, R., Advanced IP Network Design, ISBN
1578700973, Cisco Press, 1999.
[7] Hagen G., Discussions with fellow network researcher, Guy Hagen,
regarding combinatorial algorithms and models for recommending
changes to improve the overall topology of a network.
A famous philosopher, Yogi Berra, once said, "Prediction is hard. Especially the future."[1] In spite of this sage advice, we will still make an attempt at identifying the frontiers for research networks. By first examining and then extrapolating from the evolution and history of past research networks, we may be able to get an idea about the frontiers that face research networks in the future. One of the initial roles of the research network was to act as a testbed for network research on basic network protocols, mostly focusing on network Layers 1 through 4 (that is, the physical, data link, network, and transport layers), but also including basic applications such as file transfer and e-mail. During the early phases of the Internet, the
commercial sector could not provide the network infrastructure sought
by the research and education communities. Consequently, research net-
works evolved and provided backbone and regional network
infrastructures that provided production-quality access to important re-
search and education resources such as supercomputer centers and
collaboratories[2]. Recent developments show that most research net-
works have moved away from being testbeds for network research and
have evolved into production networks serving their research and educa-
tion communities. It’s time to make the next real evolutionary step with
respect to research networks, and that is to shift our research focus to-
ward maximizing the most critical of resources—people.
In this article, the term “network research” means long-term basic re-
search on network protocols and technologies. The many types of
network research can be categorized into three classes. The first cate-
gory covers research on network transport infrastructure and generally
includes research on the Open System Interconnection (OSI) Model
Layers 1 through 4 (that is, the physical, data link, network, and trans-
port layers) as well as research issues relating to the interconnection and
peering of these layers and protocols. We will refer to this class of re-
search as “transport services.”
The second class consists of research covering what can nominally be re-
ferred to as “middleware”[6]. Middleware basically includes many of the
services that were originally identified as network Layers 4 through 6.
Layer 4 is included because of the need for interfaces to the network
layer (sockets, TCP, and so on).
The third area covers research on the real applications (for example, e-
commerce, education, health care, and so on), network interfaces, net-
work applications (for example, e-mail, Web, file transfer, and so on),
and the use of networks and middleware in a distributed heterogeneous
environment. Applications depend on both the middleware and transport layers. Advanced applications include Electronic Persistent Presence (EPP) and Ubiquitous Computing (UC). EPP, or e-presence, describes a state of a person
or application as always being “on the network” in some form or an-
other. The concept of session-based network access will no longer apply.
EPP assumes that support for UC and both mobile and nomadic net-
working exists. UC refers to the pervasive presence of computing and
networking capabilities throughout all of our environments; that is, in
automobiles, homes, and even on our bodies.
Background
During the early phases of the evolution of research networks and the In-
ternet, national research networks were building and managing
backbone networks because there was a technical reason to do so. Gov-
ernments supported these activities, because at the time the commercial
sector Internet Service Providers (ISPs) could not do it and the expertise
to do so resided within the R&E community. Much of the research or
testing of this time still focused on backbone technologies as well as ag-
gregation networks and architectures. Research networks started out by
supporting longer-term risky network research and quickly evolved to
support shorter-term no-risk production infrastructure.
At the end of the 1980s, the Internet and its associated set of protocols
rapidly gained speed in deployment and use among the research commu-
nity. This started the major shift away from research networks
supporting experimental network protocols toward RNs supporting ap-
plications via production research networks; for example, the mission
agencies’ (that is, those agencies whose mission was fairly well focused in
a few scientific areas) networks at the Department of Energy (DoE) (ES-
net[12]) and NASA (NSInet). At the same time, the NSFNET was still
somewhat experimental with the introduction and use of “home-grown”
T1 and T3 routers, as well as with pioneering research on peering and
aggregation issues associated with the hierarchical NSFNET backbone.
It also focused on issues relating to the interconnection of the major
agency networks and international networks at the Federal Internet Ex-
changes (FIXes), as well as the policy landscape of interconnecting
commercial e-mail (MCIMail) with the Internet. The primary policy
justification for supporting these networks (for example ESnet, NSInet,
NSFNET) in the late 1980s was to provide access to scarce resources,
such as supercomputer centers, although the NSFNET still supported
network research, albeit on peering and aggregation.
At this time, there were still no commercial service providers from which
to procure IP services to connect the numerous and varied sites of the
NSFNET and other research networks. Hence there were still valid tech-
nical reasons for NRNs and R&E networks to exist and provide
backbone services.
The FNC wisely left the management of the Internet protocols to the
IAB, the Internet Engineering Task Force (IETF), and the Internet Engi-
neering Steering Group (IESG); however, the FNC did not completely
relinquish its responsibility, as evidenced by its prominent role in prod-
ding the development of Classless Interdomain Routing (CIDR) and
originating the work that led to new network protocols (for example,
IPv6).
The initial phase was to expand to the vBNS and connect hundreds of
research universities. The vBNS again changed from a research net-
work, connecting a few sites and focusing on network and Metacenter
research, back into a production research network. The vBNS is soon
eclipsed by the OC-48 Abilene network. Gigapops, which are localized
evolutions of NAPs, are used to connect the top R&E institutions to the
Internet 2 backbones (that is, vBNS and Abilene).
Future Frontiers
UC and EPP are the paradigm shifts at the user level that are already
drastically altering our concept and understanding of networks. The
scale, number, and complexity of networks supporting these new appli-
cations will far exceed anything we have experienced or managed in the
past. Users will “be on the net” all the time, either as themselves or indi-
rectly through agents and “bots.” They will be mobile and nomadic.
There will be “n” multiple instances of a user active on a network at the
same time, and not necessarily from the same logical or geographical lo-
cation. The frontiers associated with this new focus are many times
more complex from a systems integration level than any work we have
done in the past with backbone networks. This new frontier will pro-
vide new technical challenges at the periphery of the network; that is,
the intelligent access and campus networks necessary to support these
new environments. EPP and UC will drastically affect our research networks and application environments, much as the Web and its protocols drastically changed the Internet and its traffic patterns in the 1990s.
The frontiers faced by research networks of the future will depend upon
many technical and sociopolitical factors on a variety of levels. The so-
ciopolitical frontiers can be divided into two different classes, one for e-
developed nations who have already gone through the learning process
Summary
“Being on the net” will change our way of doing e-everything, and the
evolution of the underlying infrastructure will need to change in order to
support this paradigm shift. The intelligence of the network will not
only move to the periphery, but even beyond, to the personal digital as-
sistant and body area network. Therefore, it is important that the goals
and focus of the research networks also evolve. Leave the R&D associ-
ated with backbone networks mainly with the commercial sector
because this is their raison d’etre. The research networks of the future
will be mostly VPNs, with a few exceptions, as noted earlier in this arti-
cle. Research networks need to focus on the new technologies at the
periphery as well as the middleware necessary to support the advanced
environments that will soon be commonplace. Many research networks
will themselves become virtual, for example, HEPnet, providing exper-
tise but not necessarily a network service.
Policy makers must adapt to address not only these substantial techni-
cal and architectural changes but also second-order policy issues such as
security and privacy and how to ensure that we don’t end up with a bi-
furcated digital economy of e-savvy and e-challenged communities.
Disclaimer
The ideas, comments, and projections proffered in this article are the
sole opinions of the author, and in no way represent or reflect official or
unofficial positions or opinions on the part of Cisco Systems, Inc. This
article is based on my experience designing and managing operational
international research networks, as well as being a program manager for
network research, during the formative years of the Internet (that is, my
tenure as a program manager for the United States Government’s Na-
tional Science Foundation and the Department of Energy), and my
recent experience within Cisco working with next-generation Internet
projects and managing its University Research Program. Many of the
examples that I cite in this work are based on the development and de-
ployment of the U.S.-based Internet and research networks, although the
lessons learned in the United States may also be illuminating elsewhere.
Gratitude
I would like to thank my friend and colleague, Dr. Stephen Wolff, of the
Office of the CTO, Cisco Systems Inc., for many good suggestions with
respect to improving the content and presentation of this article; but,
mostly for his good-humored authentication of my history and facts.
References
[0] This article was presented at the third Global Research Village
Conference organized jointly by the Organization for Economic
Cooperation and Development (OECD) and the Netherlands in
Amsterdam, December 6–8, 2000.
[3] http://www.nsf.gov/
[4] http://www.darpa.mil/
[5] http://www.gigaport.nl/
[8] http://www.nordu.net/
[9] http://www.canarie.ca/
[10] http://www.internet2.org/
[12] http://www.es.net/
[14] http://www.globus.org/
[15] http://www.cs.virginia.edu/~legion/
[16] http://www.cs.wisc.edu/condor/
[17] http://www.science.uva.nl/projects/polder/
[18] http://www.hep.net/hepnrc.html
[19] http://www.cs.utah.edu/flux/testbed/
Network security and the ability to detect intrusion attempts has be-
come extremely important in today’s networks, regardless of size. I
was looking for a book that would get technical on the details in
these matters. Laura Chappell, the guru of packet-level information
(www.packet-level.com), recommended this book to me. I should
have realized what I was getting into at that point. I purchased the
book, which was a bit expensive for its size at $39.99, and eagerly be-
gan reading it.
Mr. Northcutt starts out with a good discussion on how Kevin Mitnick
conducted his famous attack. The book presents some very good infor-
mation on a variety of topics, intermixed with personal observations
and opinion. This made for an enjoyable read. If you are considering
getting an Intrusion Detection System (IDS), then this book will provide
you with some valuable insight and guidelines to consider from a recog-
nized industry expert in this field. Mr. Northcutt is affiliated with The
System Administration, Networking, and Security (SANS) Institute
(www.sans.org).
Be aware that this book is not for the faint of heart. You will dive into
the depths of packets and intrusion detection rather quickly, and never
look back. This is both good and bad. I prefer an easy-to-read technical
book, but the level of technical knowledge required to make sense of
many of the examples is rather extensive. The many trace examples are presented in a rather specialized fashion and, in addition, the touted "detailed" explanations vary in usefulness quite a bit.
The book was marketed as a training aid; however, I suspect most read-
ers need to be quite experienced to benefit from it. I admit I had to read
many sections more than once in order to grasp the finer points being
conveyed. I am confident that many readers have already echoed this
sentiment to the author and publisher, since the second edition of this
book was published in September 2000 and the page count has dou-
bled, with only a modest price increase. I put it on my Christmas list!
The ICANN staff will now work through the end of the year to negoti-
ate registry agreements with the applicants selected. The proposed
schedule for completion of negotiations is December 31, 2000. The ne-
gotiated registry agreements must then be approved by the board of
directors. Following that approval, the ICANN board will forward its
recommendations to the U.S. Department of Commerce for implementa-
tion. For more on the history of ICANN’s new TLD application process,
please see http://www.icann.org/tlds/ Multimedia archives of the
annual meeting can be reviewed at http://cyber.law.harvard.edu/
icann/la2000/
This publication is distributed on an “as-is” basis, without warranty of any kind either express or
implied, including but not limited to the implied warranties of merchantability, fitness for a particular
purpose, or non-infringement. This publication could contain technical inaccuracies or typographical
errors. Later issues may modify or update information provided in this issue. Neither the publisher nor
any contributor shall have any liability to any person for any loss or damage caused directly or
indirectly by the information contained herein.