
Supercomputers: Past, Present, and the Future

by Sumit Narayan

“At each increase of knowledge, as well as on the contrivance
of every new tool, human labor becomes abridged.”
—Charles Babbage

Computers have become an integral part of everyday life, more so than Charles
Babbage could have foreseen. Computers are more popular and much faster today than they
were several years ago. There has been constant improvement in the speed and complexity of
microprocessors—the heart of every computing machine. In 1965, Gordon Moore, the co-founder
of Intel, predicted that the complexity of integrated circuits would approximately double every year.
From single-core to multi-core processors, the complexity of these microchips is closely linked to
their processing speed, and both have improved exponentially over time, broadly following
Moore's Law. Introduced in 1954, the IBM 704 was the first mass-produced computer with floating-point
arithmetic hardware, capable of executing 40,000 instructions per second. Today's
desktop and laptop computers can perform billions of instructions per second. We have come
a long way, driven by dreams of faster calculations, and supercomputers, computers that form the
forefront of all computing machines, were designed to fulfill those dreams. From the mysterious
Antikythera mechanism calculators to modern-age petaflop supercomputers, we have constantly
been searching for ways to make computations faster. Today’s supercomputers can do thousands of
trillions of floating-point computations per second and are constantly improving. Applications
such as weather prediction or nuclear reaction simulation comprise staggering numbers of
operations and may take several days to complete. IBM, Cray, Hitachi, SGI, Fujitsu, and many others
have invested millions of dollars and countless hours of work to develop systems to solve these com-
plex problems. In the past, these systems were available only to government-funded national and
international research centers, but recent technological advances and industry competition
have made these machines cheaper and more valuable to IT organizations.
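Moore's prediction is easy to make concrete. The sketch below assumes the commonly cited revised doubling period of two years and starts from the well-known figure of roughly 2,300 transistors on the Intel 4004 of 1971; these numbers are illustrative, not drawn from the article:

```python
# Back-of-the-envelope view of Moore's Law-style exponential growth.
# Starting point and doubling period are assumed values for illustration.
def transistor_estimate(start_count, start_year, year, doubling_years=2):
    """Estimate transistor count assuming a fixed doubling period."""
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

# From ~2,300 transistors in 1971 to 2009: nineteen doublings.
print(f"{transistor_estimate(2300, 1971, 2009):,.0f}")  # roughly a billion
```

Nineteen doublings turn a few thousand transistors into more than a billion, which is the scale of processors shipping around the time this article was written.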
Past: Early Supercomputers

CDC-6600, a mainframe computer produced by Control Data Corporation (CDC) in 1965, is regarded as the first successful supercomputer. Designed by Seymour Cray and Jim Thornton, the machine was capable of operating at 9 megaflops (MFLOPS)—thousands of times slower than our current desktops. CDC-6600 was also the first machine to introduce separate processors for handling housekeeping tasks such as memory access and input/output (I/O). Until then, the central processing unit (CPU) was in charge of performing all operations: computing, memory, and input/output. I/O in those days was usually done using punch cards or standard magnetic tapes and was extremely slow. By providing separate processors for each event, the CPU was responsible only for computations, leaving it with fewer instructions to execute. It also resulted in a smaller processor size, allowing it to be operated at a higher clock rate. This novel idea, which later came to be known as reduced instruction set computer (RISC), allowed the CPU, peripheral processors (PPs), and I/O units to operate in parallel, thus improving the overall speed and performance of the machine.

Cray's engineering continued at CDC, resulting in CDC-7600, a successor to and ten times faster than CDC-6600. Jim Thornton, on the other hand, became part of a new project, STAR-100, which was designed to operate at 100 MFLOPS. CDC's STAR-100 was released in 1974 and was one of the first machines to use a vector processor for improving math performance. Vector processing design allowed CPUs to perform mathematical operations on multiple data elements simultaneously. The CPU had to decode only a single instruction, set up the hardware, and start feeding the data.
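The payoff of that design — one decoded instruction applied to many data elements — carries directly into modern array programming. A small sketch using NumPy, whose array operations follow the same one-instruction-many-elements model (the arrays here are purely illustrative):

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar style: one dispatch and one multiply per element, in a loop.
scalar = [x * y for x, y in zip(a, b)]

# Vector style: the operation is set up once, then streams over all elements.
vector = a * b

assert np.allclose(scalar, vector)  # same results, far fewer dispatches
```

The results are identical; the difference is how many times the instruction machinery has to be exercised, which is exactly what vector processors were built to avoid.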
Crossroads www.acm.org/crossroads Summer 2009/ Vol. 15, No. 4 7



This technique remained very popular in the scientific community and formed the basis of design for several supercomputers in the 1980s and 1990s. But STAR-100, although designed to perform at 100 MFLOPS, gave lower than expected numbers in a "real-world" environment because the serialized part of the processing was still slow. Switching from vectors to normal data was still time-consuming, making the real-world performance slower than expected. This limitation had been formalized by Gene Amdahl in early 1967; however, Amdahl's Law was ignored by the architects of the STAR-100.

In 1971, Seymour Cray, unable to secure sufficient funds for his project at CDC, left the company to form Cray Research, where he designed Cray-1 (160 MFLOPS). The Cray-1 provided a good balance between scalar and vector performance and also used registers to dramatically improve performance. Registers are small amounts of memory storage available on processors. Their contents can be accessed at much faster speeds compared to external I/O components. However, because they reside on the processor's chip, they are more expensive to manufacture. They also provide less flexibility in terms of size, so Cray's machine could only read small parts of data at a time. The first release of Cray-1 was in 1976, and it dismissed STAR-100 from its top spot as the fastest supercomputer of that time. The first official customer, the National Center for Atmospheric Research (NCAR), paid $8.86 million to own the supercomputer. This machine shaped the computer industry for years to come. Cray-1 was also Cray's first supercomputer to use integrated circuits (ICs).

Cray-1 was succeeded in 1982 by Cray X-MP (800 MFLOPS), the first multiprocessing computer, and in 1985 by Cray-2, the first machine to break the gigaflop barrier at 1.9 GFLOPS. Cray-2 used all IC components instead of individual components and remained the fastest machine until 1987, when ETA Systems, a spin-off from CDC, designed a 10 GFLOPS machine called ETA-10. ETA-10 used fiber optics for communication between processors and I/O devices. ETA later merged back with CDC in 1989. In the meantime, two new companies, Thinking Machines Corporation (1982) and nCUBE (1983), were founded. Both companies specialized in parallel computing architectures. Thinking Machines, started by graduates from the Massachusetts Institute of Technology, produced several supercomputers released as Connection Machines. By 1993, four of the top five fastest supercomputers belonged to Thinking Machines. nCUBE, on the other hand, was started by a group of Intel employees who wanted Intel to enter into parallel computing but couldn't convince the decision-makers to undertake the endeavor. nCUBE released a parallel computer with the same name. In the mid-1990s, the supercomputer market collapsed, and both companies were acquired by bigger players in the business. The crash also forced Cray Research to merge with Silicon Graphics, Inc. (SGI) in 1996.

One of the major companies that has yet to be mentioned is IBM. Although IBM had built several of the fastest computers in the world (for example, the IBM 7030), it was not until 1993 that it entered the supercomputer market with IBM SP-1, the first member of IBM's Scalable POWERparallel family of distributed memory parallel computers, based on the RISC System/6000 processing element, which later became known as POWER (Performance Optimization With Enhanced RISC). In a distributed memory system, the memory and address space of each processor in a multi-processor system is local to itself. Data can only be shared between processors using a message passing interface like IBM's message passing library (MPL). IBM continued releasing several successors to IBM-SP, and it faced stiff competition from other players in the market, such as Hitachi and Intel. At the turn of the century, IBM was at the top of the fastest supercomputer list with IBM ASCI White. It had 8,192 processors, 6 TB of memory, 160 TB of storage space, and operated at 7.226 TFLOPS.

Present: Supercomputers Today

In 1993, based on ideas of Hans Meuer, a professor of Computer Science at the University of Mannheim, Germany, the TOP500 project was initiated. The aim of this project was to list the 500 most powerful computer systems in the world. The list, compiled biannually, ranks supercomputers based on their performance on the LINPACK benchmark—a linear algebra library for digital computers that tests the floating-point computing power of the system. Table 1 shows the fastest supercomputer in each period since 1993.

After IBM's ASCI White, Earth Simulator, developed by NEC in Japan, topped the list from 2002 to 2004. It was developed to understand global climate models, and it was capable of operating at over 35 TFLOPS. IBM returned with BlueGene to reposition itself as the leader in building the fastest supercomputer. Several prototypes of BlueGene were announced: BlueGene/L (released March 2005), BlueGene/C (in development), BlueGene/P (released June 2007), and BlueGene/Q (due 2011). BlueGene remained the fastest supercomputer until 2008, when it was replaced by RoadRunner, also designed by IBM. Other powerful supercomputers released during this period include Cray's XT-3 Red Storm, Cray's XT-4 Franklin, Cray's XT-5 Jaguar, Dell's Thunderbird, SGI's Columbia, and HP's Cluster Platform.

According to the list released in November 2008, the top three supercomputers and their average performance are:

1. IBM's RoadRunner at Los Alamos National Laboratory, USA: 1.105 PFLOPS.

2. Cray's Jaguar XT5 at Oak Ridge National Laboratory, USA: 1.059 PFLOPS.

3. SGI's Pleiades Altix ICE 8200EX at NASA/Ames Research Center, USA: 487.01 TFLOPS.

Period            | Supercomputer Name        | Maker
06/1993–11/1993   | CM-5 (Connection Machine) | Thinking Machines Corp.
11/1993–06/1994   | Numerical Wind Tunnel     | Fujitsu
06/1994–11/1994   | Paragon XP/S              | Intel
11/1994–06/1996   | Numerical Wind Tunnel     | Fujitsu
06/1996–11/1996   | SR 2201                   | Hitachi
11/1996–06/1997   | CP-PACS                   | Hitachi
06/1997–11/2000   | ASCI Red                  | Intel
11/2000–06/2002   | ASCI White                | IBM
06/2002–11/2004   | Earth Simulator           | NEC
11/2004–06/2008   | BlueGene                  | IBM
06/2008–06/2009   | RoadRunner                | IBM

Table 1: Fastest Supercomputers (1993–2009).
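Amdahl's Law, which the STAR-100 story illustrates, can be stated compactly: if a fraction p of the work benefits from a speedup of s, the untouched serial remainder limits the overall gain. A small sketch (the 90% parallel fraction is an illustrative figure, not a STAR-100 measurement):

```python
def amdahl_speedup(p, s):
    """Overall speedup when fraction p of the work is sped up by factor s
    (Amdahl's Law); the serial remainder (1 - p) caps the total gain."""
    return 1.0 / ((1.0 - p) + p / s)

# Even an infinitely fast vector unit cannot beat 10x if 10% stays serial.
for s in (10, 100, 1e9):
    print(f"vector speedup {s:>12}: overall {amdahl_speedup(0.9, s):.2f}x")
```

This is exactly the trap the STAR-100 fell into: spectacular vector throughput, but real workloads spent enough time in scalar code that the delivered performance fell far short of the 100 MFLOPS design target.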

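The LINPACK benchmark that ranks the TOP500 systems boils down to solving a dense linear system Ax = b, conventionally counted as about 2n³/3 floating-point operations; dividing that count by the measured runtime is what turns a run into a FLOPS rating. A toy version of the measurement (the matrix size and timing approach are illustrative, not the official HPL harness):

```python
import time

import numpy as np

n = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)      # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n ** 3       # conventional LINPACK operation count
print(f"~{flops / elapsed / 1e9:.2f} GFLOPS")

assert np.allclose(A @ x, b)   # check the solution, much as HPL verifies residuals
```

The real benchmark scales n up until the machine's memory is nearly full and distributes the factorization across all nodes, but the arithmetic of the rating is the same.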
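The distributed-memory model behind machines like the IBM SP can be sketched in miniature: each process owns its memory privately, and values move between processes only as explicit messages. The sketch uses Python's multiprocessing pipes as a stand-in for an interface like IBM's MPL or MPI; it illustrates the model, not either library's actual API:

```python
from multiprocessing import Pipe, Process


def worker(conn, local_value):
    # The worker's variables live in its own address space; the only way
    # to share a result with another process is an explicit message.
    conn.send(local_value * local_value)
    conn.close()


if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end, 7))
    p.start()
    result = parent_end.recv()   # blocks until the worker's message arrives
    p.join()
    print(result)                # prints 49
```

Contrast this with a shared-memory design, where both sides would simply read and write the same address; the message-passing discipline is what lets distributed-memory machines scale to thousands of loosely coupled nodes.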



IBM’s RoadRunner be configured with 4–32 GB DDR2 memory. XT-5 blades are intercon-
IBM’s RoadRunner is a hybrid system. It uses two different processor nected using Cray’s SeaStar2+ chips, which provide a very high bi-direc-
architectures—dual-core AMD Opteron server processor, based on tional link speed of 9.6 GB/s. The system installed at Oak Ridge National
AMD64 architecture, and IBM’s Cell processor, based on POWER Laboratory in the United States is a combination of both XT-4 and XT-
architecture. RoadRunner was built by IBM at Los Alamos National 5 machines. In total, the system peaks at 1.6 PFLOPS, consists of 45,376
Laboratory in the United States. It sports 6,562 Opteron processors, tak- Opteron processors, has 362 TB of memory and 10 PB of storage space.
ing care of standard processing, such as file system I/Os as well as 12,240
PowerXCell 8i processors, handling CPU-intensive tasks, such as math- Silicon Graphics’ Altix
ematical calculations. The system boasts 98 TB of memory and 2 PB of Altix is different from the above two supercomputers in that it is based
external storage. The machine had a peak performance of 1.7 PFLOPS. on Intel processors and is comprised of distributed shared memory ma-
This design is significantly different from BlueGene systems, which chines. The system is installed at the NASA/Ames Research Cen-
were based on PowerPC processors. The idea behind BlueGene was to ter/NAS and is nicknamed Pleiades. It consists of 12,800 Intel Xeon
trade the speed of processors for lower power consumption. Thus, processors with 51 TB of RAM and over 1 PB of storage. The system
BlueGene systems had a notably higher amount of processors compared peaks at 608 TFLOPS. The system supplements Columbia, which, with
to other supercomputers giving the same performance. 14,336 cores and 51 TFLOPS, ranked second in 2004, just behind IBM’s
BlueGene/L. The nodes in Altix are connected using NUMAlink4, de-
Cray’s Jaguar XT-5 veloped by SGI, capable of providing bandwidth of up to 6.4 GB/s.
Cray’s Jaguar XT-5, which was ranked second in November 2008, is
an updated version of Cray’s XT-4 supercomputer. It is based on Future: The Next Supercomputers
AMD’s Opteron quad-core processor. Each Cray XT-5 blade includes Normal consumers do not require a supercomputer for their regular
four compute nodes for high scalability, and each compute node cancomputing use. Supercomputers are primarily a necessity of scien-
tists performing mass computing at ultra-high
speed. They are used in all plausible domains:
Year Accomplishments space exploration, nuclear energy, climate pre-
1962 • Control Data Corporation opened lab; headed by Seymour Cray. diction, environmental simulations, gene
1965 • CDC 6600. technology, math, physics, and many others.
1969 • UNIX operating system developed by group of AT&T employees at Bell Labs. While supercomputers excel at highly com-
1972 • Seymour Cray started Cray Research Inc. putationally intensive tasks, they are not the
1976 • Delivery of Cray-1. fastest computers on the planet. The human
1982 • Delivery of Cray X-MP with two processors. brain controls thousands of human muscles,
1985 • Delivery of Cray 2 with four processors; peak performance of 2 GFLOPS. does audio and visual processing at ex-
• Delivery of CM-1 by Thinking Machines Corp. tremely high speeds, and controls thou-
• Delivery of iPSC/1 by Intel. sands of nerves. It does of these tasks in a frac-
• Delivery of Convex C1. tion of a second and is regarded as the
1987 • Delivery of ETA-10. fastest processor in the world. 10 PFLOP is
• Delivery of CM-2 by TMC. too slow to simulate the whole body, includ-
1992 • Delivery of Cray-3. ing tissue, blood flow, and movement. The
• Delivery of CM-5 by TMC. size/performance ratio of the human brain
• Production of Paragon/XP series by Intel. versus that of a supercomputer is beyond
• Apple, IBM, and Motorola formed AIM alliance to develop mass marker comparison. This is an indication that we still
for POWER processors, resulting in PowerPC. have a long way to go.
1993 • TOP500 Project. Construction of supercomputers is a
• Production of IBM SP1. very challenging and expensive task. It may
1997 • Delivery of Intel’s ASCI Red, the first TFLOP machine. take several years for a supercomputer to
1999 • Delivery of IBM’s ASCI Blue. move from the laboratory to the market, with
2000 • Delivery of IBM’s ASCI White. costs ranging $150–200 million or more.
2002 • Delivery of NEC’s Earth Simulator. Most of this work can only be done with the
2004 • Delivery of IBM’s BlueGene/L. support of government funds and govern-
• Delivery of Cray’s XT-3. ment-funded research centers. Designers of
2006 • Delivery of Cray’s XT-4. the world’s fastest supercomputers, IBM,
2007 • Cell processor released. Cray, SGI, Sun, HP, Hitachi, and many
• Delivery of IBM’s BlueGene/P. others, are putting forth the effort to create
• Delivery of Cray’s XT-5. a multi-petaflop machine. IBM is planning
2008 • Delivery of IBM’s RoadRunner. a 50 PFLOP machine by the end of 2013, and
it is estimated that within the next decade,
Table 2: Chronology of Supercomputers. we will have an exaflop machine. But the
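Peak ratings like Jaguar's can be sanity-checked with a standard rule of thumb: peak equals processors × cores per processor × clock rate × floating-point operations per core per cycle. In the sketch below, the 2.3 GHz clock and 4 FLOPs/cycle are assumed values chosen for illustration, not figures from the article:

```python
def peak_flops(processors, cores_per_proc, clock_hz, flops_per_cycle):
    """Theoretical peak: every core retires its maximum FLOPs every cycle."""
    return processors * cores_per_proc * clock_hz * flops_per_cycle

# Jaguar-like configuration: 45,376 quad-core Opterons; clock and
# FLOPs/cycle are assumptions for this back-of-the-envelope check.
peak = peak_flops(45_376, 4, 2.3e9, 4)
print(f"{peak / 1e15:.2f} PFLOPS")  # lands near the stated 1.6 PFLOPS peak
```

Sustained LINPACK performance always falls short of this theoretical number, which is why the TOP500 reports measured rather than peak figures.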




But the process of building faster machines is crippled by input/output units. I/O is not scaling as fast as Moore's Law. Research is being conducted on improving the design and performance of parallel file systems, including introducing solid-state drives. Other challenges include the search for experts in computational science, mathematics, and computer science to understand these complex systems and design software that takes advantage of the enormous computing power these supercomputers provide. As the future unfolds, it will be interesting to see what we accomplish next.

Acknowledgements

I would like to thank my advisor Dr. John A. Chandy for providing input and sharing his experiences and opinions.

Biography

Sumit Narayan is a PhD candidate at the University of Connecticut, Storrs. He holds a Master's from the University of Connecticut and a Bachelor's of Engineering from the University of Madras. His research interests include high-performance computing, parallel file systems, storage system architectures, and I/O subsystems.

ACM’s Career and Job Center!


Are you looking for your next IT job? Do you need Career Advice?
Visit ACM’s newest career resource at http://www.acm.org/careercenter

The ACM Career and Job Center offers ACM members a host
of exclusive career-enhancing benefits!:
• A highly targeted focus on job opportunities in the computing industry
• Access to hundreds of corporate job postings
• Resume posting – stay connected to the employment market and maintain
full control over your confidential information
• An advanced Job Alert system that notifies you of new opportunities matching your criteria
• Live career advice to assist you in resume development, creating cover letters, company research,
negotiating an offer and more
• Access to an extensive list of online career resources at our new site called
Online Resources for Graduating Students

The ACM Career and Job Center is the perfect place to begin searching for your
next employment opportunity! Visit today at http://www.acm.org/careercenter

www.acm.org/careercenter

10 Summer 2009/ Vol. 15, No. 4 www.acm.org/crossroads Crossroads

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy