CMPS375 Class Notes Chap 01
Introduction
1.1 Overview
1.2 The Main Components of a Computer
1.3 An Example System: Wading through the Jargon
1.4 Standards Organizations
1.5 Historical Development
1.5.1 Generation Zero: Mechanical Calculating Machines (1642–1945)
1.5.2 The First Generation: Vacuum Tube Computers (1945–1953)
1.5.3 The Second Generation: Transistorized Computers (1954–1965)
1.5.4 The Third Generation: Integrated Circuit Computers (1965–1980)
1.5.5 The Fourth Generation: VLSI Computers (1980–????)
1.5.6 Moore’s Law
1.6 The Computer Level Hierarchy
1.7 Cloud Computing: Computing as a Service
1.8 The von Neumann Model
1.9 Non-von Neumann Models
1.10 Parallel Processors and Parallel Computing
1.11 Parallelism: Enabler of Machine Intelligence—Deep Blue and Watson
Chapter Summary
TABLE 1.1 Common Prefixes Associated with Computer Organization and Architecture
VLSI (Very Large Scale Integration): more than 10,000 components per chip.
ENIAC-on-a-chip project (1997): the room-sized ENIAC was reimplemented as a
single VLSI chip.
VLSI allowed Intel, in 1971, to create the world’s first microprocessor, the 4004,
which was a fully functional, 4-bit system that ran at 108 kHz.
Intel also introduced the random access memory (RAM) chip, accommodating 4
kilobits of memory on a single chip.
Visit
o http://www.intel.com/about/companyinfo/museum/exhibits/moore.htm
o http://en.wikipedia.org/wiki/Moore's_law
In 1965, Gordon Moore (who later cofounded Intel) stated, “The density of
transistors in an integrated circuit will double every year.”
The current version of this prediction is usually conveyed as “the density of silicon
chips doubles every 18 months.”
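To make the 18-month doubling rule concrete, the short C sketch below projects
transistor counts forward from the Intel 4004’s roughly 2,300 transistors (the
starting count and the 30-year span are assumptions chosen just for this
illustration):

    /* Moore's law as a doubling rule: N(t) = N0 * 2^(t / 1.5), t in years. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double n0 = 2300.0;   /* approx. transistor count of the Intel 4004 (1971) */
        for (int year = 0; year <= 30; year += 6) {
            double n = n0 * pow(2.0, year / 1.5);   /* double every 18 months */
            printf("after %2d years: ~%.0f transistors\n", year, n);
        }
        return 0;
    }

After 30 years the rule predicts roughly a million-fold increase (2^20), which is
why the prediction, though informal, has been so influential.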
Computer users typically do not care about terabytes of storage or gigahertz of
processor speed; what matters to them is the services a system provides.
Many companies outsource their data centers to third-party specialists, who agree to
provide computing services for a fee. These arrangements are managed through
service-level agreements (SLAs).
Rather than paying a third party to run a company-owned data center, another
approach is to buy computing services from someone else’s data center and connect
to it via the Internet.
A cloud computing platform is defined in terms of the services it provides rather
than its physical configuration.
Cloud computing models:
o Software as a Service (SaaS):
A cloud provider might offer an entire application over the Internet, with no
components installed locally.
The consumer of this service buys application services; the consumer does not
maintain the application and need not be concerned with the infrastructure in
any way.
von Neumann computers execute instructions sequentially and are therefore extremely
well suited to sequential processing.
Harvard architecture: computer systems that have separate buses for data and
instructions.
Many non-von Neumann systems provide special-purpose processors to offload
work from the main CPU.
Parallel processors are technically not classified as von Neumann machines because
they do not process instructions sequentially.
Parallel processing allows a computer to simultaneously work on subparts of a
problem.
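As a minimal sketch of this idea in C with POSIX threads (the four-thread split and
the array-sum problem are choices made for illustration, not anything prescribed by
these notes):

    /* Split one problem (summing an array) into subparts that run simultaneously. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4

    static int data[N];
    static long long partial[NTHREADS];   /* one result slot per thread */

    static void *sum_part(void *arg) {
        int id = *(int *)arg;             /* which subpart this thread owns */
        int lo = id * (N / NTHREADS);
        int hi = lo + (N / NTHREADS);
        long long s = 0;
        for (int i = lo; i < hi; i++)
            s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        int ids[NTHREADS];
        for (int i = 0; i < N; i++)
            data[i] = 1;                  /* so the correct total is N */
        for (int i = 0; i < NTHREADS; i++) {
            ids[i] = i;
            pthread_create(&tid[i], NULL, sum_part, &ids[i]);
        }
        long long total = 0;
        for (int i = 0; i < NTHREADS; i++) {
            pthread_join(tid[i], NULL);   /* wait for each subpart to finish */
            total += partial[i];
        }
        printf("total = %lld\n", total);  /* prints 1000000 */
        return 0;
    }

Compile with gcc -pthread; on a multicore machine the four subparts can genuinely
run at the same time.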
Parallel computing
o In the late 1960s, high-performance computer systems were equipped with dual
processors to increase computational throughput.
o In the 1970s, supercomputer systems with 32 processors were introduced.
o Supercomputers with 1,000 processors were built in the 1980s.
o In 1999, IBM announced its Blue Gene system containing over 1 million
processors, each with its own dedicated memory.
Multicore architectures are parallel processing machines that allow for multiple
processing units (often called cores) on a single chip.
Each core has its own ALU and set of registers, but all cores share memory and
other resources.
“Dual core” differs from “Dual processor.”
o Dual-processor machines, for example, have two processors, but each processor
plugs into the motherboard separately.
o All cores in multicore machines are integrated into the same chip.
Multicore systems provide the ability to multitask
o For example, browse the Web while burning a CD
Multithreaded applications spread mini-processes, called threads, across one or more
processors for increased throughput.
o Programs are divided up into threads, which can be thought of as mini-processes.
o For example, a web browser is multithreaded; one thread can download text,
while each image is controlled and downloaded by a separate thread (see the
sketch below).
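A minimal sketch of that browser pattern in C with POSIX threads (the resource
names and the body of download are placeholders, not a real network API):

    /* One thread per resource, mirroring the browser example above. */
    #include <pthread.h>
    #include <stdio.h>

    static const char *resources[] = { "page.html", "logo.png", "photo.jpg" };

    static void *download(void *arg) {
        const char *name = arg;
        printf("thread fetching %s\n", name);   /* a real browser would fetch bytes here */
        return NULL;
    }

    int main(void) {
        enum { NRES = 3 };
        pthread_t tid[NRES];
        for (int i = 0; i < NRES; i++)
            pthread_create(&tid[i], NULL, download, (void *)resources[i]);
        for (int i = 0; i < NRES; i++)
            pthread_join(tid[i], NULL);         /* wait for all the mini-processes */
        return 0;
    }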
Examples of non-von Neumann languages include:
o Lucid: a language for dataflow computing
o QCL: Quantum Computation Language, for quantum computers
o VHDL and Verilog: languages used to program FPGAs
The quest for machine intelligence has been ongoing for over 300 years.
The 20th century witnessed the first machine that could beat a human grandmaster at
chess, when Deep Blue defeated Garry Kasparov in 1997.
But the machine and its algorithm relied on a brute-force approach that, although
impressive, was hardly “intelligent” by any measure.
Any definition of true machine “intelligence” would have to include the ability to
acquire new knowledge independent of direct human intervention, and the ability to
solve problems using incomplete and perhaps contradictory information.
This is precisely what IBM achieved when it built the machine named Watson.
Watson proved this when it beat two human Jeopardy! champions on February 16,
2011.
Watson had a massively parallel architecture dubbed DeepQA (Deep Question and
Answer).
The system relied on 90 IBM POWER 750 servers.
Each server was equipped with four POWER7 processors, and each POWER7
processor had eight cores, giving a total of 2880 processor cores.
While playing Jeopardy!, each core had access to 16TB of main memory and 4TB of
storage.
Watson's technology has been put to work in treating cancer.
o Commercial products based on Watson technology, including “Interactive Care
Insights for Oncology” and “Interactive Care Reviewer,” are now available.
Watson is also becoming more compact: Watson can now be run on a single POWER
750 server.
Watson has surely given us a glimpse into the future of computing.