Comp 101 PDF
Learning Outcomes
By the end of this topic, you will be expected to learn the following:
Definition of a Computer:
Develop a clear understanding of what a computer is, including its essential characteristics as an
electronic device that processes data and performs calculations based on a set of instructions
(software).
System Components
Understand the main components of a computer system, including the Central Processing Unit
(CPU), memory (RAM and storage), input/output devices, and networking components.
Computer Architecture:
Comprehend the basic architecture of a computer system, including how the CPU, memory, and
other components interact to execute programs.
3. Functions of a Computer
Data Processing
Explain how computers process data, including the concepts of input, processing, output, and
storage.
Types of Operations
Identify and describe the types of operations that computers perform, such as arithmetic
operations, logical operations, data transfer, and control operations.
Software Interaction
Understand the role of software in controlling computer hardware and executing tasks, including
the distinction between system software (e.g., operating systems) and application software.
Classification of Computers
Purpose-Specific Systems
Recognize the different purposes and environments in which various computer systems are
used, from general-purpose computing to specialized applications like gaming, scientific
computing, and automation.
Sequential and Parallel Computing
Understand the differences between sequential and parallel computing models, and how
these influence the performance and application of computer systems.
Distributed Computing
Explain the concept of distributed computing and how it allows for processing across
multiple systems, such as in cloud computing.
Real-World Applications
Identify and describe various real-world applications of computers in fields like healthcare,
education, finance, and entertainment.
Societal Impact
7. Future Trends
Emerging Technologies
Explore emerging trends in computer technology, such as quantum computing, artificial
intelligence, and the Internet of Things (IoT), and how they might shape the future of computing.
8. Practical Skills
Basic Troubleshooting
Develop basic troubleshooting skills to diagnose and fix common computer problems
related to hardware and software.
Programming Basics
Gain a fundamental understanding of programming concepts and how they relate to the
operation and functionality of computers.
Analytical Skills
Cultivate the ability to analyze and solve problems using computers, understanding how to
leverage computer functions for various tasks.
Computational Thinking
Introduction to Computer
What is a Computer?
A computer is an electronic device that accepts data from the user, processes it, produces results, displays them to the user, and stores the results for future use. Data is a collection of unorganized facts and figures that, on its own, gives no further information about patterns or context; hence data means "unstructured facts and figures".
Paying bills, buying groceries, using social media, seeking entertainment, working from home, and communicating with friends are all everyday activities done using a computer.
Information is structured data: data that has been organized, processed, and given meaning. A computer is used to process data and convert it into information.
Functions of Computers
a) Receiving input
Data is fed into the computer through various input devices like the keyboard, mouse, and digital pens. Input can also be fed through devices like CD-ROMs, pen drives, and scanners.
b) Processing
Operations on the input data are carried out based on the instructions provided in the programs.
c) Storing
After processing, the information gets stored in the primary or secondary storage area.
d) Producing output
Processed information and other details are communicated to the outside world through output devices like monitors and printers. The sketch below illustrates this cycle.
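These four functions can be pictured with a short, self-contained sketch. The following Python snippet is only a minimal illustration added here for clarity, not part of the original notes; the marks data and the results.txt file name are invented for the example.

```python
# Minimal sketch of the input -> process -> store -> output cycle.
# The marks entered and the "results.txt" file name are illustrative only.

def main():
    # a) Receiving input: read raw data (typed numbers stand in for a real device)
    raw = input("Enter marks separated by spaces: ")        # e.g. "56 72 91"

    # b) Processing: convert the raw text into numbers and compute an average
    marks = [int(value) for value in raw.split()]
    average = sum(marks) / len(marks)

    # c) Storing: keep the result in secondary storage for future use
    with open("results.txt", "w") as output_file:
        output_file.write(f"Average mark: {average:.2f}\n")

    # d) Producing output: communicate the result to the user
    print(f"Average mark: {average:.2f}")

if __name__ == "__main__":
    main()
```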
Computer Concepts & Description
1. History of Computers
The history of the computer dates back many decades. There are five prominent generations of computers, and each generation has witnessed several technological advances which changed the functionality of computers.
Computers play a role in every field of life. They are used in homes, businesses, educational
institutions, research organizations, medical fields, government offices, entertainment, etc.
Computer systems consist of three main components: the Central Processing Unit, input devices, and output devices.
a) Input Devices
The computer system responds to the instructions it receives from the users, and to get those instructions it needs an input unit. This unit includes all the input devices responsible for reading the data entered by the user.
The system does not respond unless it receives a command from the user through the input unit, that is, the input devices. Users enter commands as numbers, letters, images, and so on, and the input devices accept them. For example, when we use a keyboard to enter text, the keyboard acts as the input unit.
The input unit:
1. Accepts the data entered by the user.
2. Converts it into a machine-readable form.
3. Sends the data on to the processing unit for the next step.
1. Keyboard
2. Mouse
3. Joystick
4. Light pen
5. Trackball
6. Scanner
7. Graphic tablet
b) Output Devices
When the user sends a set of instructions to the computer, the results eventually reach the output devices, where the users get their results.
The processor sends the processed results to the output devices, which present them to the user. These devices are always connected to the system, so coordination is straightforward.
The monitor is one of the main output devices; it displays the results to the user. Everything that the input devices receive eventually reaches the output devices after it has been processed inside the machine. The output unit converts the machine-language results back into a user-friendly form; it is the medium through which users see that their requests have been carried out by the system.
1. Monitor
2. Printer
3. Plotter
4. Projector
5. Speaker
6. Headphones
7. Earphones
c) Computer Memory
Computer memory refers to the storage area where data is stored. It is of two types: primary memory and secondary memory.
This component deals with storing data. When data reaches the processor from the input devices, the memory unit saves it immediately. It has some pre-existing programs which help in transmitting the data to the other parts of the CPU. Similarly, the result of a task is also saved here before it reaches the user through the output devices. The processor cannot process data unless the memory unit has saved it.
This is where all the information becomes accessible to the user. Data size is measured in bits and bytes. The memory unit is further divided into primary and secondary storage.
Primary memory is internal and temporary. RAM is the primary memory in this case: it stores commands for a short time and is volatile in nature.
Secondary storage is non-volatile and permanent, but it is not directly accessible to the processor; data must first be transferred to primary memory before the processor can work on it. The memory unit also stores all the intermediate steps the system goes through during task execution.
4. Hardware & Software
Hardware refers to the physical devices that make up a computer, while software refers to the sets of instructions that run on it. Software can be categorized into two types:
1. System software
2. Application software
5. Programming Languages
Programming languages are used to write a program, or set of instructions, and are broadly categorized into three types:
1. Machine-level language
2. Assembly-level language
3. High-level language
6. Representation of Data/Information
Computers do not understand human language. Any data fed to the computer, such as letters, symbols, pictures, audio, or video, must first be converted to machine language. Computers ultimately represent all such data as binary digits (0s and 1s).
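As a small illustration (added for clarity, not part of the original notes), the snippet below shows how a computer can represent the letters of a word as numeric codes and, ultimately, as binary digits.

```python
# Sketch: how text is represented as numbers, and ultimately as binary digits.
word = "Hi"

for character in word:
    code = ord(character)          # numeric character code (e.g. 72 for 'H')
    bits = format(code, "08b")     # the same code written as 8 binary digits
    print(f"{character} -> {code} -> {bits}")

# Output:
# H -> 72 -> 01001000
# i -> 105 -> 01101001
```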
Data processing is a process of converting raw facts or data into meaningful information.
Applications of IECT
A computer system is made of several components which have specific functions and features to
support the entire system. While there are many devices under one system, they are divided into
three basic components of the computer system. They make data processing and performing
tasks easier and more convenient.
The main three components are – Input Unit, Output Unit, and Central Processing Unit. The
central processing unit is further divided into memory or storage unit, control unit, and an
arithmetical and logical unit. Each unit has its own special feature to assist the system.
The input unit is mandatory for taking in the instructions, the processing unit is important for
understanding them and the output unit enables the delivery of results. They all rely on each
other at every step and together accomplish all the tasks.
One thing to keep in mind is that the external appearance of these units might differ from system
to system. But at the end of the day, they will perform the same set of tasks as others. Let us take
a look at each of the components in detail with their distinctive functions.
Before the 20th century, most information was processed manually or with simple machines. Today, millions of people use computers in offices and at home to produce and store all types of information.
The following are some of the attributes that make computers widely accepted & used in the day-
to-day activities in our society
Speed
A computer works with higher speed and accuracy than humans when performing mathematical calculations. It can process millions (1,000,000) of instructions per second, and the time taken for individual operations is measured in microseconds and nanoseconds.
The speed of a computer is linked to the technology used to build it. Computers operate at very
high speeds and can perform very many functions within a very short time.
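As a rough, hedged illustration of this speed (the exact timing depends entirely on the machine and language used, and the numbers are not from the original notes), the snippet below times one million additions in Python.

```python
import time

# Time one million additions; the resulting figure varies from machine to machine.
start = time.perf_counter()
total = 0
for i in range(1_000_000):
    total += i
elapsed = time.perf_counter() - start

print(f"1,000,000 additions took {elapsed:.3f} seconds")
```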
1. They can perform a complicated task much faster than a human being.
a) 1940s to mid-1950s (vacuum tubes)
Computers were built using vacuum tubes, and speed was measured in milliseconds. A computer of this era could perform 5,000 additions and 300 multiplications per second.
b) Late 1950s to early 1960s (transistors)
Computers were built using transistors; operating speeds increased and were measured in microseconds. A computer could perform 1 million additions per second.
c) Mid 1960s.
Integrated Circuit (IC), combining a number of transistors & diodes together on a silicon chip
was developed. The speed increased to tens of millions of operations per second.
d) 1971
Intel Corporation produced a very small single chip called a microprocessor, which could perform all the operations of a computer's processor. The chip contained about 2,300 transistors.
e) Microprocessors
Microprocessors are very powerful, cheaper, and more reliable due to the use of Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) technologies, which combine hundreds of thousands of components onto a single chip. Computer speeds are now measured in nanoseconds and picoseconds.
Accuracy
Computers perform calculations with a very high degree of accuracy; errors usually occur because of inconsistent or inaccurate input data. A computer can work for very long periods without going wrong, and it has a number of in-built, self-checking features in its electronic components that can detect and correct errors. Most errors that do occur are committed by the users entering the data.
Reliability
A computer is reliable as it gives consistent results for a similar set of data: if given the same set of inputs any number of times, it will produce the same result.
A computer can be relied upon to produce the correct answer if it is given the correct instructions and supplied with the correct data.
Consistency
Computers are consistent: given the same data and the same instructions, they will produce the same answer every time that particular process is repeated.
Storage
A computer is capable of storing large amounts of data or instructions in a very small space. It can store data and instructions for later use and retrieve them when required so that the user can make use of them. Data stored in a computer can be protected from unauthorized individuals through the use of passwords.
Diligence
A computer can perform millions of tasks or calculations with the same consistency and accuracy. It does not feel any fatigue or lack of concentration, and its memory makes it superior to a human being in this respect.
A computer can work continuously without getting tired or bored. Even if it has to do a million
calculations, it will do the last one with the same speed and accuracy as the first one.
Automation
A computer is an automatic device: once given the instructions, it is guided by them and carries on its job automatically until the task is complete. It can perform a variety of jobs as long as there is a well-defined procedure, and it performs those tasks without manual intervention.
Versatile
Versatility refers to the capability of a computer to perform different kinds of work with the same accuracy and efficiency. A computer can be used in different places to perform a large number of different jobs depending on the instructions fed to it.
Memory
A computer has built-in memory, called primary memory, where it stores data. Secondary storage includes removable devices such as CDs and pen drives that are used to store data.
Since a computer can only work with a strict set of instructions, it identifies and imposes rigid rules for dealing with the data it is given to process.
Components of Computer
A computer device is made up of various elements which help in its effective functioning and
processing. There are five basic components of the computer that help in making the processing
of data easier and more convenient.
The components of a computer system are the primary elements that make the functioning of an
electronic device smooth and faster. There are five basic components which include:
1. Input Unit
2. Output Unit
3. Memory Unit
4. Control Unit
5. Arithmetic and Logical Unit (ALU)
The exterior of any computerized device may look different and may also have varied features,
but the basic components remain the same for their functioning.
Given below are the 5 components of a computer along with their purpose and functions.
Input Unit
A computer will only respond when a command is given to the device. These commands can be given using the input unit, that is, the input devices.
For example, using a keyboard we can type text into Notepad; the computer processes the entered data and then displays the output on the screen.
The data entered can be in the form of numbers, letters, images, and so on. We enter the information using an input device, the processing units convert it into a computer-understandable language, and the final output is returned to us in a human-understandable form.
Output Unit
When a computer is commanded to perform a task, it responds with the result of the action performed. This result is called output. There are various output devices connected to the computer, the most basic of which is the monitor: whatever we write using a keyboard or click using a mouse is displayed on the monitor.
The output unit gives us the final result once the entire processing is done within the mechanism
of a device.
Consider a visit to an ATM: we enter details such as the language, PIN, and amount to be withdrawn, and the money that the cash dispenser releases is the outcome. Here the cash dispenser acts as an output unit.
Memory Unit
When we enter the data into the computer using an input device, the entered information
immediately gets saved in the memory unit of the Central Processing Unit (CPU). Because of the
presence of some existing programming, the Memory Unit transmits the data further to the other
parts of the CPU.
Similarly, when the output of our command is processed by the computer, it is saved in the
memory unit before giving the output to the user.
Control Unit
This is the core unit that manages the entire functioning of the computer device. It is one of the
most essential components of the computer system.
The Control Unit collects the data entered using the input unit, leads it on for processing, and, once that is done, receives the output and presents it to the user. It can be described as the centre of all processing actions taking place inside a computer device.
This unit manages the functioning of the computer device and is the central component of the processor unit. Once the data is in memory, it directs its processing for further execution. It is where the conversion of data from human language to machine language is coordinated. It interprets the signals, sends them over to the output unit and, once the result is out, retrieves the data and presents it to the user.
Arithmetic and Logical Unit (ALU)
This unit of the processor takes care of the mathematical calculations and logical operations that the computer system performs while functioning. It is also used for data comparison and actions involving decision-making. It provides the operations needed for different mathematical tasks such as addition, subtraction, multiplication, and so on.
The ALU receives the data from memory in the form of registers. These registers are used for memory addressing, data manipulation, and processing, and they may sometimes have distinctive features. The ALU performs a calculation only when needed and then sends the result on towards the output devices.
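The sketch below is a deliberately simplified model, added for illustration only, of the kinds of operations an ALU carries out: arithmetic on two operands, bitwise logic, and comparison for decision-making. A real ALU does this in hardware on binary registers.

```python
# Very simplified model of ALU-style operations on two operands.
# Real ALUs work on binary registers in hardware; this only mirrors the idea.

def alu(operand_a: int, operand_b: int, operation: str):
    if operation == "ADD":
        return operand_a + operand_b          # arithmetic operation
    if operation == "SUB":
        return operand_a - operand_b
    if operation == "AND":
        return operand_a & operand_b          # bitwise logical operation
    if operation == "CMP":
        return operand_a > operand_b          # comparison / decision-making
    raise ValueError(f"Unknown operation: {operation}")

print(alu(6, 3, "ADD"))   # 9
print(alu(6, 3, "AND"))   # 2  (binary 110 AND 011 = 010)
print(alu(6, 3, "CMP"))   # True
```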
Central Processing Unit (CPU)
The CPU consists of three units:
a. Memory Unit
b. Control Unit
c. Arithmetic and Logical Unit
All three units are elements of the CPU and together help in the efficient working and processing of data. The CPU also enables data transfer between primary and secondary memory by decoding instructions. It is known as the "brain of the computer", and no action can be carried out by a device without the execution and permission of the Central Processing Unit.
The device is a close-knit circuit built around a microprocessor, which helps in fetching the data and providing suitable results to the user. Thus, the CPU is the main processing unit of the computer.
CPU acts as the computer’s brain, handling all tasks and coordinating communication among
different parts. It’s incredibly crucial as it’s the main driving force behind every operation.
Without it, nothing else in the computer functions. To power up and do its job, the CPU relies on
the energy supplied by the power source. It’s essential to regularly maintain the CPU by cleaning
it, removing dust, checking the fan, and ensuring all circuits are connected. Inside the CPU case reside vital components such as circuit boards, memory (RAM), and the control circuitry.
Motherboard
Think of the motherboard as the heart of the computer. It’s the central electrical circuit that
connects all the different parts, distributing power as needed and transmitting information across
the entire system. It’s a pivotal junction responsible for processing information and data within
the computer. Housing components like the CPU, memory, and secondary storage devices, the
motherboard ensures these parts communicate effectively. Given its sensitivity, it requires
maintenance checks to prevent issues from high temperatures, pressure, or humidity that might
cause malfunctions or electrical problems.
RAM (Random Access Memory)
RAM serves as the computer’s short-term memory, storing data, processes, and commands
actively being used. When you open software or run an application, the RAM keeps it active and
working smoothly. Different computers might use various types of RAM. The more RAM a
computer has, the better its processing power and multitasking ability. Users often upgrade their
computer’s RAM to complement system updates or software enhancements. Regularly clearing
temporary files and shutting down the computer after use helps optimize RAM efficiency.
VGA Port
The VGA port acts as a bridge between the computer and the screen, transmitting visual
information from the computer to the display. It’s commonly found on the back or side of
screens and facilitates connections between different devices, like projectors or additional
screens. The quality of the visuals often depends on the number of connectors available in the
port. In some cases, newer computers or compact laptops might not feature a VGA output port
due to their design constraints. In such instances, a signal converter becomes necessary to link
these devices with screens or projectors.
Power Supply
The power supply fuels the entire computer system by providing electricity. Usually located at
the CPU or PC tower’s back, the power cord plugs directly into the electricity socket. Places with
varying power may employ a UPS (uninterruptible power supply) unit to safeguard against
fluctuations affecting computer performance. Many modern PCs come with built-in UPS units.
Laptops rely on rechargeable batteries that need periodic charging. Over time, laptop batteries’
performance diminishes, impacting their lifespan and efficiency.
Cooling Fan
Computers generate heat while operating, and cooling fans are crucial for regulating internal
temperatures and preventing overheating. They circulate air within the system, ensuring optimal
functioning. High-end systems might feature multiple cooling fans to support heavy-duty tasks
like gaming or professional applications such as video editing. It’s essential to check and clean
the fan regularly if your computer frequently overheats to remove any debris that might hinder
its performance.
Hard Drive
Hard drives are storage devices that house files, information, and programs, storing data digitally
on magnetically coated discs. Higher storage capacity enables more data storage, and external
removable storage devices offer efficient data management. Cloud-based storage services
provide an alternative for freeing up system storage. As hard drives can fail, it’s advisable to
regularly back up data to prevent data loss.
Display Monitor
The display monitor serves as the computer screen for viewing programs and operating the
system. Resolution and pixel density determine a monitor’s sharpness and quality. Protecting
your eyes from the artificial light emitted by screens is essential; prolonged exposure can be
harmful. Using anti-glare films or wearing glasses helps minimize these effects.
Keyboard
The keyboard serves as the primary input tool for text, characters, and commands. Featuring
keys for alphabets, numerals, symbols, and special commands, various types of keyboards are
available, with most sharing a standard layout. Keyboards can be wired or wireless and often
feature backlit keys for nighttime or low-light usage, aiding visibility and convenience. In virtual
PCs or tablets, keyboards may be digital, mimicking the physical keyboard’s functionality.
Operating System
An operating system is software that controls the system’s hardware and interacts with the user and application software. It is the computer’s chief control program. Among other things, an operating system:
a. Coordinates how a program works with the hardware and other software.
b. Provides resources that copy or move data from one document to another, or from one program to another.
c. Recognizes keystrokes or mouse clicks and displays characters or graphics on the screen.
User Interface
While working with a computer, we use a set of items on the screen called the "user interface". In simple terms, it acts as the interface between the user and the software application or program.
Running an Application
The operating system offers an interface between programs and the user, as well as between programs and other computer resources such as memory, the printer, and other programs.
We will learn different settings in the Operating System such as changing system date and time,
changing display properties, etc.
Files and Directories
A file is a collection of information. The information can consist of numbers, characters, graphs, images, and so on. A directory is a place or location where a set of files is stored.
The file management system is the software used to create, delete, modify, save, and control access to files.
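As a hedged sketch of what a file management system does (the directory and file names are made up for the example), the snippet below creates, modifies, reads, and deletes a file using Python's standard library.

```python
from pathlib import Path

# Illustrative names only; any directory and file name would work.
directory = Path("my_documents")
directory.mkdir(exist_ok=True)                         # create a directory

notes = directory / "notes.txt"
notes.write_text("First line of the file\n")           # create a file
notes.write_text(notes.read_text() + "Second line\n")  # modify (append) content

print(notes.read_text())                               # read the file back

notes.unlink()                                         # delete the file
directory.rmdir()                                      # remove the (now empty) directory
```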
Types of Files
There are several types of files, such as ordinary files, directory files, device files, and FIFO files.
WORD PROCESSING
A word processor is used to manipulate text documents. It is an application program used to create, format, and edit documents such as letters, reports, and web pages. It is also used to add images, preview the complete text before printing, and organize data into lists that can then be summarized, compared, and presented graphically. It allows headers and footers to display descriptive information and can produce personalized letters through mail merge. It also allows resources such as clip art and drawing tools to be shared across all Office programs.
Word Processing Concepts & Description
Opening Word Processing Package
Word automatically starts with a blank page. For opening a new file, click on "New".
Page Setup
Page setup options are usually available in "Page Layout" menu. Parameters defined by the user
help in determining how a printed page will appear.
Print Preview
This option is used to view the page or make adjustments before any document gets printed.
In this section, we shall learn how to use cut, copy and paste functions in Word.
Table Manipulation
Manipulation of table includes drawing a table, changing cell width and height, alignment of text
in the cell, deletion/insertion of rows and columns, and borders and shading.
SPREADSHEET
Microsoft Excel is a spreadsheet application that is used to create and manage lists of
information. Excel allows you to enter, edit, manage, and analyze large amounts of data in a
worksheet and create colorful charts and graphs. It uses formulae to calculate and analyze data. It
helps to combine a series of commands using "Macros", thus saving time. At higher levels, you
can use it as a complete development tool catering to many complex requirements.
The following topics explain the concepts related to spreadsheets in detail: elements of an electronic spreadsheet, manipulation of cells, functions, and charts.
Manipulation of Cells
Manipulation of cells is entering and modifying the contents of the cells.
Here, we will look into how to create text series, number series, and date series.
Modifying or adding text or using cut, copy, paste operations to an existing document is known
as editing.
We shall learn how to use functions and charts in Microsoft Excel, using formulas for addition, subtraction, multiplication, and division.
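To make the idea of a formula concrete, here is a small illustrative sketch (not taken from Excel itself): a Python dictionary stands in for a tiny worksheet, and the assignments mirror what formulas such as =A1+B1 compute.

```python
# A tiny "worksheet": cell addresses map to values, as in a spreadsheet.
cells = {"A1": 20, "B1": 5}

# Equivalent of the Excel formulas =A1+B1, =A1-B1, =A1*B1, =A1/B1
cells["C1"] = cells["A1"] + cells["B1"]   # 25
cells["D1"] = cells["A1"] - cells["B1"]   # 15
cells["E1"] = cells["A1"] * cells["B1"]   # 100
cells["F1"] = cells["A1"] / cells["B1"]   # 4.0

for address in ("C1", "D1", "E1", "F1"):
    print(address, "=", cells[address])
```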
Chart
A chart is a graphical representation of worksheet data. Charts can make data interesting,
attractive and easy to read and evaluate. They can also help you to analyze and compare data.
Example
Procedure (excerpt): in the chart, click on the chart title box and type "Population of metropolitan cities".
Result
The given data is presented in the Excel worksheet as a bar chart.
This chapter described in detail the concepts of opening new and existing worksheets, renaming a worksheet, organizing a spreadsheet, printing a spreadsheet, saving workbooks, manipulation of cells, entering text, numbers and dates, creating text, number and date series, editing worksheet data, inserting and deleting rows and columns, changing cell height and width, using formulas, and creating a chart. It also covered cell addresses, numbers and text, the title bar, the menu bar, the formula bar, and functions and charts.
The Internet is a global communication system that links together thousands of individual
networks. It allows the exchange of information between two or more computers on a network.
Thus Internet helps in the transfer of messages through mail, chat, video & audio conferences,
etc. It has become mandatory for day-to-day activities: bill payment, online shopping, surfing,
tutoring, working, communicating with peers, etc.
In this topic, we are going to discuss in detail concepts like basics of computer networks, Local
Area Network (LAN), Wide Area Network (WAN), the concept of the internet, basics of internet
architecture, services on internet, World Wide Web and websites, communication on internet,
internet services, preparing computer for internet access, ISPs and examples
(Broadband/Dialup/Wi-Fi), internet access techniques, web browsing software, popular web
browsing software, configuring web browser, search engines, popular search engines/search for
content, accessing web browser, using favorites folder, downloading web pages and printing web
pages.
Internet Architecture
The Internet is called the network of networks. It is a global communication system that links together thousands of individual networks. The Internet architecture is a meta-network: a congregation of thousands of distinct networks interacting through a common protocol (TCP/IP).
Services on Internet
Internet acts as a carrier for numerous diverse services, each with its own distinctive features and
purposes.
Communication on Internet
Communication can happen through the Internet by using Email, Internet Relay Chat, Video
Conference etc.
World Wide Web
"World Wide Web", or simply "the Web", is the name given to all the resources of the Internet. The special software or application program with which you can access the web is called a "web browser".
Search Engines
Search Engine is an application that allows you to search for content on the web. It displays
multiple web pages based on the content or a word you have typed.
Search engines help to find content on the web in several stages.
There are several ways to access a web page like using URLs, hyperlinks, using navigating tools,
search engine, etc.
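As a hedged illustration of accessing a web page by its URL (the address used is only a placeholder), the snippet below performs the same basic request a web browser makes, using Python's standard library, and prints the first part of the returned HTML.

```python
from urllib.request import urlopen

# example.com is a placeholder; any reachable URL works.
url = "https://example.com/"

with urlopen(url) as response:
    html = response.read().decode("utf-8")

print(f"Fetched {len(html)} characters from {url}")
print(html[:80])   # first few characters of the page's HTML
```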
In this topic, we are going to discuss in detail about basics of email, email addressing,
configuring email client, using emails, opening email client, mailbox, creating and sending a new
email, replying to an email message, forwarding an email message, sorting and searching emails,
advanced email features, sending documents by email, activating spell check, using the address book,
sending softcopy as attachment, handling spam, instant messaging and collaboration, using
emoticons and some of the internet etiquettes.
Basics of E-mail
Electronic mail is an application that supports the interchange of information between two or
more persons. Usually, text messages are transmitted through email. Audio and video transfer
through email depends on the browser in use. This provides a faster way of communication at an
affordable cost.
Advantages of E-mail
Functionalities like attachment of documents, data files, program files, etc., can be enabled. This
is a faster way of communication at an affordable cost.
Disadvantages of E-mail
If the connection to the ISP is lost, then you cannot access email. Once you send an email to a recipient, you have to wait until he or she reads it and replies.
Email Addressing
Email address is a unique address given to the user that helps to identify the user while sending
and receiving messages or emails.
Configuring an email client means setting up the client, which involves several steps, such as entering the email address, password, and mail-server details.
Using E-mails
The main purpose of using email is to exchange information between people. The process starts with opening the email client and ends with sending the mail and verifying that it reached the recipients.
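The sketch below illustrates the create-and-send steps an email client performs, using Python's standard library. It is illustrative only: the sender, recipient, server name, and password are placeholders, and a real account on a real SMTP server would be required for it to run end to end.

```python
import smtplib
from email.message import EmailMessage

# All addresses and the server name below are placeholders.
message = EmailMessage()
message["From"] = "sender@example.com"
message["To"] = "recipient@example.com"
message["Subject"] = "Hello from an email sketch"
message.set_content("This is the body of the message.")

# An email client would now hand the message to an outgoing (SMTP) server.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                    # switch to an encrypted connection
    server.login("sender@example.com", "app-password")   # placeholder credentials
    server.send_message(message)
```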
Email provides many advanced features which includes sending attachments like documents,
videos, images, audio, etc.
Instant Messaging
Instant messaging is real-time mutual communication between people via the Internet. It is a private chat: once the recipient is online, you can start sending messages to him or her.
Internet etiquettes
Internet etiquettes are also called "netiquette". Netiquettes are basic rules or techniques that are accepted worldwide.
Presentations
Microsoft PowerPoint is one of the powerful tools of MS-Office, which helps in creating and
designing presentations. PowerPoint Presentation is an array of slides that convey information to
people in an attractive manner.
This topic gives details about the applications of presentations using Microsoft PowerPoint: opening and saving a presentation, creating a presentation using templates or a blank presentation, entering and editing text, inserting and deleting slides, preparing slides, inserting a Word table or an Excel worksheet and other objects, adding clip art, resizing and scaling objects, providing aesthetics by enhancing text presentation, working with colors and line styles, adding movies and sound, headers and footers, viewing a presentation, choosing a setup for the presentation, printing slides and handouts, the Slide Show, running a Slide Show, transitions and slide timings, and automating a Slide Show.
Using PowerPoint
Microsoft PowerPoint is one of the powerful tools of MS-Office, which helps in creating and
designing presentations
Creation of Presentation
A presentation is made up of a number of slides that are displayed in a sequence. Each slide has sub-topics and different content related to the given topic.
Preparation of slides
Preparation of slides involves inserting a Word table or an Excel worksheet, adding clip art pictures, and inserting other objects.
Providing Aesthetics
This feature helps our PowerPoint presentation to look more attractive and interesting.
Program Example
Here we will create a simple presentation with at least 5 slides to introduce a friend and include audio in the slides.
Presentation of Slides
Presentation of Slides has the feature like viewing a presentation, choosing a set up for
presentation, printing slides
Slide Show
Slide Show view of the presentation is used to display content of presentation to the audience.
Editing is not possible in the Slide Show view.
Example
Create a simple presentation with at least 5 slides on the essay, “An astrologer’s day”
a. Procedure (key steps)
1. Once you open PowerPoint, choose the type of presentation you want and click OK.
2. Draw a text box in the slide and enter information about the essay, "An Astrologer's Day".
3. Right-click on the text box and select Custom Animation.
4. Click the first slide and drag the mouse to select all the slides.
5. Run your presentation by clicking the "From Beginning" option under Slide Show or by pressing the F5 key.
b. Result: the presentation runs as a slide show from the first slide.
Topic 1 Summary
1. Definition of a Computer
A computer is an electronic device that accepts data, processes it according to a set of instructions (software), performs calculations, and produces and stores results.
2. Components of a Computer System
Hardware:
1. Central Processing Unit (CPU): The brain of the computer that executes instructions and processes data.
2. Memory (RAM and Storage): RAM (Random Access Memory) temporarily stores data for quick access, while storage devices (like SSDs or HDDs) hold data permanently.
3. Input/Output Devices: Input devices (e.g., keyboard, mouse) allow users to interact with the computer, while output devices (e.g., monitors, printers) display results.
4. Motherboard and Power Supply: The motherboard connects all components, and the power supply provides the necessary electrical power.
Software
1. System Software: Includes the operating system (e.g., Windows, macOS, Linux),
which manages hardware resources and provides a platform for running applications.
2. Application Software: Programs that perform specific tasks for users, such as word
processing, web browsing, or gaming.
3. Functions of a Computer
1. Input: Data and instructions are entered into the computer through input devices such as the keyboard and mouse.
2. Processing: The CPU processes the input data according to the software's instructions, performing calculations and making decisions.
3. Output: The processed data is presented to the user through output devices, like a
display or printer.
4. Storage: Data and instructions are stored for future use, either temporarily in RAM
or permanently in storage devices.
4. Types of Computers
4. Embedded Systems: Specialized computers built into other devices, like cars or
home appliances, to control specific functions.
Societal Impact
Computers have revolutionized almost every aspect of modern life, from business and healthcare to education and entertainment. They enable the digital economy, drive innovation, and support complex decision-making processes.
8. Future Trends:
Learning Outcomes
By the end of this topic, you will be expected to have an understanding of the following concepts with regard to computers.
1. Historical Understanding
Chronological Knowledge
Influential Figures
Identify and describe the contributions of key figures in the history of computing, such as
Charles Babbage, Alan Turing, John von Neumann, and others.
2. Generations of Computers
Generational Progression
Understand the progression of computer technology through the five generations, including
the characteristics and innovations of each generation:
Technological Advancements
Describe the key technological advancements that defined each generation of computers
and how they improved computing power, efficiency, and usability.
Architectural Evolution
Understand how computer architecture has evolved over time, including the transition from simple, single-task systems to complex, multi-core, and distributed systems.
Moore's Law
Explain the significance of Moore's Law in the evolution of computer systems, particularly in terms of increasing processing power and reducing the size of computing devices.
Software Evolution
Trace the evolution of software from early machine code and assembly languages to high-
level programming languages and modern software development methodologies.
Operating Systems
Understand the role and evolution of operating systems in managing computer resources
and providing a user interface.
Networked Computers
Distributed Computing
Understand the evolution of distributed computing and its role in the development of cloud
computing and the modern digital landscape.
User Interfaces
Describe how user interfaces have evolved from command-line interfaces to graphical user interfaces (GUIs) and beyond, influencing the accessibility and usability of computers.
Human-Computer Interaction
Societal Changes
Evaluate the impact of evolving computer systems on society, including their role in
transforming industries, economies, and daily life.
Ethical Considerations
Reflect on the ethical implications of the evolution of computer systems, including issues
related to privacy, security, and the digital divide.
Modern Developments
Identify and describe current trends in computer systems, such as the rise of artificial
intelligence, machine learning, quantum computing, and the Internet of Things (IoT).
Discuss potential future developments in computer systems and how they might continue to
evolve, considering both technical advancements and societal needs.
Historical Analysis
Develop the ability to analyze historical trends in computing and apply this knowledge to
understand current and future technological developments.
Technological Literacy
Innovation Awareness
Problem-Solving
Generations of computers
Many different types of mechanical devices followed that built on the idea of the Analytical Engine. The very first electronic computers were developed by Konrad Zuse in Germany in the period 1935 to 1941. The Z3 was the first working, programmable and fully automatic digital computer. The original was destroyed in World War II, but a replica has been built by the Deutsches Museum in Munich. Because his devices implemented many of the concepts we still use in modern-day computers, Zuse is often regarded as the 'inventor of the computer.'
Around the same time, the British built the Colossus computer to break encrypted German codes for the war effort, and the Americans built the Electronic Numerical Integrator and Computer, or ENIAC. Built between 1943 and 1945, ENIAC weighed 30 tons and was 100 feet long and eight feet high.
long and eight feet high. Both Colossus and ENIAC relied heavily on vacuum tubes, which can
act as an electronic switch that can be turned on or off much faster than mechanical switches,
which were used until then. Computer systems using vacuum tubes are considered the first
generation of computers.
Vacuum tubes, however, consume massive amounts of energy, turning a computer into an oven.
The semiconductor transistor was first conceived and patented in 1926, but only in 1947 was a reliable, solid-state transistor developed that was suitable for use in computers. Like a vacuum tube, a transistor controls the flow of electricity, but it was only a few millimetres in size and generated little heat.
Computer systems using transistors are considered the second generation of computers.
It took a few years for the transistor technology to mature, but in 1954 the company IBM
introduced the 650, the first mass-produced computer. Today's computers still use transistors,
although they are much smaller. By 1958 it became possible to combine several components,
including transistors, and the circuitry connecting them on a single piece of silicon. This was the
first integrated circuit. Computer systems using integrated circuits are considered the third
generation of computers. Integrated circuits led to the computer processors we use today
The 1930s marked the beginning of calculating machines, considered the first programmable computers.
Konrad Zuse created what became known as the first programmable computer, the Z1, in 1936 in his parents' living room in Berlin.
The 1940s saw the emergence of electronic computers, the ENIAC (Electronic Numerical
Integrator and Computer) and the EDVAC (Electronic Discrete Variable Automatic Computer).
These machines used vacuum tubes and punched cards for data processing.
These first-gen computers relied on ‘machine language’ (which is the most fundamental
programming language that computers can understand).
These computers were limited to solving one problem at a time. Input was based on punched cards and paper tape; output emerged on printouts.
UNIVAC, the first American commercial computer, was launched in the United States. It was also one of the first computers to be mass-produced.
The IBM 701, IBM's first mainframe, was another notable innovation in early commercial computing.
The IBM 704 followed; it is closely associated with the FORTRAN programming language, which was developed for it.
The smaller IBM 650 was developed in the 1950s. Its smaller size and footprint made it popular, even though it weighed more than 900 kg and had an additional 1,350 kg power supply.
Advantages of the first generation:
1. First-generation computers were quite robust and difficult to hack.
2. They could perform calculations quickly, in just one-thousandth of a second.
Disadvantages of the first generation:
In 1947, the invention of the transistor at Bell Labs revolutionized computing. Transistors replaced bulky vacuum tubes, making computers smaller, faster, and more reliable.
Second-generation computers still relied on punched cards for input and printouts for output.
The programming language evolved from binary (machine) language to symbolic (assembly) language, which meant programmers could specify instructions in words.
Until 1965, computers were used mainly by mathematicians and engineers in a lab setting. The Olivetti Programma 101 changed everything by offering the general public a desktop computer that anyone could use. The 65-pound machine was the size of a typewriter and had 37 keys and a built-in printer. Can you imagine yourself using this machine?
The invention of transistors made it possible to replace vacuum tubes and build smaller computers. Although these machines were initially less reliable than their predecessors, they used significantly less power.
These transistors were the catalyst for innovations in computer peripherals: the IBM 350 RAMAC, the first disk storage unit, was introduced in 1956. Remote terminals also became more popular with second-generation computers.
Advantages of the second generation:
1. Computers developed in this era were smaller, more reliable, and used less power.
Disadvantages of the second generation:
1. They were only used for specific objectives and required frequent maintenance.
2. Second-generation computers still used punched cards for input.
Because of the IC, the computer became more reliable and fast, required less maintenance, was smaller in size, was more affordable, and generated less heat.
Third-generation computers significantly reduced computation time: the computational time of the second generation, measured in microseconds, decreased to nanoseconds. In this generation, punched cards were gradually replaced by the keyboard and mouse for input.
The Xerox Alto was created in the ’70s as a personal computer that could print documents and
send emails. What was most notable about the computer was its design, which included a mouse,
keyboard, and screen.
Intel’s 4004 microprocessor marked a pivotal moment in computing history. It was the world’s
first commercially available microprocessor and laid the groundwork for the personal computer
revolution.
The release of the IBM Personal Computer, powered by Microsoft’s MS-DOS operating system,
marked the beginning of the personal computer era. It set industry standards and paved the way
for the advancements of PCs.
The iMac G3 was launched in 1998 and quickly became known for its Bondi blue, clear casing. The 38-pound iMac included USB ports, a keyboard, and a mouse. It was meant to be portable and customizable.
Fun fact of the day: the iMac was the first time Apple used the "i" to name its products, explaining that it stood for "internet," "innovation," and "individuality."
Tim Berners-Lee’s invention of the World Wide Web revolutionized communication and
information access. The web made the internet user-friendly and accessible to the masses.
The advent of smartphones and tablets transformed computing into a complete mobile
experience, with powerful handheld devices becoming integral to daily life.
The microchip is one of the most important advances in computing technology. There was a lot
of overlap between transistor-based and chip-based computers during the 1960s.
A microchip was what triggered the development of minicomputers and microcomputers. These
computers were small enough that they could be afforded by small businesses and individual
owners.
The microchip also catalyzed the microprocessor, which was an important breakthrough
technology in personal computer development.
Three different microprocessor designs appeared at about the same time: Intel's 4004 was soon followed by models from Texas Instruments (the TMS 1000) and Garrett AiResearch (the Central Air Data Computer, or CADC).
4-bit processors were the first to be developed. In 1972, however, the 8-bit model was quickly
adopted.
In 1980, AT&T Bell Labs created the first fully 32-bit single-chip microprocessor, able to use 32-bit buses, 32-bit data paths, and 32-bit addresses.
64-bit microprocessors were first introduced in some markets in the early 1990s. However, they
didn’t appear on the PC market until 2000.
Smartphones of today have faster processors and more memory than desktop computers from ten years ago.
The Droid smartphone can perform basic computing tasks such as emailing and surfing the web.
The rise of mobile computing began in the 1980s with the development of laptops, and it
accelerated in the 2000s with the introduction of smartphones and tablets. Mobile computing has
changed the way people work and communicate, allowing them to access information and stay
connected on the go. The rise of mobile computing has also had a significant impact on the tech
industry, as companies have had to adapt to the changing needs and preferences of users.
Mobile computing has also presented new challenges, such as security concerns and the need for
new forms of interaction. However, advances in hardware and software have made mobile
devices more powerful and versatile than ever before, and they are likely to remain an important
part of the computing landscape for the foreseeable future.
In the 1980s, pocket computers were introduced. Many of these devices were abandoned in the
1990s.
There was a wide range of models, from Apple to Palm. The touchscreen interface was the main feature of the PDA; some are still made today, but they have been mostly replaced by smartphones.
Smartphones are capable of performing most computing functions including browsing the
internet, emailing, and uploading photos and videos.
Another recent development in computing history is the creation of netbooks. Netbooks can perform many of the same functions as regular laptops, such as managing email, using basic office software, and browsing the Internet. Many netbooks also have Wi-Fi connectivity and mobile broadband options.
The Asus Eee PC 700, the first mass-produced netbook, was released in 2007. Although initially released in Asia, these netbooks were quickly released in the USA.
Other manufacturers followed their example and produced additional models between 2008 and 2009. The price of netbooks is generally lower, usually between US$200 and US$600. Comcast offered a 2009 promotion that included free netbooks for customers who signed up for their cable internet service.
Netbooks are available with Windows or Linux installed as standard. Soon, netbooks with
Android-based technology will be available from Asus and other manufacturers.
Computing’s history stretches over almost two centuries. This is more than most people know.
Computers’ history has seen a lot of changes, from the mechanical machines of the 1800s to
large mainframes in the mid-20th century to modern netbooks and smartphones.
Over the last 100 years, computing has evolved exponentially. It is impossible to predict the
future 100 years from now.
Fourth-generation computers also introduced GUI (Graphical User Interface) technology to provide users with better comfort. On the other hand, they use complex VLSI chips, whose manufacture requires advanced, cutting-edge technology, as does producing the integrated circuits (ICs) needed to build them.
This is the computer generation that we use today. Computer devices with full artificial intelligence are still in development, but some of these technologies are already emerging and being used, such as voice recognition and ChatGPT. AI is becoming a reality through the adoption of parallel processing and superconductors. In the future, computers will be revolutionized again by quantum computation, molecular computing, and nanotechnology.
Today’s most innovative computers are tablets and iPads, which are simple touchscreens without
a keyboard, mouse, or a separate CPU.
Today’s computer market is filled with other computer models, including the MacBook Pro,
iMac, Dell XPS, and iPhones.
Analog computers
One advantage of analog computation is that it may be relatively simple to design and build an
analog computer to solve a single problem. Another advantage is that analog computers can
frequently represent and solve a problem in “real time”; that is, the computation proceeds at the
same rate as the system being modelled by it. Their main disadvantages are that analog
representations are limited in precision—typically a few decimal places but fewer in complex
mechanisms—and general-purpose devices are expensive and not easily programmed.
Digital computers
Mainframe computer
Mainframe computers were developed in the 1950s and were used primarily for business and
scientific purposes. These computers were larger and more powerful than early electronic
computers, and they were used for batch processing and time-sharing. Mainframe computers
made it possible to perform complex calculations and data processing tasks more quickly and
efficiently than ever before, which helped to advance business and scientific research.
The role of mainframes in business computing was significant, particularly in the fields of
banking and finance. They made it possible to automate certain business processes, such as
payroll and accounting, which increased efficiency and productivity. They also made it possible
to process large volumes of data quickly and accurately, which helped to drive innovation in the
financial industry.
The evolution of mainframe technology and applications has been significant since the 1950s,
with advances in hardware and software making these computers more powerful and versatile
than ever before. Today, mainframes are still used for tasks such as batch processing and time-
sharing, but they also have applications in fields such as healthcare, transportation, and logistics.
Mainframes continue to play a key role in business computing, and they are likely to remain an
important part of the computing landscape for years to come.
During the 1950s and ’60s, Unisys (maker of the UNIVAC computer), International Business
Machines Corporation (IBM), and other companies made large, expensive computers of
increasing power. They were used by major corporations and government research laboratories,
typically as the sole computer in the organization. In 1959 the IBM 1401 computer rented for
$8,000 per month (early IBM machines were almost always leased rather than sold), and in 1964
the largest IBM S/360 computer cost several million dollars.
These computers came to be called mainframes, though the term did not become common until
smaller computers were built. Mainframe computers were characterized by having (for their
time) large storage capabilities, fast components, and powerful computational abilities. They
were highly reliable, and, because they frequently served vital needs in an organization, they
were sometimes designed with redundant components that let them survive partial failures.
Because they were complex systems, they were operated by a staff of systems programmers, who
alone had access to the computer. Other users submitted “batch jobs” to be run one at a time on
the mainframe.
Such systems remain important today, though they are no longer the sole, or even primary,
central computing resource of an organization, which will typically have hundreds or thousands
of personal computers (PCs). Mainframes now provide high-capacity data storage for Internet
servers, or, through time-sharing techniques, they allow hundreds or thousands of users to run
programs simultaneously. Because of their current roles, these computers are now called servers
rather than mainframes.
Supercomputer
The most powerful computers of the day have typically been called supercomputers. They have
historically been very expensive and their use limited to high-priority computations for
government-sponsored research, such as nuclear simulations and weather modeling. Today many
of the computational techniques of early supercomputers are in common use in PCs. On the other
hand, the design of costly, special-purpose processors for supercomputers has been replaced by
the use of large arrays of commodity processors (from several dozen to over 8,000) operating in
parallel over a high-speed communications network.
Minicomputer
Although minicomputers date to the early 1950s, the term was introduced in the mid-1960s.
Relatively small and inexpensive, minicomputers were typically used in a single department of
an organization and often dedicated to one task or shared by a small group. Minicomputers
generally had limited computational power, but they had excellent compatibility with various
laboratory and industrial devices for collecting and inputting data.
One of the most important manufacturers of minicomputers was Digital Equipment Corporation
(DEC) with its Programmed Data Processor (PDP). In 1960 DEC’s PDP-1 sold for $120,000.
Five years later its PDP-8 cost $18,000 and became the first widely used minicomputer, with
more than 50,000 sold. The DEC PDP-11, introduced in 1970, came in a variety of models, small
and cheap enough to control a single manufacturing process and large enough for shared use in
university computer centers; more than 650,000 were sold. However, the microcomputer
overtook this market in the 1980s.
Microcomputer
Laptop computer
The first true laptop computer marketed to consumers was the Osborne 1, which became
available in April 1981. A laptop usually features a “clamshell” design, with a screen located on
the upper lid and a keyboard on the lower lid. Such computers are powered by a battery, which
can be recharged with alternating current (AC) power chargers. The 1991 PowerBook, created
by Apple, was a design milestone, featuring a trackball for navigation and palm rests; a 1994
model was the first laptop to feature a touchpad and an Ethernet networking port. The popularity
of the laptop continued to increase in the 1990s, and by the early 2000s laptops were earning
more revenue than desktop models. They remain the most popular computers on the market and
have outsold desktop computers and tablets since 2018.
Embedded processors
Another class of computer is the embedded processor. These are small computers that use simple
microprocessors to control electrical and mechanical functions. They generally do not have to do
elaborate computations or be extremely fast, nor do they have to have great “input-output”
capability, and so they can be inexpensive. Embedded processors help to control aircraft and
industrial automation, and they are common in automobiles and in both large and small
household appliances. One particular type, the digital signal processor (DSP), has become as
prevalent as the microprocessor. DSPs are used in wireless telephones, digital telephone and
cable modems, and some stereo equipment.
Personal Computers
The personal computer revolution began in the 1970s with the development of the first
commercially available personal computers. The Apple I and the IBM PC were among the first
personal computers, and they were small, affordable machines that could be used by individuals
and small businesses. The rise of personal computers changed the way people worked and
interacted with technology, and it paved the way for the rise of the internet and the digital age.
Personal computers made it possible for individuals to perform tasks such as word processing,
spreadsheets, and graphics design from the comfort of their own homes. They also made it
possible to access and share information more easily than ever before, which helped to drive
innovation and collaboration in fields such as science, business, and education.
Today, personal computers continue to be an important part of our lives, and they are used for
everything from work and entertainment to communication and education. Advances in hardware
and software have made personal computers more powerful and versatile than ever before, and
they are likely to remain an important part of the computing landscape for the foreseeable future.
The latest developments in computing include quantum computing and artificial intelligence.
Quantum computing has the potential to solve complex problems that are beyond the capabilities
of classical computers, while artificial intelligence is already being used to automate tasks and
make decisions in a wide range of industries. However, the development of these new
technologies also raises important ethical questions, such as privacy, security, and bias.
The future of computing is likely to be shaped by the ongoing development of these technologies
and by the choices that society makes about how they are used. It is important for individuals,
businesses, and governments to carefully consider the potential impact of emerging technologies
and to work together to address the challenges they present. By doing so, we can create a future
that is brighter and more equitable for all.
Topic summary
1. Early Beginnings
1. Mechanical Devices: The evolution of computer systems began with early mechanical
devices such as the abacus and later, more complex machines like Charles Babbage's Analytical
Engine in the 19th century, which was a conceptual precursor to modern computers.
2. First Programmable Devices: The early 20th century saw the development of the first
programmable devices, like Konrad Zuse’s Z3 (1941) and Alan Turing’s conceptual Turing
machine, laying the groundwork for digital computing.
Vacuum Tubes: The first generation of computers used vacuum tubes for circuitry and
magnetic drums for memory. These machines were large, expensive, and consumed a lot of
power. Notable examples include the ENIAC and UNIVAC.
Transistors: Transistors replaced vacuum tubes, making computers smaller, faster, and more
reliable. This era also saw the development of high-level programming languages like COBOL
and FORTRAN.
Artificial Intelligence and Quantum Computing: The fifth generation is characterized by the
integration of artificial intelligence (AI) into computing systems, leading to smarter, more
autonomous devices. Quantum computing, which leverages quantum mechanics, promises to
solve complex problems far beyond the capabilities of classical computers.
1. From Simple to Complex Systems: Computer architecture evolved from simple, single-task
machines to complex, multi-core systems capable of multitasking and parallel processing. The
von Neumann architecture, which introduced the concept of stored-program computers, became
a foundational model.
2. Impact of Moore’s Law: Gordon Moore’s observation that the number of transistors on a
microchip doubles approximately every two years led to exponential growth in computing
power, enabling the rapid evolution of computer systems.
2. Operating Systems: The evolution of operating systems allowed for better resource
management, user interfaces, and the ability to run multiple applications simultaneously. This
development was crucial in the widespread adoption of computers in various sectors.
2. Distributed Computing: The rise of distributed computing and cloud computing has
allowed for the distribution of computational tasks across multiple machines, leading to
increased efficiency and the advent of services like cloud storage and online applications.
2. Challenges and Ethical Considerations: As computers have become more powerful and
ubiquitous, they have also raised new ethical challenges, including concerns about privacy,
security, and the digital divide.
1. Artificial Intelligence and Machine Learning: Current trends include the integration of AI
and machine learning into everyday applications, making systems more adaptive and intelligent.
3. The Internet of Things (IoT): The IoT connects everyday devices to the Internet, enabling
smarter and more efficient systems, from smart homes to industrial automation.
This summary provides an overview of the key stages and factors in the evolution of computer
systems, highlighting the technological advancements that have shaped modern computing and
its impact on society.
Topic 3
Computer Hardware
a. Input
b. Storage
c. Control
d. Processing
e. Output
2. Storage: Data and instructions enter main storage, and are held until needed to be
worked on. The instructions dictate action to be taken on the data. Results of action will
be held until they are required for output. Main storage is supplemented by auxiliary
storage, also called backing storage, e.g. hard disks for mass storage purposes. Backing storage serves an important role in holding ‘maintained data’, i.e. data held by the computer so that it can provide information to the user when required to do so.
3. Control: The processor controls the operation of the computer. It fetches instructions from main storage, interprets them, and issues the necessary signals to the components making up the system. It also directs all the hardware operations necessary to obey the instructions.
4. Processing: Instructions are obeyed and the necessary arithmetic operations, etc. are carried out on the data. The part of the processor that does this is sometimes called the Arithmetic-Logical Unit (ALU), although in reality, as with the “control unit”, there is often no physically separate component that performs this function. In addition to arithmetic, the processor also performs what are called “logical” operations. These operations take place at incredibly high speeds, e.g. 10 million numbers may be totalled in one second.
5. Output: Results are taken from main storage and fed to an output device. This may be printed text, sound, or charts and graphs displayed on a computer screen.
§ Data normally flows from input devices or backing storage into main storage, and from main storage to output devices.
§ The processor performs operations on data from main storage and returns the results of
processing to main storage.
§ In some cases, data flows directly between the processor and input or output devices
§ The Arithmetic-Logical Unit (ALU) and control unit combine to form the processor.
§ There are two types of flow shown in the figure: solid lines carry data or instructions, while broken lines carry commands or signals.
§ Data held on backing storage may be input into main memory during processing, used and
brought up to date using newly input data, and then returned to backing storage.
Input Units
Input units consist of devices that translate data into a form that the computer can understand, e.g. binary format. They are divided into three types:
Keyboard hardware
Pointing devices
Source data-entry
Keyboard hardware: this is a device that converts letters, numbers, and other characters into electrical signals that are machine-readable by the computer’s processor. It looks like a typewriter keyboard, and contains alphabetic and numeric characters as well as other function keys.
Pointing devices: these control the position of the cursor or pointer on the screen. Examples are mice, light-pens, touchpads, etc.
Source data-entry devices: these refer to the many forms of data-entry devices that are not keyboards or pointing devices. They create machine-readable data on magnetic media or paper, or feed it directly into the computer’s processor. They include scanning devices, sensors, etc.
Computer Hardware
Central Processing Unit
The brain of a computer system is the central processing unit, which we generally refer to as
the CPU or mainframe. The central processing unit is the computer. It is the CPU that processes
the data transferred to it from one of the various input devices, and then transfers either the
intermediate or final results of the processing to one of many output devices. A central control
section and work areas are required to perform calculations or manipulate data. The CPU is the
computing center of the system. It consists of a control section, internal storage section (main
or primary memory), and arithmetic-logic section. Each of the sections within the CPU serves a
specific function and has a particular relationship to the other sections within the CPU.
Control Section
The control section may be compared to a telephone exchange because it uses the instructions
contained in the program in much the same manner as the telephone exchange uses telephone
numbers. When a telephone number is dialed, it causes the telephone exchange to energize
certain switches and control lines to connect the dialing phone with the phone having the number
dialed. In a similar manner, each programmed instruction, when executed, causes the control
section to energize certain control lines, enabling the computer to perform the function or
operation indicated by the instruction.
The program may be stored in the internal circuits of the computer (computer memory), or it
may be read instruction-by-instruction from external media. The internally stored program type
of computer, generally referred to only as a stored-program computer, is the most practical type
to use when speed and fully automatic operation are desired.
Computer programs may be so complex that the number of instructions plus the parameters
necessary for program execution will exceed the memory capacity of a stored-program
computer. When this occurs, the program may be sectionalized; that is, broken down into
modules. One or more modules are then stored in computer memory and the rest in an easily
accessible auxiliary memory. Then as each module is executed producing the desired results, it is
swapped out of internal memory and the next succeeding module read in.
In addition to the commands that tell the computer what to do, the control unit also dictates how
and when each specific operation is to be performed. It is also active in initiating circuits that
locate any information stored within the computer or in an auxiliary storage device and in
moving this information to the point where the actual manipulation or modification is to be
accomplished.
The four major types of instructions are (1) transfer, (2) arithmetic, (3) logic, and (4)
control. Transfer instructions are those whose basic function is to transfer (move) data from
one location to another. Arithmetic instructions are those that combine two pieces of data to
form a single piece of data using one of the arithmetic operations.
Logic instructions transform the digital computer into a system that is more than a high-speed
adding machine. Using logic instructions, the programmer may construct a program with any
number of alternate sequences. For example, through the use of logic instructions, a computer
being used for maintenance inventory will have one sequence to follow if the number of a given
item on hand is greater than the order amount and another sequence if it is smaller. The choice of
which sequence to use will be made by the control section under the influence of the logic
instruction. Logic instructions, thereby, provide the computer with the ability to make decisions
based on the results of previously generated data. That is, the logic instructions permit the
computer to select the proper program sequence to be executed from among the alternatives
provided by the programmer.
Control instructions are used to send commands to devices not under direct command of the
control section, such as input/output units or devices.
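As an illustration, the following Python sketch simulates a hypothetical, highly simplified machine (the instruction names, memory layout and program are invented for illustration only) and shows how the control section might execute each of the four instruction types in turn.

# A toy machine illustrating the four instruction types:
# transfer, arithmetic, logic and control. Everything here is hypothetical.
memory = {"A": 7, "B": 5, "C": 0}

program = [
    ("LOAD", "C", "A"),       # step 0, transfer: copy the value of A into C
    ("ADD", "C", "B"),        # step 1, arithmetic: C = C + B
    ("JUMPGT", "C", 10, 4),   # step 2, logic: if C > 10, skip to step 4
    ("LOAD", "C", "B"),       # step 3, executed only if the jump is not taken
    ("PRINT", "C"),           # step 4, control: command the output device to print C
]

pc = 0                        # program counter, maintained by the control section
while pc < len(program):
    instruction = program[pc]
    op = instruction[0]
    if op == "LOAD":          # transfer instruction
        memory[instruction[1]] = memory[instruction[2]]
        pc += 1
    elif op == "ADD":         # arithmetic instruction
        memory[instruction[1]] += memory[instruction[2]]
        pc += 1
    elif op == "JUMPGT":      # logic instruction: choose between alternate sequences
        pc = instruction[3] if memory[instruction[1]] > instruction[2] else pc + 1
    elif op == "PRINT":       # control instruction: drive a device outside the control section
        print(instruction[1], "=", memory[instruction[1]])
        pc += 1

Step 2 is the logic instruction: depending on the value held in C, the control section either continues with the next instruction or selects a different program sequence.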
Arithmetic-Logic Section
Output Units
These are output devices that translate information processed by the computer into a form that humans can understand. They are divided into:
§ Softcopy output
§ Hardcopy output
§ Other output devices
Softcopy output devices: these are output devices that show programming instructions and data as they are being input, and information after it is processed. Examples: monitor, flat-panel display, etc.
Hardcopy output devices: these are devices that print characters, symbols, and perhaps graphics on paper or another hard-copy medium. Examples are printers, plotters, etc.
Other output devices: these refer to output hardware for sound output, voice output, video output, virtual reality, and simulation devices. They include speakers, etc.
Storage Unit
These refer to devices used for storing data or computer instructions. They are divided into three:
§ Main memory
§ Secondary memory
§ Registers
Main memory: this is used for holding data and instructions required immediately by the CPU. It is characterized by fast access to information, low capacity and high cost. There are two main types:
RAM (Random Access Memory): it can be both read, to retrieve information, and written into, to store information. The contents of RAM remain stable only as long as power is available, i.e. it is volatile, and it has a short response time.
ROM (Read-Only Memory): it provides permanent or semi-permanent storage only. Its contents can be read but cannot be rewritten during normal computer operations. It is non-volatile.
Secondary memory: It is used for storing backup information that is not needed immediately by
the CPU. They are characterized by slow access of information, higher capacity and lower cost.
Examples: hard disk, floppy.
Registers: high-speed circuits that serve as a staging area for temporarily storing data during processing.
Optical Character Recognition (OCR) is the recognition of printed or written text characters by
a computer. This involves photo-scanning of the text character-by-character, analysis of the
scanned-in image, and then translation of the character image into character codes, such as
ASCII, commonly used in data processing.
Optical Mark reading (OMR) is a method of entering data into a computer system.
Optical Mark Readers read pencil or pen marks made in pre-defined positions on paper forms
as responses to questions or tick list prompts.
Cache memory is a small-sized type of volatile computer memory that provides high-speed data
access to a processor and stores frequently used computer programs, applications and data.
Factors that should be considered when purchasing computer systems
1. Warranty
Warranty makes up the most important consideration for many people when buying a system. Being covered by the right kind of hardware warranty is essential, and it should be unconditional. I still remember when Sony had to recall several laptops due to a battery fault that caused their laptops to catch fire. I’m sure none of us would like to have a laptop that we can fry omelettes on without the guarantee of replacement.
2. Processor
The Processor is one of the most important parts of a system and can mean the difference
between a system that frequently hangs and one that runs smoothly. Some people might think that going for a low-cost single-core or dual-core processor is a good idea, but I would not recommend either of them for anyone who requires a system for more than basic usage. Moreover, one might be better off buying a Core i3 system instead of a Core 2 Duo, as the latter might be more expensive and less efficient than the former.
3. RAM
It is obvious that for more professional tasks and to run a Xeon Server there will be more RAM
required than for merely using a PC for browsing the internet. Moreover, the RAM type may
matter more than many people might consider. For example, there are not many applications currently available that can take advantage of DDR3 RAM, so one may be better off saving some money by buying a system with DDR2 RAM.
4. Hard Disk
Hard Disk considerations might not mean much to a lay user; nonetheless, having the right amount of disk space and the right disk type might be necessary for the efficient management of
regular tasks for a professional.
5. Brand
Some brands offer better warranties, whereas others offer software packages that come with the system. For example, a Dell laptop with the same specifications may be cheaper than a Sony VAIO. The reason is that Sony provides much of its own software with its laptops, which saves the user the cost of buying software, e.g. a DVD-burning application. However, if you already have such licensed or freeware software available, then it might be better to go for a cheaper brand.
6. Peripherals
Peripheral devices such as printers, scanners, etc. can significantly increase or reduce the price of buying a new PC. You might be better off buying a system with a DVD combo drive if you do not require writing data to DVDs.
7. Size
Some people prefer buying larger laptops for a better display screen, whereas others prefer smaller and more portable sizes. If you are setting up a server which will be placed in a server room, then size considerations will not matter much, as large servers with multiple SCSI drives are normally quite big. On the other hand, for people with weak eyesight a laptop with a larger display screen might be worth the price.
8. Operating System
Users with minimum requirements may be better off buying a Starter or Home Premium version of, e.g., Windows 7. Users who wish to take advantage of more enhanced features and require more effective tools, such as connecting their PC to a domain, are naturally better off buying a Professional or Ultimate Edition.
9. Price
If you don’t have the cash to pay for, let’s say, a graphics card with 1 GB of memory, then you might be better off choosing suitable alternatives.
10. Usability
It is important that you first consider the tasks that you will be performing on your PC. If you wish to buy a computer simply for browsing the internet and using some online services, then it might be better to buy a single-core computer which satisfies your minimum requirements. On the contrary, if you require it for heavy video editing and professional work, then it might be better to buy a system that has enhanced multimedia options.
Topic 4
Data Representation
Data Representation:
Example
Data representation in a computer of base r is a system which has r distinct symbols for its r digits. A number is represented by a string of these symbolic digits. To determine the quantity that the number represents, we multiply each digit by an integer power of r depending on the place it occupies, and then find the sum of the weighted digits.
A number aₙ aₙ₋₁ … a₁ a₀ . a₋₁ … a₋ₘ in base r therefore has the value aₙ×r^n + … + a₁×r^1 + a₀×r^0 + a₋₁×r^-1 + … + a₋ₘ×r^-m, where aₙ and a₋ₘ are the most significant digit and least significant digit respectively.
Computers use base 2 in their data representation since it is economical during the construction of computer circuitry in terms of cost, space, power and other characteristics.
0 and 1 are the only digits which a computer can accept, and they are referred to as bits. Eight bits are equivalent to one byte. The binary system has two stable states, either high or low, i.e. 5 V or 0 V, true or false, on or off, etc.
Example
Decimal Number
Take the decimal number 112 as an example to explain the weight of each digit of the number.
112₁₀ = 2×10^0 + 1×10^1 + 1×10^2
= 2 + 10 + 100
= 112₁₀
Similarly, for 6498:
6498₁₀ = 8×10^0 + 9×10^1 + 4×10^2 + 6×10^3
= 8 + 90 + 400 + 6000
= 6498₁₀
Exercise
609, 202, 1
Example
a) 1101₂ = 1×2^0 + 0×2^1 + 1×2^2 + 1×2^3
= 1 + 0 + 4 + 8
= 13₁₀
b) 101₂ = 1×2^0 + 0×2^1 + 1×2^2
= 1 + 0 + 4
= 5₁₀
Exercise
b) Convert 1101.1010₂ to base 10.
1101₂ = 1×2^0 + 0×2^1 + 1×2^2 + 1×2^3 = 13₁₀
0.1010₂ = 1×2^-1 + 0×2^-2 + 1×2^-3 + 0×2^-4 = 1/2 + 0 + 1/8 + 0 = 0.5 + 0.125 = 0.625₁₀
Therefore 1101.1010₂ = 13.625₁₀
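The same expansion can be expressed as a short Python sketch (the function name is illustrative, not part of the notes); it sums the positional weights of the integer part and of the fractional part.

def binary_to_decimal(s):
    # Convert a binary string such as "1101.1010" to its decimal value
    # by summing the weight of each digit.
    integer, _, fraction = s.partition(".")
    value = sum(int(bit) * 2**i for i, bit in enumerate(reversed(integer)))
    value += sum(int(bit) * 2**-(i + 1) for i, bit in enumerate(fraction))
    return value

print(binary_to_decimal("1101.1010"))   # 13.625
print(binary_to_decimal("101"))         # 5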
To convert a decimal fraction to binary, the fraction is repeatedly multiplied by 2: the integer part is noted down after the multiplication at each step, and the new fraction that remains is used for the multiplication by 2 at the next step.
Example
Note that if a new fraction repeats an earlier fraction, stop the process of multiplication, because the binary digits will recur indefinitely.
0.635₁₀ = 0.1010001₂ (to 7 binary places)
12₁₀ = 1100₂
0.625₁₀ = 0.101₂
Therefore 12.625₁₀ = 1100.101₂
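A Python sketch of this conversion (illustrative only): the integer part is converted by repeated division by 2 and the fractional part by repeated multiplication by 2, exactly as described above.

def decimal_to_binary(value, max_bits=10):
    # Convert a non-negative decimal number such as 12.625 to a binary string.
    integer, fraction = int(value), value - int(value)
    int_bits = ""
    while integer:                        # repeated division by 2
        int_bits = str(integer % 2) + int_bits
        integer //= 2
    frac_bits = ""
    while fraction and len(frac_bits) < max_bits:
        fraction *= 2                     # repeated multiplication by 2
        frac_bits += str(int(fraction))   # note down the integer part
        fraction -= int(fraction)         # continue with the new fraction
    return (int_bits or "0") + ("." + frac_bits if frac_bits else "")

print(decimal_to_binary(12.625))          # 1100.101
print(decimal_to_binary(0.635))           # 0.1010001010 (stops after max_bits digits)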
In the binary system, the rules for adding binary digits are as follows:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0 with a carry of 1
Example
Solution
1001₂ + 0101₂ = 1110₂ (9 + 5 = 14₁₀)
0111₂ + 0101₂ = 1100₂ (7 + 5 = 12₁₀)
1010₂ + 1101₂ = 10111₂ (10 + 13 = 23₁₀)
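These additions can also be carried out bit by bit in Python using exactly the four rules above (a minimal sketch; the helper name is ours):

def add_binary(a, b):
    # Add two binary strings bit by bit, applying the rules above with a carry.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = "", 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        result = str(total % 2) + result   # the sum bit
        carry = total // 2                 # the carry into the next column
    return ("1" + result) if carry else result

print(add_binary("1001", "0101"))   # 1110   (9 + 5 = 14)
print(add_binary("1010", "1101"))   # 10111  (10 + 13 = 23)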
Binary Subtraction
a) 1110₂ − 0101₂ = 1001₂ (14 − 5 = 9₁₀)
b) 1010₂ − 0101₂ = 0101₂ (10 − 5 = 5₁₀)
c) 1010₂ − 0011₂ = 0111₂ (10 − 3 = 7₁₀)
d) 0101₂ − 0111₂ (5 − 7 = −2₁₀): here the subtrahend is larger than the minuend, so the computer performs the subtraction by adding the complement of 0111₂ to 0101₂ and treating the result as a negative number.
The complement method handles subtraction of integers as addition; therefore the computer does not require different circuits to handle addition and subtraction.
e) 0111₂ − 1000₂ (7 − 8 = −1₁₀): again the subtrahend is larger, so the subtraction is carried out by adding the complement of 1000₂ to 0111₂.
f) Subtract 4 from 12: 1100₂ − 0100₂ = 1000₂ (12 − 4 = 8₁₀).
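One common way of "adding the complement", sketched below in Python, is the two's-complement method: invert the bits of the subtrahend, add 1, add the result to the minuend, and keep only a fixed number of bits (5-bit words are assumed here so that all the examples above fit as signed values).

def twos_complement_subtract(a, b, bits=5):
    # Compute a - b by adding the two's complement of b, using words `bits` wide.
    mask = (1 << bits) - 1            # keep only `bits` bits (any carry out is dropped)
    neg_b = (~b + 1) & mask           # two's complement of b (invert the bits, add 1)
    result = (a + neg_b) & mask
    if result >= 1 << (bits - 1):     # interpret the word as a signed number
        result -= 1 << bits
    return result

print(twos_complement_subtract(12, 4))   # 8
print(twos_complement_subtract(5, 7))    # -2
print(twos_complement_subtract(7, 8))    # -1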
The base of the octal number system is 8. Each octal digit is represented by a group of three binary bits, e.g. 4 is represented by 100, 6 by 110 and 7 by 111, so 46₈ is represented by 100 110.
56₈ = 6×8^0 + 5×8^1 = 6 + 40 = 46₁₀
To convert a decimal integer to octal, divide repeatedly by 8 and note the remainders:
62 ÷ 8 = 7 remainder 6
7 ÷ 8 = 0 remainder 7
Reading the remainders from the last to the first, 62₁₀ = 76₈ (check: 7×8 + 6 = 62).
To convert a decimal fraction to octal, we multiply the fraction by 8 to obtain the new fraction and write down the integer part, repeating the process with each new fraction.
To convert a binary number to octal, group the binary bits in groups of 3, working outwards from the binary point (right to left in the integer part), and then convert each group to its octal equivalent.
Example
Convert the binary number 101110₂ to its equivalent octal number:
101110₂ = (101)(110) = 5 6 = 56₈
Example
1101011₂ = (001)(101)(011) = 1 5 3 = 153₈
Example
In the integer part, group 3 bits from right to left; in the binary fraction, group 3 bits from left to right:
1011.1011₂ = (001)(011).(101)(100) = 1 3 . 5 4 = 13.54₈
To convert an octal number to binary, each octal digit is converted to a 3-bit binary number.
Example
376₈ = (011)(111)(110) = 011111110₂
Example
56.34₈ = (101)(110).(011)(100) = 101110.011100₂
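The 3-bit grouping can be automated with a short Python sketch (the function names are illustrative); it handles integer binary strings only.

def binary_to_octal(bits):
    # Pad the string on the left to a multiple of 3, group in threes, convert each group.
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(group, 2)) for group in groups)

def octal_to_binary(octal):
    # Convert each octal digit to its 3-bit binary group.
    return "".join(format(int(digit), "03b") for digit in octal)

print(binary_to_octal("101110"))   # 56
print(octal_to_binary("376"))      # 011111110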
The base of the hexadecimal system is 16. Its digits represent the values 0 to 15; the values 10 to 15 are represented by the letters A to F respectively.
Each hexadecimal digit is represented by four binary bits, e.g. 5 = 0101 and A = 1010.
If we have two or more digits, we represent each digit by four binary bits, e.g. 5A₁₆ = 01011010₂.
4B8₁₆ = 8×16^0 + 11×16^1 + 4×16^2 = 8 + 176 + 1024 = 1208₁₀
2B6D₁₆ = 13×16^0 + 6×16^1 + 11×16^2 + 2×16^3 = 13 + 96 + 2816 + 8192 = 11117₁₀
= 0.3531903710₁₀
Example
Convert 67₁₀ to hexadecimal by repeated division by 16:
67 ÷ 16 = 4 remainder 3
4 ÷ 16 = 0 remainder 4
67₁₀ = 43₁₆
Convert 952₁₀ to hexadecimal:
952 ÷ 16 = 59 remainder 8
59 ÷ 16 = 3 remainder 11 (B)
3 ÷ 16 = 0 remainder 3
952₁₀ = 3 11 8 = 3B8₁₆
Convert 1000₁₀ to hexadecimal:
1000 ÷ 16 = 62 remainder 8
62 ÷ 16 = 3 remainder 14 (E)
3 ÷ 16 = 0 remainder 3
1000₁₀ = 3 14 8 = 3E8₁₆
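The repeated division can be written as a small Python function (a sketch, assuming non-negative integers); the remainders, read from last to first, give the hexadecimal digits.

HEX_DIGITS = "0123456789ABCDEF"

def decimal_to_hex(n):
    # Convert a non-negative decimal integer to hexadecimal by repeated division by 16.
    if n == 0:
        return "0"
    digits = ""
    while n:
        digits = HEX_DIGITS[n % 16] + digits   # the remainder becomes the next digit
        n //= 16
    return digits

print(decimal_to_hex(67))     # 43
print(decimal_to_hex(952))    # 3B8
print(decimal_to_hex(1000))   # 3E8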
Assignment: convert 0.62₁₀ to its equivalent hexadecimal number. Hint: create a table; multiply the fraction by 16, keep the integer part, and multiply the new fraction by 16 again, repeating the process.
0.62₁₀ ≈ 0.9EB851₁₆
Example
= 6E₁₆
= 34D₁₆
= 5 12 . 8 10 = 5C.8A₁₆
To convert a hexadecimal number to its binary equivalent, each digit is converted to 4 bits.
Example
6B9₁₆:
6 = 0110
B = 11 = 1011
9 = 1001
6B9₁₆ = 011010111001₂
6D.3A₁₆:
6 = 0110
D = 13 = 1101
3 = 0011
A = 1010
6D.3A₁₆ = 01101101.00111010₂
Example
3DE₁₆ = 001111011110₂
To convert a binary number to octal, group the bits in threes from right to left, then change each group to its octal equivalent:
(001)(111)(011)(110) = 1 7 3 6 = 1736₈
Example
1011011.0011101₂ = (001)(011)(011).(001)(110)(100) = 1 3 3 . 1 6 4 = 133.164₈
To convert a binary number to hexadecimal, group 4 bits together from right to left:
101011110₂ = (0001)(0101)(1110) = 1 5 14 = 15E₁₆
FADE₁₆ = ( )₈
74346₈ = ( )₁₆
DF4DF4F2₁₆ = ( )₂
10101010101₂ = ( )₈
11101.111₂ = ( )₁₀
FAFA₁₆ = ( )₁₀
426511₈ = ( )₁₆
BABAFEFE₁₆ = ( )₈
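When working through these exercises, whole-number answers can be checked quickly with Python's built-in conversions, as in this small sketch; the numbers used below are taken from the worked examples above, not from the exercises.

# int(text, base) parses a number written in any base;
# format(value, "b" / "o" / "X") renders it in binary, octal or hexadecimal.
print(format(952, "X"))                 # 3B8   (decimal to hexadecimal)
print(format(int("101110", 2), "o"))    # 56    (binary to octal)
print(int("6B9", 16))                   # 1721  (hexadecimal to decimal)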
Software refers to the various programs & data used in a computer system that enable it to perform a number of specific functions.
Software instructs the computer on what to do and how to do it. All programs (software) are
written using programming languages.
Programmers usually write programs in Source Language (a language that is like broken
English). The Source language is then converted into Machine language; the language that the
computer can understand.
SOFTWARE FLEXIBILITY
The software used on a given computer is said to be flexible, i.e. it is relatively easy to change. For example, in a home computer used for playing games, instead of buying a new machine each time a new game is needed, you only need to “load” a new program into the machine.
Note. Programming languages can also be considered part of software, because they form the
basis of grammar on which the program’s development is based.
The following figure illustrates the computer software family tree.
Systems Software
This is a set of programs, which is developed & installed in a computer system for the purpose of
developing other programs, and to enhance the functional capabilities of the computer system.
System programs control the operation of the various hardware parts & make them available to the user. They also enable users to make efficient use of the computing facilities in order to solve their problems.
System programs manage the computer resources, such as printers, memory, disks, etc., automate its operations & make the writing, testing and debugging of users’ programs easier.
They also control the various application programs that we use to achieve a particular kind of
work.
Notes.
· System software is developed & installed by the manufacturer of the computer hardware.
This is because to write them, a programmer needs in-depth knowledge of the hardware details
of the specific computer.
· Some of the system software are supposed to put initial “life” into the computer hardware
and are therefore, held permanently in the ROM.
Program routines that are permanently maintained in the computer’s memory are called firmware.
· System programs dictate how the programs relate to the hardware, and are therefore said to be hardware-oriented.
The Microprogram is held in the Control Unit (CU), and is used to interpret the external
Instruction set of a computer.
The Instruction set is the list of instructions available to the programmer that can be used to give
direct orders to the computer.
Firmware is usually a combination of hardware and software. It deals with very low-level
machine operations, such as moving data, making comparison, etc., and thus acts as an essential
substitute for additional hardware.
An Operating System is a set of programs designed to ensure the smooth running of the
computer system.
They are developed to manage all parts of the basic computer hardware & provide a more
hospitable interface to users and their programs.
It controls the way the software uses the hardware. This control ensures that the computer system operates in a systematic, reliable & efficient manner as intended by the user.
OS are supplied by the computer manufacturer. They are designed to reduce the amount of time
that the computer is idle, and also the amount of programming required to use a computer.
A modern OS does a lot more than manage the hardware efficiently. It normally provides the user with facilities that make the job of developing programs or doing something useful on the computer much easier.
Utility programs are used by end-users to perform many routine functions & operations, such as sorting, merging, program debugging, managing computer files, and diagnosing and repairing computer problems that occur. They are normally supplied by the manufacturers to enable the computer to run more smoothly & efficiently.
Most OS have many of the Utility programs needed to assist with the upkeep of the computer.
For example, DOS 6.x includes utilities for managing memory, protecting a system from viruses, backing up files, restoring accidentally deleted files, etc.
Searching.
They help to search for a file from one or more specified records. For example, in a Sales record,
the Search facility assists in finding the salesperson with the highest sales.
Copying of files.
For example, files can be copied from a USB drive to a hard disk & vice versa, or from a CD drive to a hard disk.
Spell-checking of words.
After a document is typed, the words in the document are checked against those in a “custom
dictionary” in secondary storage. If any word used is not found in the dictionary, a warning is
given indicating a possible spelling error.
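At its core the idea is simple dictionary lookup, as in this minimal Python sketch (the tiny word list is invented for illustration):

custom_dictionary = {"the", "computer", "accepts", "data", "from", "user"}

def possible_spelling_errors(text):
    # Flag every word that is not found in the custom dictionary.
    return [word for word in text.lower().split() if word not in custom_dictionary]

print(possible_spelling_errors("The computer acepts data from the user"))   # ['acepts']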
Formatting programs.
Before a disk drive can be used, it must be “initialized” or formatted. This means that the system must put certain information on the disk, which helps with storing and retrieving the user’s programs & data at a later time.
Therefore, a computer system that uses disks would have a utility program for initializing or
formatting these disks.
The programming process usually includes debugging (removing errors from) a program.
Statements of the program are studied to determine the cause of an error. Again, useful
information can be obtained by studying the contents of memory at the time the program failed.
(viii). Linker.
(ix). Loader.
(xi). Database management system (DBMS) – a utility program that manages data contents.
Text Editor.
This is a utility program that enables/ allows users to create files in which they can store any
textual information they desire using the computer.
Once the files are created, the Text editor provides facilities which allow the user to modify (make
changes to) the files; such as adding, deleting, or changing information in the file. Data can be
copied from one file to another. When a file is no longer needed, it can be deleted from the
system.
The operations of the Text editor are controlled by an interactive OS that provides a “dialogue” between the user and the operating system.
The Text editors are used to create, e.g. program statements through the Keyboard connected to
the computer. Editing can then be carried out using the Edit keys on the Keyboard or by using a
sequence of commands.
(iii). Page text editors - deal with a whole screen full of text at a time.
Note. The Text Editor is probably the most often used utility program of an OS.
Sort utility.
The Sort utility is used to arrange the records within a file according to some predetermined
sequence. The arrangement can either be in Ascending or Descending order of the alphabets or
numerals.
For example, a user may wish to sort data into some desired sequence, such as; sort a student file
into ascending order by name or into descending order by average grade or sort a mailing list by
postal code, etc.
Merge utility.
Merging is the process by which the records in two or more sorted files are brought together into
one larger file in such a way that, the resulting file is also sorted.
The Merge utility is used to carry out the combining of the contents of 2 or more input files to produce one output file.
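In Python, the effect of a merge utility on two already-sorted lists of records can be sketched with heapq.merge (the sample names are invented for illustration):

import heapq

file_a = ["Achieng", "Kamau", "Otieno"]     # already sorted
file_b = ["Barasa", "Mwangi", "Wanjiru"]    # already sorted

# Combine the two sorted sequences into one larger, still-sorted output.
merged = list(heapq.merge(file_a, file_b))
print(merged)   # ['Achieng', 'Barasa', 'Kamau', 'Mwangi', 'Otieno', 'Wanjiru']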
Copy utility.
It is usually advisable to maintain duplicate copies of the operational files so that in case
something goes wrong with the original files, then their contents can be recreated from the
duplicate/ backup copy or copies.
The duplication process, i.e. copying the contents of one file to another, is done by the Copy utility. The copying can be from one medium to a different medium, or from one medium to another of the same make, e.g. from a USB drive to a hard disk or from one USB drive to another USB drive.
Dump utility.
The term Dumping is used to describe the copying of the contents of the main memory. The
Dump utility is therefore used to transfer (copy) the contents of the computer’s internal memory onto a storage medium, e.g. a disk, or through the printer (to get a hard-copy output). The result of dumping is that the main memory “image” is reflected by the stored or printed contents.
LANGUAGE TRANSLATOR
Programs written in high-level languages have to be translated into binary code (Machine
language), before the computer can run these programs.
A Translator is a utility program written & supplied by the computer manufacturers, used to
convert the Source Codes (the program statements written in any of the computer programming
languages) to Object Codes (their computer language equivalents).
Note. These translators are not part of the OS, but they are designed to be used under the
operating system & are accessible to it.
Linker.
Computer programs are usually developed in Modules or Subroutines (i.e. program segments
meant to carry out the specific relevant tasks).
During the program translation into their machine code, these modules are translated separately
into their object code equivalents.
The Linker is a utility software that accepts the separately translated program modules as its
input and logically combines them into one logical module, known as the Load Module that has
got all the required bits & pieces for the translated program to be obeyed by the computer
hardware.
Loader.
The Loader is a utility program that transfers the load module (i.e. the linker output) into the
computer memory, ready for it to be executed by the computer hardware.
The transfer process is from the backing store, e.g. magnetic disk into the computer’s main
memory. This is because some systems generate object codes for the program, but instead of
being obeyed straight away, they store them into the media.
Diagnostic tools/programs usually come with the translators and are used to detect & correct
system faults –both hardware and software.
They provide facilities which help users to debug (remove errors from) their programs more
easily.
E.g., Dr.Watson is a diagnostic tool from Microsoft that takes a snapshot/ photograph of your
system whenever a system fault occurs. It intercepts software faults, identifies the software that
faulted, and offers a detailed description of the cause & how to repair the fault.
Other diagnostic tools for detecting hardware faults are, Norton Utilities, PC Tools, QAPlus, etc.
Machine language uses machine codes (binary digits) that consist of 0’s & 1’s.
The Assembly language instructions are Symbolic representations of the machine code
(computer language) instructions.
Comments can be incorporated into the program statements to make them easier to be
understood by the human programmers.
These are languages developed to solve the problems encountered in low-level programming
languages.
The grammar of High-level languages is very close to the human being’s natural languages
vocabulary, hence easy for the human beings to understand and use.
They allow a problem solution to be specified in a human & problem- oriented manner. The
programs are able to run in any family of computers provided the relevant translator is installed.
Programs written in high-level languages are shorter than their low-level equivalents, since one
statement translates into several machine code instructions.
Examples
* Pascal
* C.
* LOGO
* COBOL
(b). Giving examples, name 3 different types of computer programs found on a typical
computer system.
10. What are Text Editors and where are they most commonly used?
(c). Give THREE examples of the most commonly used Presentation Graphics
package.
14. What is Desktop Publishing? How does it differ from Word processing?
15. State one computer software used in industrial systems. Give examples.
(b). State any four devices of a computer that can be classified under Multimedia
devices.
(d). What are the minimum hardware requirements to run multimedia applications?
19. What are Software Suites? Give the advantages of using suites?
22. Name FOUR major application packages. Outline four features of each.
23. List the advantages and disadvantages of Integrated packages/Software Suites over
Standard packages.
Topic 6
Introduction
Application programs are written to solve specific problems (or to handle the needs) of the end-
user in particular areas.
They interface between the user & system programs to allow the user to perform specific tasks.
Application software helps to solve the problems of the computer user, and are therefore said to
be user-oriented.
They are designed specifically to carry out particular tasks. For example, they can be used to
type & create professional documents such as letters, solve mathematical equations, draw
pictures, etc.
NOTES
Application programs can be written by the user, programmers employed by the user, or
by a Software house (a company specializing in writing software).
Application programs can be written with very little knowledge of the hardware details of
a specific computer, and can run on several different computers with little or no
modification.
They are usually pre-written programs made for non-specialists, in the home or business, and
may be used for a wide variety of purposes.
They are off-shelf programs that are developed & supplied by manufacturers, Bureaux &
software houses at a price.
They provide a general set of facilities that are used in dealing with similar types of tasks, which
arise in a wide variety of different application problems.
The range, quality and variety of the packages are continuously changing. Examples of
Application packages are: -
Package - a set of fully described & related programs stored together to perform a specific task.
They are developed to solve particular problems in one or more organizations with little or no
alterations.
(i). Packages save a lot of time & programming effort, because the company buys the software when it is ready-made.
(ii). They are relatively cheap to the user. These programs are usually sold in large numbers, so the cost of developing the programs is effectively shared between the purchasers.
(iii). They are appropriate for a large variety of applications. Most packages are menu-driven,
i.e., the user is provided with a set of options displayed on the screen; hence, they are easy to
learn & use, making them suitable for people with little or no computing knowledge.
(iv). Packages are extensively/thoroughly tested & debugged (i.e. have all errors corrected); if it is a popular package, it has usually been tried & approved by a large number of people. The testing is done by a pool of professional programmers and analysts.
(v). Are usually provided with extensive documentation to help the user.
(vii). The packages are generally portable. In addition, there is usually a maintenance
agreement between the supplier & the buyer.
(viii). Application packages can be rented, especially by users who might require to use them
only periodically, hence cutting on costs, e.g. maintenance.
(ii). The purchaser has no direct control over the package, because he/she is not involved in
developing it.
(iii). Packages cannot be modified. The user may not be free to correct any routines/ functions
of the package, because there is always a maintenance guarantee & the application of the
developer’s copyright acts.
(iv). A package may include extra facilities, which are not required by an individual user or
company.
(v). Sometimes, the package will allow only a clumsy solution to the task at hand.
(vi). In the case of Spreadsheet or Database, the user must still develop the application, which
requires a thorough knowledge of the capabilities of the package, which are usually quite
extensive.
(vii). The user must still provide documentation for the particular application that he/she has
created.
(viii). It is quite easy to forget the commands to use the package, especially if it is not used
frequently.
They are usually customized (modified/ tailored) programs written by the user or a Software
house under contract, to perform a specific job.
They are developed by users to solve only the specific processing tasks in one organization, and
may not suit the needs of other organizations, hence the name In-house or Tailor-made
programs.
They are designed for a particular identifiable group of users such as Estate agents, farmers,
Hoteliers, etc.
They are usually aimed at providing all the facilities required for particular class of application
problem such as Payroll / Stock control.
Since the programs are occupation-specific, they sell in smaller numbers & tend to be more expensive.
(ii). The user is able to quickly implement the results obtained from the use of the package.
(i). Purchaser has direct control over the package, as he is involved in its production.
The following are some of the factors that a buyer who is intending to acquire an application
package should consider: -
1). Cost of the package in relation to the expected benefits against the cost of developing in-
house programs.
2). Compatibility: - (fitting) of the package with/within the existing computer resources, e.g.,
hardware, software, etc.
4). Whether there is accompanying documentation (the descriptions), which helps in using,
maintaining & installing the package.
5). The portability of the package, i.e. whether the package can be used on different families
of computers.
6). A good package is that which is easy to learn & use. This helps to determine the duration
of training that might be involved & the subsequent cost of training.
7). Before buying a particular package, its current users should be interviewed to find out whether the package is successful and well regarded in the market.
Quiz
In today’s fast-paced digital age, information has become the most valuable asset for individuals,
organizations, and governments. It is an essential tool for decision-making, communication, and
managing business processes. This has led to the development of Information Systems (IS) – a
field that combines computer science, business management, and information technology to
create systems that support decision-making and business processes. Information systems have
become an essential aspect of modern business operations. They have revolutionized the way
that organizations operate, communicate, and store and analyze data. IS provides platforms for
gathering, processing, and distributing information and have enabled businesses to operate more
efficiently and effectively (Wood, 2024).
An information system is a set of hardware, software, data, procedures, and people that are
organized to collect, process, store, and disseminate information to support decision-making,
coordination, control, analysis, and visualization in an organization (Wood, 2024).
Information Systems (IS) are critical to the success of an organization, offering a range of
functions that cannot be performed without them. For example, IS can be used to automate
administrative tasks, analyze data to improve strategic planning, provide real-time decision
support, and facilitate communication and collaboration between employees, customers, and
suppliers.
The field of information systems is broad, covering various domains such as agriculture,
healthcare, finance, education, logistics, marketing, supply chain, and others. In each domain,
information systems are used to provide unique solutions that help solve specific problems and
attain specific goals. See the example by Wood (2024) in figure 7.1. Wood lists the following domains and briefly describes how they apply IS in their daily business.
In the late 1990s, the concept of precision agriculture or "wired farms" emerged, encompassing
technologies such as Global Positioning Systems (GPS), Geographical Information Systems
(GIS), and other advanced technologies. Precision agriculture led to a significant increase in data
generation, prompting the development of Decision Support Systems (DSS) tailored for
agriculture. Examples of specialized agricultural DSS include Dairy Comp 305 for managing
milking cow herds and DSSAT for land cultivation planning.
Precision agriculture and the use of AIS have the potential to reduce agricultural production
costs, shorten pre-cultivation times, and enhance agricultural productivity through informed
decision-making. These technologies enable farmers to optimize resource management and
improve crop yields based on precise data and analysis.
Another term that is often used in conjunction with AIS is Farm Management Information
Systems (FMIS). Farmers often do not have access to tools to help them in financial management
of their business. FMIS are information systems that facilitate the storing and processing of
farm-related data providing farmers support in decision making in daily farm management.
FMIS could be a part of AIS, or could also be considered as an extension to AIS as these systems
provide information required not just for agricultural activities but also some of the supporting
activities. Some of the available commercial FMIS, include Agworld, FarmWorks, and
365FarmNet (Narasimha, 2021).
Information Systems (IS) play a critical role in healthcare, for example, from managing patient
records and billing to coordinating services among multiple providers. These systems are integral
to improving patient safety, enhancing care quality, increasing operational efficiency, and
lowering costs within healthcare organizations. Additionally, IS in healthcare support medical
research, clinical trials, and public health initiatives by providing access to vast amounts of data
and information. This data can be analyzed to uncover trends, gain insights, and facilitate
evidence-based decision-making processes. Therefore, IS in healthcare not only streamline
administrative tasks and improve patient care but also serve as invaluable tools for advancing
medical research and public health strategies through data-driven insights and analysis.
In finance, Information Systems (IS) serve a multitude of crucial functions including accounting,
financial reporting, risk management, and investment analysis. These systems are essential for
financial institutions to handle extensive transaction volumes, manage loan processes, and
monitor investment portfolios effectively. Financial IS are typically sophisticated, incorporating
advanced algorithms and analytical tools to manage complex financial operations. They play a
pivotal role in reducing operational risks, enhancing financial transparency, and optimizing
profitability. Moreover, these systems ensure compliance with regulatory requirements, which is
critical for maintaining trust and credibility in the financial markets. Hence, IS in finance are
indispensable for streamlining operations, managing risks, and making informed investment
decisions, thereby contributing significantly to the overall efficiency and stability of financial
institutions.
In education, IS are used to support teaching, learning, and research. Educational institutions use
IS to manage student records, curriculum development, and student information systems. IS in
education are designed to provide access to information, promote collaborative learning, increase
student engagement, and improve learning outcomes.
In education, Information Systems (IS) serve essential roles in supporting teaching, learning, and
research across educational institutions. These systems are utilized to manage various aspects
such as student records, curriculum development, and student information systems. IS in
education are designed with several objectives in mind:
1. Access to Information: They provide easy access to educational resources, course materials,
and administrative information for students, faculty, and staff.
3. Enhancement of Student Engagement: They offer interactive tools and platforms that
enhance student engagement in learning activities, such as virtual classrooms, online
assessments, and multimedia resources.
IS in education play a crucial role in modernizing and enhancing the educational experience by
providing efficient management of educational processes, fostering collaboration and
engagement, and supporting personalized learning approaches. These systems contribute to
creating a more dynamic and effective learning environment for students and educators alike.
In logistics and supply chain management, Information Systems (IS) play a vital role in
effectively managing inventory, tracking shipments, and optimizing transportation routes. These
systems are specifically designed to achieve several key objectives:
3. Route Optimization: IS analyze data to determine the most efficient transportation routes,
considering factors such as distance, traffic conditions, and delivery deadlines. This optimization
helps reduce transportation costs and improve delivery timelines.
4. Supply Chain Visibility: By integrating data from various points in the supply chain, IS
enhance visibility, enabling stakeholders to proactively address disruptions, anticipate demand
fluctuations, and improve overall supply chain responsiveness.
5. Cost Reduction and Efficiency: IS streamline processes, automate routine tasks, and
minimize manual errors, resulting in cost savings and operational efficiency improvements
across the supply chain.
IS in logistics and supply chain management are instrumental in optimizing operations, reducing
costs, increasing efficiency, and ultimately improving customer satisfaction through better
supply chain visibility and management. These systems enable organizations to adapt quickly to
changing market demands and maintain competitive advantage in the global marketplace.
In marketing, information technology plays a crucial role in the creation, distribution, and
promotion of products or services to customers. It enables marketers to leverage data collection
and analysis tools to gain insights into consumer behavior, preferences, and habits, thereby
enhancing the effectiveness and precision of marketing efforts. By utilizing information
technology, marketers can:
2. Data Analysis: Analyze collected data to identify patterns, trends, and correlations that
provide valuable insights into consumer preferences and decision-making processes.
Information technology has led to the evolution of diverse information systems like enterprise
resource planning (ERP), customer relationship management (CRM), and supply chain
management (SCM). These systems are pivotal in enabling organizations to streamline
operations, enhance performance, and bolster competitiveness. These information systems
technologies play a crucial role by:
1. Efficient Operations Management: ERP systems integrate core business processes, such as
finance, human resources, and inventory management, into a unified platform. This integration
enables organizations to operate more efficiently by eliminating redundant tasks and improving
workflow processes.
3. Supply Chain Management: SCM systems optimize the flow of goods and services from
suppliers to customers, enhancing inventory management, logistics planning, and supplier
relationships. These systems improve supply chain visibility, reduce costs, and minimize
disruptions, thereby improving overall operational efficiency.
Successful integration of information systems technology and these information systems can
result in significant benefits for organizations. It enhances operational efficiency by automating
processes, improves productivity through streamlined workflows, and boosts profitability by
optimizing resource allocation and strategic planning. Overall, information systems technology
plays a transformative role in modern organizations, empowering them to adapt to market
dynamics, capitalize on opportunities, and maintain a competitive edge in today's digital
economy.
Information systems have come a long way since their inception in the 1950s. The evolution of
these systems can be traced through various technologies that have been invented, improved, and
adapted over time. From punch cards to modern-day big data analytics, the advancements in
computing and communication technologies have led to the emergence and continual
transformation of information systems.
Information systems first emerged during the 1950s with the introduction of punched cards,
which were used for data storage and processing. This evolved into batch processing systems,
where data was processed in large batches without any real-time feedback. As computing
technologies advanced, interactive systems were introduced where users could interact with the
computer in real-time.
During the 1960s, the concept of a database was introduced, where data could be stored in a
structured manner and retrieved as and when required. This was a significant development that
enabled businesses to store and manage a vast amount of data. The 1970s saw the emergence of
online transaction processing (OLTP) systems, which allowed businesses to process transactions
online in real-time. This marked the beginning of computerized business processes and
revolutionized the way businesses operated.
The 1980s saw the advent of personal computers (PCs), which made computing much more
accessible to individuals and small businesses. This marked the beginning of distributed
computing, where multiple computers could share resources and work together. The introduction
of Local Area Networks (LANs) and Wide Area Networks (WANs) enabled users to
communicate, access data, and share resources across geographically dispersed locations.
During the 1990s, the introduction of the World Wide Web brought about significant changes in
the way people accessed and shared information. This resulted in the development of web-based
information systems that allowed businesses to share information and data with internal and
external stakeholders. The emergence of electronic commerce (e-commerce) made it possible for
businesses to sell products and services online and opened up new markets and revenue streams.
The early 2000s saw the emergence of mobile computing, where users could access information
and services through mobile devices. This led to the development of mobile applications and
mobile information systems that made it possible for businesses to reach customers on the move.
The late 2000s saw the rise of social media, which enabled users to create and share content and
connect with others. This led to the development of new information systems designed to
manage and analyze social media data. Social media platforms such as Facebook, Twitter,
LinkedIn, and Instagram employed advanced algorithms that could track user behavior and target
relevant ads based on their interests and preferences.
In addition, cloud computing became more prominent during this era, providing a new way for
businesses to store and access data. This opened up new possibilities for collaboration and
remote work, allowing teams to access information from anywhere in the world.
Artificial intelligence and machine learning also started to gain widespread use during this
period, particularly in industries such as finance, healthcare, and marketing. These tools allowed
for the analysis of vast amounts of data, helping businesses make more informed decisions based
on insights and patterns.
The present day continues to see advancements in information systems, with the increasing use
of Internet of Things (IoT) devices and the development of blockchain technology. IoT devices,
such as smart sensors and wearables, generate a massive amount of data, which can be leveraged
to improve efficiency, personalize products and services, and reduce costs.
Blockchain technology, on the other hand, offers a secure and decentralized way to record and
store data. It has the potential to transform industries such as banking, supply chain management,
and healthcare by providing transparent and immutable records that can reduce fraud, errors, and
other security risks.
The evolution of information systems has been transformative, enabling businesses and
individuals to access, analyze, and use data in ways that were once unimaginable. As technology
continues to advance, the possibilities for innovation and growth are endless.
In conclusion, the history of business and technology since 1980 has been characterized by rapid
innovation and evolution. As new technologies continue to emerge, businesses must continually
adapt to stay competitive and meet the changing needs of their customers.
The impact of technology on business has been far-reaching and widespread. Here are some of
the important ways that technology has impacted business since 1980:
Increased Efficiency and Productivity: Technology has enabled businesses to automate and
streamline many of their processes, reducing the need for manual labor and increasing
productivity. For example, accounting software can automate bookkeeping tasks, while customer
relationship management (CRM) software can help businesses manage customer interactions
more effectively.
Improved Communication and Collaboration: Technology has made it easier for businesses
to communicate and collaborate with employees, partners, and customers. Email, instant
messaging, and video conferencing technologies have enabled remote communication and
collaboration, while collaborative software tools like Google Drive and Microsoft Teams have
made it easier for teams to work together on projects in real-time, regardless of their physical
location.
Increased Globalization: The internet has enabled businesses to reach a global audience and tap
into new markets. E-commerce sites like Amazon and Alibaba have made it easy for businesses
to sell products worldwide, while online marketplaces like Upwork and Fiverr have made it
possible for businesses to access a global workforce.
Reduced Costs: Technology has enabled businesses to reduce costs in many areas, from supply
chain management to marketing and advertising. By leveraging automation, analytics, and cloud
computing, businesses can minimize their expenses and improve their bottom line.
Modern technology has had a significant impact on various business careers, including finance,
accounting, marketing, operations, and human resources. Here are some examples of how
technology has affected these careers:
Finance: Technology has revolutionized the finance industry by enabling businesses to automate
and streamline financial processes, reduce costs, and improve decision-making. For example,
financial software can automate tasks such as bookkeeping, tax preparation, and financial
analysis. Data analytics tools can help businesses analyze financial data to identify trends and
make informed decisions.
Accounting: Like finance, accounting has seen significant changes due to technology.
Accounting software can automate repetitive tasks such as data entry and reconciliation,
improving accuracy and efficiency. Cloud-based accounting systems allow businesses to access
their data from anywhere and collaborate with their accountants or bookkeepers in real-time.
Marketing: Technology has had a profound impact on marketing, opening up new channels of
communication and enabling businesses to reach wider audiences. Social media platforms like
Facebook, Twitter, and Instagram have provided businesses with new ways to connect with
customers and build brand awareness. Data analytics tools allow businesses to analyze customer
behavior and preferences, creating opportunities for personalized marketing campaigns.
Operations: Technology has played a crucial role in improving operational efficiency and
reducing costs. Automation technologies, such as robotics and artificial intelligence (AI), can
streamline manufacturing processes, improve product quality, and reduce waste. Advanced
analytics tools can help businesses optimize production schedules and supply chain management.
Human Resources: Technology has enabled HR departments to automate many administrative
tasks, such as payroll processing and benefits administration. Online recruiting platforms, such
as LinkedIn and Glassdoor, have made it easier for companies to find and attract qualified
candidates. Data analytics tools can help HR departments identify skill gaps and opportunities for employee development.
In conclusion, modern technology has had a significant impact on various business careers,
improving efficiency, reducing costs, and enabling businesses to make informed decisions. As
technology continues to evolve, businesses must continue to adapt to stay competitive and meet
the changing needs of their customers.
Information technology careers encompass a wide range of job opportunities, from software
development and programming to cybersecurity and network administration. Here are some
examples of modern IT careers:
Software Development: Software developers create and maintain software programs that
organizations use to perform various tasks. They work with programming languages such as
Java, Python, and C++ to develop applications for desktop and mobile devices.
Data Analytics: Data analysts work with large sets of data to identify patterns and trends, which
organizations can use to make informed decisions. They use tools such as SQL, Python, and R to
extract, transform, and analyze data from various sources.
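To illustrate the kind of tooling a data analyst might use, here is a minimal sketch in Python using only the standard library; the file name sales.csv and its columns are invented for the example:

# Minimal data-analysis sketch: summarize sales figures from a CSV file.
# Assumes a hypothetical file sales.csv with columns: month, region, amount.
import csv
from collections import defaultdict
from statistics import mean

amounts_by_region = defaultdict(list)

with open("sales.csv", newline="") as f:
    for row in csv.DictReader(f):
        amounts_by_region[row["region"]].append(float(row["amount"]))

for region, amounts in sorted(amounts_by_region.items()):
    print(f"{region}: total={sum(amounts):.2f}, average={mean(amounts):.2f}, n={len(amounts)}")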
These are just a few examples of the wide range of IT careers available in today’s job market. As
technology continues to evolve, there will be a growing demand for skilled IT professionals who
can adapt to new technologies and help organizations meet their business goals.
An Information System is a set of interconnected elements and technologies that collect, process,
store, and disseminate data and information. It is a combination of hardware, software, data,
people, and processes that work together to create a system that supports the goals of an
organization. An Information System has several component parts, including:
Hardware: This consists of physical components such as computers, servers, routers, switches,
printers, scanners, and other peripheral devices that are used to process, store, and output data.
Software: This refers to the programs and applications that are used to manipulate and control
data in an information system. Software can be divided into two categories: system software and
application software. System software controls the hardware and provides a platform for the
application software to run. Application software is used for specific tasks such as word
processing, accounting, inventory management, and so on.
Data: These are raw facts or figures that are unorganized and meaningless on their own. Data
can come in various forms such as text, numbers, images, audio or video. Data is stored and
processed by the Information System. It can be structured or unstructured, and may include
customer data, financial records, inventory, and other critical information.
People: This includes all the individuals who interact with the Information System, from users
who input data to IT professionals who maintain and manage the system.
Processes: This refers to the procedures and protocols that guide the use of the system, including
security measures, backup procedures, and data management policies.
These components work together to collect, process, store, and disseminate information to the end user. This information can then be used for decision-making, analysis, reporting, or other business functions.
Before we look at each of these information systems, let us first understand the different levels of decision making in an organization. Always remember that the key objective of an IS is to aid decision making. Figure 7.2 shows the decision-making levels of an organization; see also Table 7.1 for the types of decisions made at each level. For a summary of the decisions taken at the different levels of the organization, see Table 7.2.
Whereas individuals use business productivity software such as word processing, spreadsheet,
and graphics programs to accomplish a variety of tasks, the job of managing a company’s
information needs falls to management information systems: users, hardware, and software that
support decision-making. Information systems collect and store the company’s key data and
produce the information managers need for analysis, control, and decision-making (Williams,
2023).
Figure 7.4 shows the relationship between transaction processing and management support
systems as well as the management levels they serve. Now let’s take a more detailed look at how
companies and managers use transaction processing and management support systems to manage
information.
A company's integrated information system begins with its transaction processing system (TPS).
The TPS collects raw data from both internal and external sources and organizes this data for
storage in a database, which resembles a microcomputer database but on a much larger scale.
Essentially, all crucial company data is housed within a single extensive database, serving as the
central information hub. As mentioned earlier, the database management system oversees the
data, enabling users to retrieve specific information through queries.
The database can be updated through two methods: batch processing, where data is collected
over a period and processed together, and online (or real-time) processing, which handles data as
it becomes available. Batch processing is highly efficient in utilizing computer resources and is
ideal for tasks like periodic payroll processing rather than continuous operations. Online
processing ensures that a company's data remains current. For instance, when you make an
airline reservation, the information is immediately entered into the airline's system, and you
promptly receive confirmation, usually via email. However, online processing tends to be
costlier compared to batch processing, so companies must evaluate the cost versus the benefits.
For example, a factory operating 24/7 might opt for real-time processing for inventory and other
time-sensitive needs but may process accounting data in nightly batches. Figure 7.3 illustrates a
typical example of a TPS.
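To make the batch versus online distinction concrete, the following minimal Python sketch (purely illustrative; the accounts and amounts are invented) contrasts the two update methods against the same in-memory "database":

# Illustrative contrast between online (real-time) and batch transaction processing.
# 'balances' stands in for the central database of a TPS.
balances = {"ACC-001": 500.0, "ACC-002": 1200.0}

def post_online(account, amount):
    # Online/real-time processing: the database is updated the moment
    # the transaction occurs (e.g. an airline reservation).
    balances[account] += amount
    print(f"posted {amount:+.2f} to {account}, new balance {balances[account]:.2f}")

def post_batch(transactions):
    # Batch processing: transactions are collected over a period and
    # applied together (e.g. nightly payroll or accounting runs).
    for account, amount in transactions:
        balances[account] += amount
    print(f"batch of {len(transactions)} transactions applied")

post_online("ACC-001", -75.0)                          # processed immediately
post_batch([("ACC-001", 30.0), ("ACC-002", -200.0)])   # collected, then processed together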
A transaction is an event that generates or modifies data. TPS is used at the operational level of the organization. It processes business events and transactions to produce reports. The main objective is to automate repetitive information processing activities within organizations, which in turn increases speed, improves accuracy, and achieves greater efficiency; it supports the monitoring, collection, storage, processing, and dissemination of the organization’s basic business transactions. Most of the time, TPS handles accounting and financial transactions, which are mainly used to provide other information systems with data. Other examples include payroll processing, sales and order processing, inventory management, and accounts payable and receivable systems. Organizations use TPS because it enhances the efficient and effective operation of the organization, provides timely documents and reports, increases competitive advantage, provides the necessary data for tactical and strategic systems such as DSS, and also provides a framework for analyzing an organization’s activities.
Transaction processing systems (TPS) automate repetitive operational tasks like accounting and
order processing, ensuring efficiency and accuracy in daily business operations. On the other
hand, Management Support Systems (MSS), also known as Management Information Systems
(MIS), utilize data from TPS to generate reports and analyses that aid managers in making
informed decisions, thereby supporting strategic planning and operational control within the
organization.
Companies use data warehouses to gather, secure, and analyze data for many purposes, including
customer relationship management systems, fraud detection, product-line analysis, and corporate
asset management. Retailers might wish to identify customer demographic characteristics and
shopping patterns to improve direct-mailing responses. Banks can more easily spot credit-card
fraud, as well as analyze customer usage patterns.
Williams gave an example from Forrester Research, stating that about 60 percent of companies
with $1 billion or more in revenues use data warehouses as a management tool. Union Pacific
(UP), a $19 billion railroad, turned to data warehouse technology to streamline its business
operations. By consolidating multiple separate systems, UP achieved a unified supply-chain
system that also enhanced its customer service. “Before our data warehouse came into being we had stovepipe systems,” says Roger Bresnahan, principal engineer. “None of them talked to each other… We couldn’t get a whole picture of the railroad” (Williams, 2023).
According to Williams, UP’s data warehouse system took many years and the involvement of 26
departments to create. The results were well worth the effort: UP can now make more accurate
forecasts, identify the best traffic routes, and determine the most profitable market segments. The
ability to predict seasonal patterns and manage fuel costs more closely has saved UP millions of
dollars by optimizing locomotive and other asset utilization and through more efficient crew
management. In just three years, Bresnahan reports, the data warehouse system had paid for
itself.
Exception reports are another important output of the information-reporting system. These
reports highlight instances that deviate from established norms or standards. For instance, an
accounts receivable exception report might list customers with overdue accounts, enabling
collection personnel to prioritize their efforts effectively. Special reports are generated on request
from managers to address specific inquiries or issues. For instance, a special report detailing
sales figures by region and customer type could help identify reasons behind a recent decline in
sales, offering insights for strategic decision-making.
The information-reporting system within an MSS uses data from the TPS to produce reports that range from detailed operational breakdowns to high-level summaries, as well as exception reports and ad hoc analyses requested by management. This functionality supports informed decision making across different levels of the organization.
Therefore, some characteristics of MSS are that it is based on internal information flows, supports relatively structured decisions, is relatively inflexible with little analytical capacity, is used at lower and middle managerial levels, deals with the past and present rather than the future, and is efficiency oriented. Other examples include sales management systems, inventory control systems, budgeting systems, Management Reporting Systems (MRS), and Personnel (HRM) systems, among others. In conclusion, MIS provides summary information on organizational activity at periodic intervals, supports operational control and efficiency, focuses on internal information, and is useful for structured decisions; see Table 7.3 for a summary of MIS.
A Decision Support System (DSS) aids managers in making informed decisions by employing
interactive computer models that simulate real-world processes. This system leverages data from
internal databases but focuses specifically on relevant information pertaining to the specific
issues being addressed. It serves as a tool for exploring "what-if" scenarios to assess the potential
outcomes of managerial decisions.
In straightforward cases, managers can use tools like spreadsheets to manipulate variables and
observe the resulting impacts. For example, a manager might create a spreadsheet to analyze how
changes in workforce size affect overtime requirements. For more complex scenarios, DSS
utilizes models where managers input relevant data parameters into the computer. The system
then computes and generates results based on these inputs. For instance, marketing executives in
a furniture company could employ DSS models that utilize sales data and demographic
projections to forecast the demand for different types of furniture among rapidly growing
demographic segments.
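A "what-if" model can be surprisingly small. The sketch below, with invented figures, shows the kind of calculation a manager might run to see how changes in workforce size affect overtime requirements:

# "What-if" sketch: how does workforce size affect overtime?
# All numbers are illustrative assumptions, not real data.
WEEKLY_DEMAND_HOURS = 4200      # total labour hours needed per week
REGULAR_HOURS_PER_WORKER = 40   # standard working week

def overtime_needed(workforce_size):
    regular_capacity = workforce_size * REGULAR_HOURS_PER_WORKER
    return max(0, WEEKLY_DEMAND_HOURS - regular_capacity)

for workforce in (90, 100, 105, 110):
    print(f"workforce={workforce:3d}  overtime hours={overtime_needed(workforce):5.0f}")

Changing the demand figure or the workforce sizes and re-running the calculation is exactly the kind of scenario exploration a DSS supports, only on a much larger scale.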
Table 7.4 provides a concise overview of the capabilities and functionalities of DSS, detailing its
role in facilitating decision-making through computational modeling and scenario analysis.
Companies can use a predictive analytics program to improve their inventory management system and use big data to target customer segments for new products and line extensions. Figure 7.5 depicts a typical DSS.
Hence, some of the key attributes of DSS are that it supports ill-structured or semi-structured decisions, has analytical and/or modelling capacity, is used at more senior managerial levels, is concerned with predicting the future, and is effectiveness oriented. Some examples of DSS include Group Decision Support Systems (GDSS), Computer Supported Co-operative Work (CSCW), logistics systems, financial planning systems, and spreadsheet models, among others. Table 7.5 lists the key differences between MIS and DSS.
An Executive Information System (EIS) is similar to a Decision Support System (DSS) in that it
assists executives in making strategic decisions, but it is specifically customized for individual
executives. These systems provide tailored information that is crucial for high-level decision-
making. For example, a CEO's EIS might include specialized spreadsheets that present financial
metrics comparing the company with its main competitors, along with graphs illustrating current
economic and industry trends. These tools help executives and senior managers analyze the
business environment, identify long-term trends, and devise appropriate strategies.
The information within EIS is often less structured compared to data in transactional systems,
and it originates from both internal sources (such as financial databases) and external sources
(like market research reports). EIS are designed to be user-friendly and are operated directly by
executives, eliminating the need for intermediaries. They can be easily customized to align with
the preferences and information needs of each individual executive user. In summary, while DSS
are broader decision-making tools used across different managerial levels, EIS are specifically
tailored for executives to provide them with timely, relevant, and customized information to
support strategic decision-making and planning.
Among the attributes of EIS are that it is concerned with ease of use, is concerned with predicting the future, is effectiveness oriented, is highly flexible, supports unstructured decisions, uses internal and external data sources, and is used only at the most senior management levels. Table 7.6 summarizes the functions of EIS.
An Expert System (ES) is akin to receiving advice from a human consultant, utilizing artificial
intelligence to extract knowledge from existing information, theories, beliefs, and experiences of
managers across different business activities. It mimics expert judgment by following predefined
sets of rules that experts would typically use. This enables computers to reason and learn,
applying what-if scenarios similar to human thought processes. Despite being costly and
challenging to develop, expert systems are increasingly adopted by companies as their
applications expand. They prove beneficial in diverse fields such as medical diagnosis, portfolio
management, and credit assessment. Some lower-end systems can even operate on mobile
devices, while advanced versions assist airlines in efficiently deploying aircraft and crews,
crucial for their operations. The expense of hiring personnel to perform these ongoing analytical
tasks would be prohibitive. Expert systems have also been employed in oil exploration,
employee scheduling, and medical diagnostics, sometimes replacing human experts and other
times aiding them.
4. If the applicant has two or more years of experience, then the applicant has the required experience
And so on…
The user enters the above information. The inference engine uses the rules to evaluate the data
entered. In this example, if the user enters education as B.Com and experience as four years, then
the inference engine would determine that the applicant has the requisite education and
experience as per Rule 3 and Rule 4 respectively. Therefore, as per Rule 1 and 2, the applicant
should be hired. This recommendation is the output of the system.
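A minimal sketch of how such an inference engine might be coded is shown below. Only Rule 4 is reproduced in the text above, so the wording of the education and hiring rules here is an assumption based on the description of the example:

# Minimal rule-based hiring sketch (illustrative; the wording of Rules 1-3 is assumed).
def has_required_education(education):
    # Assumed Rule 3: a B.Com satisfies the education requirement.
    return education.lower() in ("b.com", "bcom")

def has_required_experience(years):
    # Rule 4: two or more years of experience satisfies the experience requirement.
    return years >= 2

def recommend(education, years):
    # Assumed Rules 1 and 2: an applicant with the required education and the
    # required experience should be hired; otherwise, do not hire.
    if has_required_education(education) and has_required_experience(years):
        return "Hire the applicant"
    return "Do not hire the applicant"

# The user enters education and experience; the inference engine applies the rules.
print(recommend("B.Com", 4))   # -> Hire the applicant
print(recommend("B.Com", 1))   # -> Do not hire the applicant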
Example 2
Another example of an Expert System (ES) is in medical diagnosis. These systems utilize
artificial intelligence to replicate the reasoning and decision-making processes of human experts
in specific domains. For instance, in healthcare, an expert system can analyze patient symptoms,
medical history, and test results to suggest potential diagnoses and recommend appropriate
treatments.
Data Input: The system collects and inputs patient data such as symptoms, medical
history, and laboratory results.
Knowledge Base: It accesses a comprehensive database of medical knowledge, including
symptoms, diseases, treatments, and their relationships.
Inference Engine: Using artificial intelligence techniques, the system applies logical
rules and algorithms to interpret the input data and make informed decisions.
Output: Based on its analysis, the system generates diagnostic hypotheses, ranks them
based on probability, and recommends further diagnostic tests or treatments.
Feedback and Learning: Some expert systems can learn from feedback provided by
doctors and patients, improving their accuracy over time.
Expert systems in medical diagnosis not only assist healthcare professionals by providing timely
and accurate diagnostic support but also help in reducing diagnostic errors and improving patient
outcomes. They are an example of how artificial intelligence can augment human expertise in
complex decision-making tasks across various industries.
Office Automation Systems (OAS) include tools such as:
Spreadsheet programs
Text and image processing systems
Presentation packages
Personal database systems
Note-taking systems
In the modern digital era, information has emerged as a crucial asset for individuals, businesses,
and governments, driving the development of Information Systems (IS). IS integrates computer
science, business management, and information technology to support decision-making and
business processes, revolutionizing operations by providing platforms for data collection,
processing, and distribution. The importance of IS spans across various industries, including
agriculture, healthcare, finance, education, and marketing, where they enable organizations to
operate more efficiently, improve decision-making, and enhance customer experiences through
the use of data.
Information Systems have evolved significantly since their inception in the 1950s, beginning
with punch cards for data processing and evolving into complex systems that manage vast
amounts of data and support real-time transactions. The advancements in computing
technologies have introduced various specialized systems, such as Enterprise Resource Planning
(ERP), Customer Relationship Management (CRM), and Supply Chain Management (SCM),
which have become integral to the efficiency and competitiveness of businesses. The
development of the internet, mobile computing, and cloud technologies further transformed IS,
allowing for global connectivity, data accessibility, and the rise of new business models like e-
commerce and social media-driven marketing.
The impact of IS on business has been profound, leading to increased efficiency, productivity,
and globalization. These systems have automated many processes, enabling better
communication, data analysis, and decision-making across all levels of an organization. Careers
in finance, accounting, marketing, operations, and human resources have also been transformed
by IS, requiring professionals to adapt to new tools and technologies. As IS continues to evolve
with innovations like artificial intelligence, IoT, and blockchain, the potential for further
transformation in business operations and strategy remains vast, driving the need for ongoing
adaptation and innovation.
Executive Information Systems (EIS) are specialized for senior executives, offering tailored
information that supports high-level decision-making. Unlike DSS, which is used across various
management levels, EIS is designed specifically for the needs of top executives, providing them
with tools to analyze the business environment and devise long-term strategies. These systems
combine both internal and external data sources and emphasize ease of use, flexibility, and
effectiveness. Another advanced type of information system is the Expert System (ES), which
uses artificial intelligence to replicate expert-level decision-making in fields such as medical
diagnosis and credit assessment. ES leverages a knowledge base and inference engine to provide
recommendations, often surpassing human capabilities in efficiency and accuracy.
Lastly, Office Automation Systems (OAS) enhance productivity by integrating technologies that
streamline office tasks. These systems include tools like spreadsheets, text processing, and
presentation software, which improve efficiency in document preparation, data analysis, and
communication. OAS aims to optimize clerical and administrative work, thereby increasing
overall managerial productivity. Together, these diverse types of information systems enable
businesses to operate more effectively, support decision-making at all levels, and maintain a
competitive edge through enhanced operational efficiency.
Topic 8
Network Terminologies
Network: connection of more than one computer with the main purpose of sharing computer
resources.
Intranet: Internal corporate network that uses the infrastructure of the Internet and the www.
Extranet: an extension of an internal network (intranet) to connect not only internal personnel but also selected customers, suppliers, and other strategic offices.
Packet: fixed-length block of data for transmission. It also contains instructions about the
destination of the packet.
Kilobits per second (kbps): 1000 bits per second; an expression of data transmission speeds.
Network Topology refers to the manner in which network devices are organized.
Network Protocols are common sets of rules and signals that specify how computers on a network communicate.
Internet: - global connection of computers using TCP/IP protocol for the purpose of
communication and / or connecting different networks
COMPUTER NETWORKING
Network Hardware
This involves the hardware components associated with networking namely:
i) Network Interface Card (NIC): - A network interface is a device that connects a client
computer, server, printer or other component to your network. NIC consists of a small electronic
circuit board that is inserted into a slot inside a computer or printer. There are two types
a) Physical NIC
b) Wireless NIC
The NIC converts information on your computer to and from electrical signals for your network. Its unique MAC (Media Access Control) address helps route information within your local area network and is used by switches and bridges.
v) Twisted Pair Wire: - type of communication channel consisting of two strands of insulated
copper wire, twisted around each other in pairs.
vi) Hubs/Repeaters: -used to connect together two or more network segments of any media type.
Hubs provide the signal amplification required to allow a segment to be extended a greater
distance. While repeaters allow LANs to extend beyond normal distance limitations, they still
limit the number of nodes that can be supported.
Functions of Hubs/Repeaters
i) Amplification helps to ensure that devices on the network receive reliable information.
ii) They improve performance by dividing the network into segments, thus reducing the number of computers per segment.
vii) Bridges: - Bridges became commercially available in the early 1980s. At the time of their introduction, their function was to connect separate homogeneous networks. Subsequently, bridging between different networks (e.g. Ethernet and Token Ring) has also been defined and standardized. Bridges are data communications devices that operate principally at Layer 2 of the OSI reference model. As such, they are widely referred to as data link layer devices.
viii) Routers: - Routers use information within each packet to route it from one LAN to
another, and communicate with each other and share information that allows them to determine
the best route through a complex network of many LANs.
ix) Switches: - LAN switches are an expansion of the concept in LAN bridging. They operate at
Layer 2 (link layer) of the OSI reference model, which controls data flow, handles transmission
errors, provides physical (as opposed to logical) addressing, and manages access to the physical
medium.
x) Firewall: A firewall is a network security device that monitors incoming and outgoing
network traffic and decides whether to allow or block specific traffic based on a defined set of
security rules. They establish a barrier between secured and controlled internal networks that can
be trusted and untrusted outside networks, such as the Internet. A firewall can be hardware,
software, or both.
xi) Network Access Point (hotspot): An access point is a device that creates a wireless local area network, or WLAN, usually in an office or large building. An access point connects to a wired router, switch, or hub via an Ethernet cable and projects a Wi-Fi signal to a designated area. It serves to join, or "bridge", wireless clients to a wired Ethernet network and centralizes all Wi-Fi clients on a local network in so-called "infrastructure" mode. An access point may connect to another access point or to a wired Ethernet router, and multiple access points can be used to create one WLAN that spans a large area. Each access point typically supports up to 255 client computers.
Network Topology
Network topology defines the manner in which network devices are organized. Four common LAN topologies exist:
The term physical topology refers to the way in which a network is laid out physically. Two or
more devices connect to a link; two or more links form a topology. The topology of a network is
the geometric representation of the relationship of all the links and linking devices (usually
called nodes) to one another. There are four basic topologies possible: mesh, star, bus, and ring
which are shown in the following figure.
In a mesh topology, every device has a dedicated point-to-point link to every other device. The dedicated link carries traffic only between the two devices it connects. The number of physical links needed in a fully connected mesh network with n nodes is n(n - 1). However, if each physical link allows communication in both directions (duplex mode), we can divide the number of links by 2. In other words, we can say that in a mesh topology we need n(n - 1)/2 duplex-mode links. To accommodate that many links, every device on the network must have n - 1 input/output (I/O) ports to connect to the other n - 1 stations, as shown in the following figure.
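The formula is easy to verify with a short calculation (an illustrative sketch, not part of the original figure):

# Number of duplex links and I/O ports required for a fully connected mesh of n nodes.
def mesh_requirements(n):
    links = n * (n - 1) // 2      # n(n-1)/2 duplex links
    ports_per_device = n - 1      # each device needs n-1 I/O ports
    return links, ports_per_device

for n in (4, 5, 10):
    links, ports = mesh_requirements(n)
    print(f"n={n:2d}: {links:3d} duplex links, {ports:2d} I/O ports per device")
# n=5 needs 10 links; n=10 already needs 45 links and 9 ports per device,
# which illustrates why full-mesh cabling quickly becomes impractical.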
Advantages:
i) The dedicated links guarantee that each connection can carry its own data load, thus eliminating the traffic problems that can occur when links must be shared by multiple devices.
ii) A mesh topology is robust. If one link becomes unusable, it does not incapacitate the entire
system.
iii) Another advantage of the mesh topology is privacy or security. When every message travels along a dedicated line, only the intended recipient sees it. Physical boundaries prevent other users from gaining access to messages.
iv) Point-to-point links make fault identification and fault isolation easy. Traffic can be routed to
avoid links with suspected problems. This helps to discover the precise location of the fault and
aids in finding its cause and solution.
Disadvantages:
i) Every device must be connected to every other device, so a large amount of cabling and a large number of I/O ports are required, making installation and reconnection difficult.
ii) The sheer bulk of the wiring can be greater than the available space (in walls, ceilings, or
floors) can accommodate.
iii) The hardware required to connect each link (I/O ports and cable) can be prohibitively
expensive.
In a star topology, each device has a dedicated point-to-point link only to a central controller,
usually called a hub. The devices are not directly linked to one another. Unlike a mesh topology,
a star topology does not allow direct traffic between devices. The controller acts as an exchange:
If one device wants to send data to another, it sends the data to the controller, which then relays
the data to the other connected device as shown in the following Figure.
Advantages:
i) A star topology is less expensive than a mesh topology. In a star, each device needs only one
link and one I/O port to connect it to any number of others.
ii) A star topology is robust. If one link fails, only that link is affected; all other links remain active. This factor also lends itself to easy fault identification and fault isolation.
Disadvantages:
i) One big disadvantage of a star topology is the dependency of the whole topology on one
single point, the hub. If the hub goes down, the whole system is dead.
ii) Although a star requires far less cable than a mesh, each node must be linked to a central hub. For this reason, often more cabling is required in a star than in some other topologies (such as ring or bus).
The preceding examples all describe point-to-point connections. A bus topology, on the other
hand, is multipoint. One long cable acts as a backbone to link all the devices in a network which
is shown in the following figure.
Nodes are connected to the bus cable by drop lines and taps. A drop line is a connection running
between the device and the main cable. A tap is a connector that either splices into the main
cable or punctures the sheathing of a cable to create a contact with the metallic core. As a signal
travels along the backbone, some of its energy is transformed into heat. Therefore, it becomes
weaker and weaker as it travels farther and farther. For this reason there is a limit on the number
of taps a bus can support and on the distance between those taps.
Advantages:
i) The main advantage of a bus topology is ease of installation. Backbone cable can be laid along the most efficient path and then connected to the nodes by drop lines of various lengths.
Disadvantages:
i) A disadvantage of the bus topology is difficult reconnection and fault isolation. A bus is usually designed to be optimally efficient at installation, so it can be difficult to add new devices. Signal reflection at the taps can also cause degradation in quality.
ii) A fault or break in the bus cable stops all transmission, even between devices on the same
side of the problem. The damaged area reflects signals back in the direction of origin, creating
noise in both directions.
In a ring topology, each device has a dedicated point-to-point connection with only the two
devices on either side of it. A signal is passed along the ring in one direction, from device to
device, until it reaches its destination. Each device in the ring incorporates a repeater. When a
device receives a signal intended for another device, its repeater regenerates the bits and passes
them along. A typical ring topology is as shown in the figure.
Advantages:
i) A ring is relatively easy to install and reconfigure. Each device is linked to only its
immediate neighbors (either physically or logically). To add or delete a device requires changing
only two connections.
ii) A signal (token) is circulating at all times; if one device does not receive a signal within a specified period, it can issue an alarm. The alarm alerts the network operator to the problem and its location.
Disadvantages:
i) Unidirectional traffic can be a disadvantage: in a simple ring, a break in the ring (such as a disabled station) can disable the entire network unless a dual ring or a switch capable of closing off the break is used.
A network can also be hybrid, composed of a combination of more than one type of topology. For example, we can have a main star topology with each branch connecting several stations in a bus topology, as shown in the following figure. This kind of topology is what is mostly used in practical working networks; it is rarely practical to implement only one type of topology in a real working network.
Network Protocol
Protocols are common sets of rules and signals that specify how computers on a network communicate. They regulate the following characteristics of a network: access method, allowed physical topologies, types of cabling, and speed of data transfer. Protocols can be implemented in hardware, in software, or in a mixture of both; typically the lower layers are implemented in hardware, with the higher layers implemented in software. Note the following:
i) Protocols may determine packet size, information in the headers, and how data is stored in the packet.
ii) Both sides of the conversation must understand these rules for a successful transmission.
iii) Most protocols actually consist of several protocols grouped together in a suite.
Functions of protocols
i) Data sequencing. It refers to breaking a long message into smaller packets of fixed size.
Data sequencing rules define the method of numbering packets to detect loss or duplication of
packets, and to correctly identify packets, which belong to same message.
ii) Data routing. Data routing defines the most efficient path between the source and
destination.
iii) Data formatting. Data formatting rules define which group of bits or characters within
packet constitute data, control, addressing, or other information.
iv) Flow control. A communication protocol also prevents a fast sender from overwhelming a
slow receiver. It ensures resource sharing and protection against traffic congestion by regulating
the flow of data on communication lines.
v) Error control. These rules are designed to detect errors in messages and to ensure the transmission of correct messages. The most common method is to retransmit an erroneous message block: a block found to contain an error is discarded by the receiver and retransmitted by the sender (a minimal checksum-and-retransmit sketch follows this list).
vi) Precedence and order of transmission. These rules ensure that all the nodes get a chance to
use the communication lines and other resources of the network based on the priorities assigned
to them.
vii) Connection establishment and termination. These rules define how connections are
established, maintained and terminated when two nodes of a network want to communicate with
each other.
viii) Data security. Providing data security and privacy is also built into most communication
software packages. It prevents access of data by unauthorized users.
ix) Log information. Much communication software is designed to record log information, which consists of all the jobs and data communications tasks that have taken place. Such information may be used for charging the users of the network based on their usage of the network resources.
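As a concrete illustration of the error-control function in item v), the following toy sketch (not any real protocol) attaches a checksum to each packet so that the receiver can detect corruption and request retransmission:

# Toy error-control sketch: attach a checksum to each packet so the receiver
# can detect corruption and ask for retransmission. Not a real protocol.
import zlib

def make_packet(seq, payload: bytes):
    return {"seq": seq, "payload": payload, "checksum": zlib.crc32(payload)}

def receive(packet):
    ok = zlib.crc32(packet["payload"]) == packet["checksum"]
    return "ACK" if ok else "NAK (retransmit)"

pkt = make_packet(1, b"hello, network")
print(receive(pkt))                  # ACK: payload arrived intact

pkt["payload"] = b"hellx, network"   # simulate corruption in transit
print(receive(pkt))                  # NAK (retransmit): checksum mismatch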
Ethernet and Token Ring are examples of LAN standards, whilst TCP/IP is the predominant network communications protocol. Other widely used protocols include:
i) TCP/IP: - Short for Transmission Control Protocol/Internet Protocol, where the Transmission Control Protocol ensures the reliability of data transmission across Internet-connected networks and the Internet Protocol standard dictates how packets of information are sent out over networks.
ii) NetBEUI: - Short for NetBIOS Extended User Interface; it is used by network operating systems to allow a computer to communicate with other computers utilizing the same protocol.
iii) DHCP: - Short for Dynamic Host Configuration Protocol, DHCP is a protocol used to
assign an IP address to a device connected to a network automatically.
iv) HTTP: - Short for HyperText Transfer Protocol, HTTP is a set of standards that lets users of the World Wide Web exchange information found on web pages (see the sketch after this list).
v) FTP: - Short for File Transfer Protocol is a standard way to transfer files between
computers.
vi) PPP: - Short for Point-to-Point Protocol, PPP is a communication protocol that enables a
user to utilize their dialup connection (commonly a modem) to connect to other network
protocols like TCP/IP etc.
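As a small illustration of HTTP in action, the sketch below fetches a page using Python's standard library; the address is a placeholder example and the script needs a working Internet connection:

# Minimal HTTP client sketch using only the Python standard library.
from urllib.request import urlopen

with urlopen("http://example.com/") as response:
    print(response.status)                    # e.g. 200 if the request succeeded
    print(response.headers["Content-Type"])   # the media type of the page
    body = response.read(200)                 # first 200 bytes of the HTML document
    print(body.decode("utf-8", errors="replace"))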
Types of Network
i) Wide Area Network (WAN): - a communications network that covers a wide geographical area, such as a state or a country, e.g. the Internet.
ii) Metropolitan Area Network (MAN): - communication network covering a geographic area
the size of a city.
iii) Local Area Network (LAN): - a privately owned communication network that serves users within a confined geographical area. A LAN may be organized as either:
1. Client/Server LAN
2. Peer to peer LAN
Advantages of a Client/Server LAN
1. Centralization: A single server that houses all of the essential data in one location makes data security and user authorization and authentication control much easier. Any issue that arises throughout the whole network may be resolved in a single location.
2. Scalability: A client-server network may be expanded by adding network segments,
servers, and PCs with little downtime. Client-server networks offer scalability. The
number of resources, such as clients and servers, can be increased as needed by the user.
Consequently, the server's size may be increased without any disruptions. Since the
server is centralized, there are no questions regarding access to network resources even as
the size grows. As a result, just a small number of staff members are needed for the
setups.
3. Easy Management: Clients and the server do not have to be close to access data
effectively. It is really simple to handle files because they are all kept on the same server.
The finest management for tracking and finding records of necessary files is offered in
client-server networks.
4. Accessibility: The client-server system's nodes are all self-contained, requesting data
only from the server, allowing for simple upgrades, replacements, and relocation.
5. Data Security: The centralized design of a client-server network ensures that the data is
properly safeguarded. Access controls can be used to enforce it and ensure that only
authorized users are allowed access. Imposing credentials like a username and password
is one such technique. Additionally, if the data were to be destroyed, it would be simple
to restore the files from a single backup.
Advantages of a Peer-to-Peer LAN
2. All the resources and contents are shared by all the peers, unlike the client/server architecture, where the server shares all the contents and resources.
3. P2P is more reliable as central dependency is eliminated. Failure of one peer doesn’t affect
the functioning of other peers. In case of Client –Server network, if server goes down whole
network gets affected.
4. There is no need for full-time System Administrator. Every user is the administrator of his
machine. User can control their shared resources
Disadvantages of a Peer-to-Peer LAN
1. In this network the whole system is decentralized, so it is difficult to administer; no one person can determine the accessibility settings of the whole network.
2. Security in this system is weak: viruses, spyware, trojans, and other malware can easily spread over a P2P architecture.
3. Data recovery and backup are very difficult; each computer should have its own backup system.
Advantages of Networks
i) Sharing of resources: hardware such as printers and storage, software, and data files can be shared by many users.
ii) Improved communication: users can exchange messages and work on shared documents.
iii) Centralized administration: data can be backed up and secured from a central server.
Disadvantages of Networks
i) The main disadvantage of networks is that users become dependent upon them. For example,
if a network file server develops a fault, then many users may not be able to run application
programs and get access to shared data. To overcome this, a back-up server can be switched into
action when the main server fails. A fault on a network may also stop users from being able to
access peripherals such as printers and plotters. To minimize this, a network is normally
segmented so that a failure in one part of it does not affect other parts.
ii) Another major problem with networks is that their efficiency is very dependent on the skill
of the systems manager. A badly managed network may operate less efficiently than non-
networked computers.
iii) Also, a badly run network may allow external users into it with little protection against them
causing damage. Damage could also be caused by novices causing problems, such as deleting
important files.
1. If a network file server develops a fault, then users may not be able to run application
programs
2. A fault on the network can cause users to lose data (if the files being worked upon are not saved)
3. If the network stops operating, then it may not be possible to access various resources
4. Users work-throughput becomes dependent upon network and the skill of the systems
manager
5. It is difficult to make the system secure from hackers, novices or industrial espionage
6. Decisions on resource planning tend to become centralized, for example, what word processor is used, what printers are bought, etc.
7. Networks that have grown with little thought can be inefficient in the long term.
8. As traffic increases on a network, the performance degrades unless it is designed properly
9. Resources may be located too far away from some users
10. The larger the network becomes, the more difficult it is to manage.
The following terms describe data transmission over a communication channel:
i) Transmission rate, i.e. frequency (the amount of data that can be transmitted on a channel) and bandwidth (the difference between the highest and lowest frequencies).
ii) Line configurations i.e. point to point (line directly connects the sending and receiving
devices, such as a terminal with a central computer) vs. multipoint (a single line that
interconnects several communication devices to one computer).
iii) Serial (bits are transmitted sequentially, one after the other) vs. Parallel Transmission (bits
are transmitted through separate lines simultaneously).
iv) Direction of transmission i.e. simplex (data can travel in only one direction), half-
duplex (data travels in both directions but only in one direction at a time), and full duplex (data is
transmitted back and forth at the same time).
v) Transmission mode i.e. asynchronous (data is sent one byte or character at a time)
vs. synchronous (data is sent in blocks).
vi) Packet switching, i.e. a technique for dividing electronic messages into packets for transmission over a wide area network to their destination through the most expedient route.
vii) Protocols, i.e. sets of conventions governing the exchange of data between hardware and/or software components in a communication network.
THE INTERNET
Internet Terminologies
Internet: - global connection of computers using TCP/IP protocol for the purpose of
communication and / or connecting different networks
IP: - Short for Internet Protocol; an IP address identifies a computer or other network device on a network using IP or TCP/IP.
WWW: - interconnected system of sites, or servers, on the Internet that store information in multimedia form and share it in hypertext form, with links between similar words or phrases on different sites.
Hypertext Markup Language (HTML): - set of instructions, called tags or markups, that are used in documents on the web to specify document structure and formatting and to link the document to other documents.
Web browser: - software that translates HTML documents and allow a user to view a remote
web page e.g. Internet explorer.
Web page: -document in hypertext markup language (HTML), that is on a computer connected
to the Internet.
Web site: - Internet location of a computer or server on which a hyperlinked document (web-
page) is stored
Web Surfing: - a user’s action of moving from one web page to another by using the computer
mouse to click on the hypertext links.
Search Engines: - type of search tool that allows the user to find specific documents through keyword searches or menu choices.
Uniform Resource Locator (URL): - address that points to a specific resource on the web (a small parsing sketch follows this list of terms).
Electronic Mail (E-MAIL): - system in which computer users, linked by wired or wireless
communication lines, may use their keyboard to post messages and their display screens to read
responses.
Internet Service Provider (ISP): - local or national company that provides unlimited public
access to the Internet and the web.
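The parts of a URL (protocol, server, path, and so on) can be seen by parsing one, as in this small sketch; the address used is an arbitrary, made-up example:

# Small sketch: the parts of a URL, using an arbitrary example address.
from urllib.parse import urlparse

url = "https://www.example.ac.ke/library/search?topic=networks#results"
parts = urlparse(url)

print(parts.scheme)    # https              -> the protocol
print(parts.netloc)    # www.example.ac.ke  -> the web server (site)
print(parts.path)      # /library/search    -> the resource on that server
print(parts.query)     # topic=networks     -> parameters passed to the resource
print(parts.fragment)  # results            -> a location within the page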
Wired Internet connection - this involves the use of physical cabling and can be achieved through:
§ Dial-up connection – this involves the use of telephone lines to dial in to ISPs network for
Internet access.
§ Dedicated link access connection – this involves the use of a permanent leased (e.g. Telkom) line for uninterrupted Internet access. The connection can be analog (i.e. use of modems) or digital (i.e. use of DSL, Router/DTU, Web-ranger, etc.).
Wireless Internet connection - this involves no physical cabling and can be achieved through:
§ Satellite connection – this involves transmitting packets by use of a satellite dish for both downloading and uploading.
§ Infrared Rays Transmission – this involves use of infrared rays to transmit packets.
Uses of the Internet include:
§ Communication – through the use of e-mail and chat, one can send and receive information.
§ E-commerce – online transaction enables a user to shop, order, and even pay for an item by
using electronic cards.
§ Education (E-learning) – virtual learning centers depend on the Internet to transmit
educational materials and lectures. Research materials are also available on the net.
§ Entertainment – Internet make it possible for users to enjoy games, irrespective of distance.
Music and video sites also supplement entertainment.
§ Video-conferencing – educational and other fora also use the Internet to hold meetings and hence facilitate discussion and the sharing of views and experiences.
§ Networking – through the use of VPN (Virtual Private Networks e.g. IPsec) technology,
several networks can be connected to one another. Remote administration is also possible.
There are positive and negative impacts associated with the Internet.
Positive:
§ Source of employment
Negative:
§ Hackers, crackers, and/or trojans breach data privacy and confidentiality.
A) Review Questions
b) The impacts of information technologies on development are both positive and negative. Outline the possible impacts of computers on:
i) Individual,
ii) Organization
iii) Society.
c) Normally, users configure mail client software with server names instead of IP addresses. Explain why.
d) The World Wide Web (WWW) is a global system of connected web pages that provides
services to its users. Explain THREE internet services provided on the web.
e) Discuss the role of Web browser and give THREE examples of web browsers.
f)
i) What is a topology?
iii) Describe star and mesh topologies and outline the advantages and disadvantages of each.
iv) With the aid of a diagram explain the difference between the hierarchical topology and
the ring topology.
g) An upcoming organization uses a lot of files in keeping records; the organization wants to
break down the paperwork and automate the basic routine jobs by interconnecting computers in
the organization. What will be the business benefits of networking this organization?
h) Your institution plans to establish a Distance Learning Education program to provide High
Education learning opportunities to more Kenyans, using a computer network. Advise on the
following:-
i) The most suitable computer network the institution and the learners will require;
ii) The hardware components the institution and the learners will require
iii) The software tools the institution and the learners will require
i) The ministry of lands operates a conventional file system in storing details for the clients,
the ministry is trying to automate the system by interconnecting different networking devices
including hosts, routers, switches, cables, trunking ports and configuring the same network.
ii) Hosts
b) What will be the business benefits of networking this ministry
j) The internet is so popular nowadays that almost anyone uses it. It is accessible by almost any
person who tries to connect to one of its central, main networks. Moreover, it can be accessed by
users of any age and condition. This means that the internet has allowed the interchange of ideas
and materials among scientists, university professors, and students, in addition to provide
servers, resource centers, and online tools for their research and scholarly activities. While this seems very important, an internet user can also experience some unfavorable circumstances as a result of using the internet. Discuss some of these circumstances with relevant examples.
B) Assignments
2. What is internet?
a) A network of interconnected local area networks
b) A collection of unrelated computers
c) Interconnection of wide area networks
d) A single network
4. When a collection of various computers appears as a single coherent system to its clients,
what is this called?
a) mail system
b) networking system
c) computer network
d) distributed system
6. Which of the following devices forwards packets between networks by processing the
routing information included in the packet?
a) firewall
b) bridge
c) hub
d) router
9. Which type of network shares the communication channel among all the machines?
a) anycast network
b) multicast network
c) unicast network
d) broadcast network
a) Ring
b) Bus
c) Star
d) Mesh
11. Which of the following allows LAN users to share computer programs and data?
a) File server
b) Network
c) Communication server
d) Print server
14. Which of the following allows you to connect and login to a remote computer?
a) SMTP
b) HTTP
c) FTP
d) Telnet
15. Which type of topology is best suited for large businesses which must carefully control and
coordinate the operation of distributed branch outlets?
a) Ring
b) Local area
c) Hierarchical
d) Star
16. Which of the following transmission directions listed is not a legitimate channel?
a) Simplex
b) Half Duplex
c) Full Duplex
d) Double Duplex
17. What kind of transmission medium is most appropriate to carry data in a computer network
that is exposed to electrical interferences?
b) Optical fiber
c) Coaxial cable
d) Microwave
b) E-mail system
c) Mailing list
a) Protocol
b) URL
c) E-mail address
d) ICQ
SECURITY: Refers to the protection of computer systems, networks, and data from
unauthorized access, misuse, or damage. It involves implementing measures such as firewalls,
encryption, authentication, and access controls to safeguard against various threats, including
malware, hackers, and unauthorized users. Security ensures that sensitive information remains
confidential and that systems are available when needed.
INTEGRITY: Refers to the assurance that data remains accurate, complete, and unaltered
during storage, transmission, and processing. Maintaining data integrity is crucial for ensuring
the reliability and trustworthiness of information within computer systems. Techniques such as
data validation, checksums, digital signatures, and access controls help prevent unauthorized
modifications and maintain data integrity.
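As a simple illustration of the checksum idea, the sketch below computes a SHA-256 digest of a record and uses it to detect whether the data has been altered (the record itself is invented):

# Integrity check sketch: a SHA-256 digest changes if the data changes.
import hashlib

original = b"Invoice 1001: total KES 25,000"
stored_digest = hashlib.sha256(original).hexdigest()   # saved alongside the data

def is_intact(data: bytes) -> bool:
    return hashlib.sha256(data).hexdigest() == stored_digest

print(is_intact(original))                              # True  - data unaltered
print(is_intact(b"Invoice 1001: total KES 95,000"))     # False - data was modified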
CONTROL: Refers to the policies, procedures, and mechanisms used to manage and regulate
access to computer resources and information. Controls help enforce security policies, ensure
compliance with regulations, and mitigate risks by monitoring and governing system activities.
Access controls, auditing, logging, and identity management are examples of controls used to
maintain security and integrity within computer systems.
RELATION: Security, integrity, and control are interrelated concepts in computer security, each
contributing to the overall protection and reliability of information systems. Security measures
are implemented to safeguard systems and data from unauthorized access and malicious
activities, thereby preserving their integrity. Controls are used to enforce security policies,
manage access rights, and monitor system activities to prevent security breaches and maintain
data integrity. Together, security, integrity, and control form a comprehensive framework for
protecting computer systems and ensuring the confidentiality, integrity, and availability of
information.
Introduction: Computer security threats are ever-evolving and can have significant impacts on
individuals, organizations, and even nations. Understanding these threats and implementing
appropriate countermeasures is essential to protect sensitive data, ensure system integrity, and
maintain the availability of resources. Below is an overview of common computer security
threats and their corresponding countermeasures.
Phishing: A form of social engineering where attackers deceive individuals into providing
sensitive information, such as usernames, passwords, or credit card details, typically via email or
fraudulent websites.
Denial of Service (DoS) Attacks: Attacks aimed at making a system or network resource
unavailable to users by overwhelming it with a flood of illegitimate requests.
SQL Injection: A code injection technique where malicious SQL statements are inserted into an
entry field for execution, allowing attackers to manipulate or access the database.
Password Attacks: Techniques such as brute force, dictionary attacks, and credential stuffing
used to gain unauthorized access to systems by cracking passwords.
Insider Threats: Security risks originating from within the organization, often involving
employees or contractors who misuse their access privileges.
Countermeasures
Anti-Malware Software:
Install and regularly update antivirus and anti-malware programs to detect and remove malicious
software.
Enable real-time scanning and automatic updates for the latest threat definitions.
Phishing Protection:
Implement spam filters and email authentication protocols (e.g., SPF, DKIM, DMARC) to reduce phishing attacks.
Educate users about recognizing and avoiding phishing attempts and suspicious links.
Intrusion Detection and Prevention Systems (IDS/IPS):
Deploy IDS/IPS to monitor network traffic for suspicious activity and prevent potential threats.
Encryption:
Use encryption protocols (e.g., SSL/TLS) to secure data in transit and at rest, ensuring that
intercepted data cannot be read without the decryption key.
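As one illustration of encrypting data at rest, the sketch below uses the third-party Python package cryptography (installed separately with pip); it is an example of the idea, not a recommendation of a particular product:

# Encryption-at-rest sketch using the third-party "cryptography" package
# (install with: pip install cryptography). Fernet provides symmetric,
# authenticated encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this secret; losing it means losing the data
cipher = Fernet(key)

secret = b"Customer card ending 4421"
token = cipher.encrypt(secret)   # ciphertext that is safe to store on disk
print(token)                     # unreadable without the key

print(cipher.decrypt(token))     # b'Customer card ending 4421'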
Access Controls and Authentication:
Implement strong access control policies and enforce the principle of least privilege.
Use multi-factor authentication (MFA) to add an extra layer of security for user logins.
Patch Management:
Keep all software, including operating systems and applications, up to date with the latest patches to mitigate the risk of zero-day exploits.
Secure Coding Practices:
Train developers in secure coding practices to prevent vulnerabilities such as SQL injection.
Conduct regular code reviews and use automated tools to scan for security flaws.
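One concrete secure-coding practice against SQL injection is to use parameterized queries instead of building SQL statements by string concatenation. The sketch below, using Python's built-in sqlite3 module and an invented users table, shows the difference:

# SQL injection sketch: string concatenation vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'clerk')")

user_input = "x' OR '1'='1"   # a classic injection attempt

# UNSAFE: the attacker's input becomes part of the SQL statement itself.
unsafe = conn.execute("SELECT * FROM users WHERE username = '" + user_input + "'")
print(unsafe.fetchall())      # returns every row - the injection succeeded

# SAFE: the ? placeholder treats the input purely as data, never as SQL.
safe = conn.execute("SELECT * FROM users WHERE username = ?", (user_input,))
print(safe.fetchall())        # returns [] - no user is literally named "x' OR '1'='1"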
Network Segmentation:
Divide the network into segments to limit the spread of attacks and contain potential breaches.
Use VLANs and subnetting to isolate critical systems and sensitive data.
Security Awareness Training:
Provide ongoing training for employees on security best practices, recognizing social
engineering attacks, and responding to security incidents.
Incident Response Planning:
Develop and maintain an incident response plan to quickly address and mitigate security
breaches.
Computer security threats are diverse and continually evolving, posing significant risks to
information systems and data integrity. By understanding these threats and implementing a
comprehensive set of countermeasures, organizations can effectively protect themselves against
cyber attacks, minimize potential damage, and ensure the security and reliability of their IT
infrastructure. Proactive measures, continuous monitoring, and user education are key
components in maintaining a robust security posture in the face of ever-changing cyber threats.
COMPUTER SECURITY:
Threat Landscape: Computer security encompasses the protection of systems, networks, and
data from a diverse range of threats such as malware, phishing, hacking, and social engineering.
A computer system in this context includes software, hardware, operating procedures, data
and/or information, and networks.
Objectives of Computer Security: The objectives of computer security encompass several key
principles aimed at protecting information, systems, and networks from unauthorized access,
misuse, or damage. These objectives include:
a) Confidentiality: Ensuring that information is accessible only to authorized users, systems,
and processes. Confidentiality measures prevent sensitive data from being disclosed to or read
by anyone without permission.
b) Integrity: Maintaining the accuracy, completeness, and reliability of data and information.
Integrity measures prevent unauthorized modification, deletion, or corruption of data, ensuring
that it remains trustworthy and reliable for users and applications.
c) Availability: Ensuring that resources, systems, and services are accessible and usable when
needed. Availability measures prevent disruptions and downtime caused by system failures,
cyber attacks, or natural disasters, ensuring uninterrupted access to critical resources and
services.
d) Authentication: Verifying the identity of users and entities accessing computer systems,
networks, and data. Authentication measures confirm the legitimacy of user credentials, such as
usernames and passwords, biometric data, or digital certificates, before granting access to
protected resources.
By addressing these objectives, computer security aims to protect information, systems, and
networks from a wide range of threats and vulnerabilities, ensuring the confidentiality, integrity,
and availability of data and resources.
Security Measures
a) Firewalls: Firewalls are network security devices that monitor and control incoming and
outgoing network traffic based on predetermined security rules. They act as a barrier between
trusted internal networks and untrusted external networks, filtering traffic to prevent
unauthorized access and potential security threats.
b) Intrusion Detection Systems (IDS): IDSs are security tools that monitor network or system
activities for malicious activities or policy violations. They analyze network traffic, system logs,
and other sources of information to identify suspicious behavior and alert administrators to
potential security incidents.
c) Intrusion Prevention Systems (IPS): IPSs build upon the capabilities of IDSs by not only
detecting but also actively preventing potential security threats. They can automatically block or
mitigate suspicious network traffic or system activities in real-time to prevent attacks or
unauthorized access.
d) Antivirus Software: Antivirus software is designed to detect, prevent, and remove malicious
software, such as viruses, worms, and Trojans, from computer systems. It scans files and
programs for known malware signatures and behaviors, quarantines or deletes infected files, and
provides real-time protection against new threats.
e) Encryption: Encryption is the process of converting plaintext data into ciphertext to protect it
from unauthorized access or interception. It ensures data confidentiality by making information
unreadable to anyone without the appropriate decryption key. Encryption is used to secure data
transmission over networks, protect sensitive information stored on devices, and ensure the
integrity of data in transit.
f) Access Controls: Access controls limit and regulate user access to computer systems,
networks, and data. They enforce security policies by defining user permissions, privileges, and
restrictions based on roles, responsibilities, and least privilege principles. Access controls can
include password authentication, biometric verification, access control lists (ACLs), and role-
based access control (RBAC).
g) Patch Management: Patch management involves regularly updating and applying software
patches and security updates to address known vulnerabilities and weaknesses in operating
systems, applications, and firmware. It helps reduce the risk of exploitation by malicious actors
and ensures that systems remain secure and protected against emerging threats.
h) Security Awareness Training: Security awareness training educates users about security
risks, best practices, and organizational policies to promote a culture of security within an
organization. It helps users recognize potential threats, avoid common pitfalls, and take
appropriate actions to protect themselves and their organization's assets.
These computer security mechanisms work together to provide a multi-layered defense strategy
against various cyber threats and vulnerabilities, helping organizations safeguard their digital
assets and maintain the confidentiality, integrity, and availability of information.
Security Policies: Establishing and enforcing security policies to define acceptable use, access
levels, and security protocols within an organization.
Incident Response: Developing procedures for incident response and disaster recovery to
minimize the impact of security breaches and ensure business continuity.
Data Integrity
Importance: Data integrity ensures the accuracy, completeness, and reliability of data
throughout its lifecycle. It is essential for maintaining the trustworthiness of information and
preventing unauthorized alterations.
Techniques: Techniques such as data validation, checksums, digital signatures, and encryption
are employed to protect data integrity and prevent unauthorized modifications or corruption
(a brief checksum sketch follows this section).
Validation: Data validation techniques verify the accuracy and reliability of data inputs to
ensure that only valid and authorized information is processed.
Encryption: Encryption techniques transform data into a secure format to prevent unauthorized
access or tampering during transmission or storage.
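As a simple illustration of the checksum technique mentioned above, the following sketch uses
Python's standard hashlib module (the sample data is invented for illustration) to record a
SHA-256 digest of some data and later detect that the data has been altered:

# Minimal integrity-check sketch: compare a SHA-256 checksum recorded for known-good
# data against the checksum of data received later. Sample values are illustrative.
import hashlib

def sha256_of_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"Quarterly sales figures: 1,204 units"
recorded_checksum = sha256_of_bytes(original)        # stored alongside the data

received = b"Quarterly sales figures: 9,204 units"   # data altered in transit
if sha256_of_bytes(received) == recorded_checksum:
    print("Integrity check passed: data is unchanged.")
else:
    print("Integrity check FAILED: data was modified or corrupted.")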
Controls
Access Controls: Access controls regulate and manage user access to computer systems,
networks, and data. This includes mechanisms like passwords, biometrics, access control lists
(ACLs), and role-based access control (RBAC).
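The sketch below illustrates role-based access control in Python; the roles, permissions, and
users are invented for illustration, and a real system would store them in a directory or
database and tie them to authentication.

# Minimal role-based access control (RBAC) sketch with invented roles and users.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete", "manage_users"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

USER_ROLES = {
    "alice": "admin",
    "bob":   "viewer",
}

def is_allowed(user: str, permission: str) -> bool:
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "delete"))  # True  - admins may delete
print(is_allowed("bob", "write"))     # False - viewers are read-only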
PASSWORD MANAGEMENT:
Password Creation:
Complexity: Encourage the use of complex passwords that include a mix of uppercase and
lowercase letters, numbers, and special characters.
Length: Recommend a minimum password length, typically at least 12 characters.
Avoid Common Passwords: Discourage the use of easily guessable passwords like
"password123" or "admin".
Password Storage:
Hashing: Store passwords as one-way hashes rather than in a reversible (plaintext or encrypted)
form, using strong password-hashing algorithms such as bcrypt, PBKDF2, or Argon2; fast
general-purpose hashes such as plain SHA-256 are weaker choices for passwords.
Salted Hashing: Combine hashing with a unique random salt per password to protect against
rainbow table and other precomputation attacks (see the sketch after this subsection).
Password Managers: Encourage the use of password managers to generate, store, and retrieve
complex passwords securely.
Multi-Factor Authentication (MFA): Implement MFA to add an extra layer of security beyond
just passwords.
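A minimal sketch of salted password hashing follows. It uses PBKDF2-HMAC-SHA256 from
Python's standard hashlib module as a stand-in for the bcrypt/Argon2 functions mentioned above
(which require third-party packages); the iteration count is an illustrative assumption.

# Minimal salted-hashing sketch with PBKDF2-HMAC-SHA256 from the standard library.
# Real systems typically use a dedicated library (e.g., bcrypt or argon2-cffi) and
# store the salt, iteration count, and hash together for each user.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; choose based on current guidance and hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                      # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

salt, digest = hash_password("T4bleLamp#Orbit!")
print(verify_password("T4bleLamp#Orbit!", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))       # False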
Password Changes:
Regular Updates: Encourage regular password changes, though the necessity of this practice is
debated. Emphasize changing passwords if there is suspicion of compromise.
Secure Reset Processes: Implement secure processes for password resets, such as sending reset
links via email with verification questions.
User Education:
Awareness Training: Educate users about the importance of strong passwords and the risks
associated with poor password practices.
Phishing Awareness: Train users to recognize phishing attempts that could lead to password
compromise.
PASSWORD POLICY:
Policy Definition:
Scope: Define the scope of the password policy, including all systems and applications where it
applies.
Complexity Rules: Define the complexity requirements for passwords (e.g., mix of characters,
minimum length).
Reuse Restrictions: Prohibit the reuse of previous passwords and ensure a history of previous
passwords is maintained.
Account Lockout:
Lockout Threshold: Lock an account after a defined number of consecutive failed login attempts
(a short sketch follows below).
Lockout Duration: Define the duration of the lockout period and the process for unlocking
accounts.
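The following is a minimal in-memory sketch of an account-lockout rule; the threshold of five
attempts and the 15-minute lockout window are illustrative assumptions, not values prescribed by
this material.

# Minimal account-lockout sketch: lock an account after too many failed attempts.
# Thresholds are illustrative; real systems persist this state and log the events.
import time

MAX_FAILED_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60   # 15-minute lockout window

failed_attempts: dict[str, int] = {}
locked_until: dict[str, float] = {}

def is_locked(user: str) -> bool:
    return time.time() < locked_until.get(user, 0.0)

def record_failed_login(user: str) -> None:
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    if failed_attempts[user] >= MAX_FAILED_ATTEMPTS:
        locked_until[user] = time.time() + LOCKOUT_SECONDS
        failed_attempts[user] = 0   # reset the counter once the lockout is applied

def record_successful_login(user: str) -> None:
    failed_attempts.pop(user, None)  # a successful login clears the failure counter

for _ in range(5):
    record_failed_login("bob")
print(is_locked("bob"))   # True - bob is locked out for 15 minutes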
Password Expiry:
Expiration Period: Set a password expiration period, though the current best practice often
suggests focusing more on password strength and less on regular changes.
Password Protection:
Secure Entry: Encourage secure entry of passwords, avoiding logging in on public or unsecured
devices and networks.
Incident Response:
Mitigation: Outline steps for responding to a password breach, including resetting affected
passwords and reviewing security protocols.
Regular Review: Regularly review and update the password policy to address new threats and
incorporate the latest best practices.
Compliance Audits: Conduct periodic audits to ensure compliance with the password policy and
identify areas for improvement.
By implementing robust password management practices and policies, organizations can
significantly enhance their security posture, reduce the risk of unauthorized access, and protect
sensitive information from cyber threats.
Auditing and Monitoring: Auditing and monitoring systems track and record user activities
within an organization's IT infrastructure to detect and investigate security incidents or policy
violations.
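A minimal sketch of audit logging with Python's standard logging module is shown below; the
event names and the audit.log file path are invented for illustration.

# Minimal audit-logging sketch: record who did what and when to an append-only log
# file. Real audit trails are typically shipped to a central, tamper-resistant log
# system and reviewed regularly for suspicious activity.
import logging

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit.addHandler(handler)

def log_event(user: str, action: str, resource: str) -> None:
    audit.info("user=%s action=%s resource=%s", user, action, resource)

log_event("alice", "LOGIN_SUCCESS", "workstation-07")
log_event("alice", "FILE_READ", "/finance/q3_report.xlsx")
log_event("bob", "LOGIN_FAILURE", "workstation-07")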
Review Questions
3. What are the three core principles of computer security (CIA triad)?
4. Explain the concept of access controls. How do they help in maintaining data security?
5. Explain the concept of a security audit and its importance in maintaining computer security.