Chapter 4 HCI
In the 1940s and 1950s, computing power grew with advances in hardware such as vacuum tubes,
transistors, and integrated circuits. By the 1960s, J. C. R. Licklider promoted human-centered computing,
leading to time-sharing systems that allowed multiple users to interact with computers. This shift
enabled interactive, real-time collaboration between users and computers, marking the birth of true
human-computer interaction.
In the mid-1950s, researchers began experimenting with displaying information on video screens,
initially for military use. Ivan Sutherland's 1962 Sketchpad program revolutionized computing by
enabling the creation and manipulation of visual models on a screen, showing that computers could
handle visual abstractions. Sketchpad demonstrated how computers could present data in human-
friendly ways, making interaction more intuitive.
Douglas Engelbart's goal was to use computers to enhance human problem-solving by providing tools for
learning and collaboration. His team at the Stanford Research Institute (SRI) developed innovative technologies such as word processing
and the mouse, with a focus on creating programming toolkits. Engelbart’s approach, known as
bootstrapping, involved building simple components to create increasingly complex and powerful
systems.
In the 1970s, Seymour Papert’s LOGO language made programming accessible to children, allowing them
to control a turtle to draw shapes. Alan Kay, inspired by Papert and Engelbart, envisioned personal
computers that were powerful yet easy for novices to use. His work at Xerox PARC led to the
development of Smalltalk and the concept of the Dynabook, a vision of a portable, powerful personal
computer.
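The appeal of LOGO was that a few commands like FORWARD and RIGHT let a child reason about geometry through the turtle's movements. A minimal sketch in Python of that idea, tracking only the turtle's position and heading (the class and method names here are illustrative, not LOGO's actual implementation):

```python
import math

class Turtle:
    """Minimal LOGO-style turtle: tracks position, heading, and the path drawn."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 90.0                   # degrees; 90 = pointing "up"
        self.path = [(self.x, self.y)]

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)
        self.path.append((self.x, self.y))

    def right(self, angle):
        self.heading = (self.heading - angle) % 360

# The classic LOGO square: REPEAT 4 [FORWARD 100 RIGHT 90]
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)
# Four forward/right pairs trace a square and return the turtle to its start.
```

The child's mental model is the point: "walk forward, turn right, repeat four times" maps directly onto the shape that appears.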
Personal computing systems evolved to support multitasking and flexible user interactions, allowing
users to switch between tasks easily. The WIMP (Windows, Icons, Menus, Pointers) interface, introduced
by Xerox in 1981, became a common way of organizing separate threads of user–system dialogue on a display.
Metaphors are powerful tools for teaching new concepts, like the turtle metaphor in LOGO for children
or the desktop metaphor for file management. However, metaphors can become limiting or confusing
when users try to apply them beyond their initial context; the typewriter metaphor for word processors and the floppy-disk save icon, for example, outlive the technologies they refer to.
In virtual reality, metaphors take on a more immersive role, interpreting user actions within a virtual
world, but they also face challenges like cultural biases and the need for complex tracking systems.
Direct manipulation refers to interacting with objects on the screen in a visual and intuitive way, like
dragging a file from one folder to another. It provides instant feedback, makes errors easier to correct,
and helps users feel more engaged by directly interacting with the task at hand. This concept also blends
input and output, making the user experience seamless. Related to this is the WYSIWYG (What You See Is
What You Get) interface, which shows users a close visual representation of their final product, although
it can have limitations for more complex tasks.
Direct manipulation interfaces make simple tasks easier but can struggle with more complex ones. In
contrast, language-based systems, where the interface interprets user commands, allow for more
complex operations but may require deeper understanding. Combining both paradigms, like in
programming by example, allows users to perform tasks while the system generates repeatable
procedures.
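Programming by example can be sketched concretely: the user performs direct-manipulation actions, the system records them, and the recorded sequence becomes a repeatable procedure. A minimal sketch, where the recorder and the `move` action are hypothetical illustrations rather than any real system's API:

```python
class MacroRecorder:
    """Records direct-manipulation actions so they can be replayed as a procedure."""
    def __init__(self):
        self.steps = []

    def record(self, action, *args):
        self.steps.append((action, args))     # remember the action...
        return action(*args)                  # ...while performing it immediately

    def replay(self):
        """Re-run every recorded step in order: the generated 'program'."""
        return [action(*args) for action, args in self.steps]

# Hypothetical direct-manipulation action: dragging a file between folders.
def move(item, src, dst):
    return f"moved {item} from {src} to {dst}"

rec = MacroRecorder()
rec.record(move, "report.txt", "Inbox", "Archive")
rec.record(move, "photo.png", "Inbox", "Archive")
log = rec.replay()                            # the demonstration, now repeatable
```

The user never writes code; the system generalizes a procedure from the demonstrated actions.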
4.2.9 Hypertext:
In 1945, Vannevar Bush envisioned a device called the "memex" to help manage and retrieve
interconnected information, mimicking human associative links. Ted Nelson expanded on this idea,
coining the term "hypertext" in the 1960s for non-linear text linking. This led to the development of
hypermedia, a system for storing and navigating all forms of media in a non-linear way.
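The essence of hypertext is a graph rather than a linear sequence: chunks of text connected by named links that the reader may follow in any order. A minimal sketch of that structure (the `Node` class and the sample nodes are illustrative, not any particular hypertext system):

```python
class Node:
    """A hypertext node: a chunk of text plus named links to other nodes."""
    def __init__(self, title, text):
        self.title = title
        self.text = text
        self.links = {}                       # anchor word -> target Node

    def link(self, anchor, target):
        self.links[anchor] = target

# A tiny non-linear document built from the ideas in this section.
memex = Node("Memex", "Bush's 1945 vision of associative information trails.")
hyper = Node("Hypertext", "Nelson's term for non-linear linked text.")
media = Node("Hypermedia", "Hypertext generalised to all forms of media.")

memex.link("hypertext", hyper)
hyper.link("hypermedia", media)
hyper.link("memex", memex)                    # links may form cycles, unlike linear text
```

Following `links` from node to node mimics the associative trails Bush described, with no fixed reading order.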
4.2.10 Multi-modality:
Most interactive systems use traditional input devices like a keyboard and mouse, along with a display
screen for output. Multi-modal systems, however, use multiple communication channels (e.g., visual,
audio, touch) simultaneously for both input and output, mimicking human interaction. These systems
are a key area of current research in interactive design.
The World Wide Web (WWW), developed by Tim Berners-Lee in 1989, transformed the internet by making
multimedia information globally accessible through easy-to-use graphical browsers. It is built on the
HTTP transfer protocol and the HTML markup language, allowing anyone with internet access to publish content. The web has grown into a
social and commercial phenomenon, reshaping the way people connect and share information.
Software agents act on behalf of users, performing tasks like filtering emails or suggesting actions based
on user behavior. These agents can be simple, following predefined rules, or intelligent, learning from
user actions. The challenge lies in developing an effective communication system between the user and
the agent, especially when acting in the user's absence.
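A simple rule-following agent of the kind described above can be sketched in a few lines: each rule pairs a test on a message with an action, and the agent applies the first rule that matches. The rule format, folder names, and addresses here are illustrative assumptions, not any real mail system's interface:

```python
# Minimal sketch of a rule-based software agent for filtering mail.
def make_rule(predicate, folder):
    """A rule pairs a test on a message with a destination folder."""
    return (predicate, folder)

RULES = [
    make_rule(lambda m: "unsubscribe" in m["body"].lower(), "Spam"),
    make_rule(lambda m: m["sender"].endswith("@work.example"), "Work"),
]

def file_message(message, rules, default="Inbox"):
    """The agent acts on the user's behalf: the first matching rule wins."""
    for predicate, folder in rules:
        if predicate(message):
            return folder
    return default

msg = {"sender": "boss@work.example", "body": "Status meeting at 10."}
```

An intelligent agent would go further and learn such rules by observing which messages the user files where, rather than requiring them to be written by hand.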
Ubiquitous computing aims to make computers part of our everyday environment, blending seamlessly
into daily life. This involves creating devices of various sizes, from personal gadgets like phones to large
public displays, all connected and interactive. Technologies like wireless networking and voice
recognition are helping bring this vision closer to reality.