➢ A virtual machine monitor (VMM) must satisfy three properties:
➢ Fidelity.
❖ A VMM provides an environment for programs that is essentially identical to the
original machine.
➢ Performance.
❖ Programs running within that environment show only minor performance
decreases.
➢ Safety.
❖ The VMM is in complete control of system resources.
➢ By the late 1990s, Intel 80x86 CPUs had become common, fast, and rich in features.
➢ Both Xen and VMware created technologies, still used today, to allow guest operating
systems to run on the 80x86.
➢ Virtualization has expanded to include all common CPUs, many commercial and
open-source tools, and many operating systems.
❖ For example, the open-source VirtualBox project (http://www.virtualbox.org)
provides a program that runs on Intel x86 and AMD 64 CPUs and on Windows,
Linux, macOS, and Solaris host operating systems.
❖ Possible guest operating systems include many versions of Windows, Linux,
Solaris, and BSD, including even MS-DOS and IBM OS/2.
➢ One important advantage of virtualization is that the host system is protected from the
virtual machines, just as the virtual machines are protected from each other.
➢ A virus inside a guest operating system might damage that operating system but is
unlikely to affect the host or the other guests.
➢ Since each virtual machine is almost completely isolated from all other virtual
machines, there are almost no protection problems.
➢ A potential disadvantage of isolation is that it can prevent sharing of resources.
➢ Two approaches to providing sharing have been implemented.
❖ First, it is possible to share a file-system volume and thus to share files.
❖ Second, it is possible to define a network of virtual machines, each of which can
send information over the virtual communications network.
➢ One feature common to most virtualization implementations is the ability to freeze, or
suspend, a running virtual machine.
➢ Many operating systems provide that basic feature for processes, but VMMs go one
step further and allow copies and snapshots to be made of the guest. The copy can be
used to create a new VM or to move a VM from one machine to another with its current
state intact.
➢ The guest can then resume where it was, as if on its original machine, creating a clone.
The snapshot records a point in time, and the guest can be reset to that point if necessary
(for example, if a change was made but is no longer wanted).
➢ A virtual machine system is a perfect vehicle for operating-system research and
development.
➢ Normally, the operating system runs on and controls the entire machine, so the system
must be stopped and taken out of use while changes are made and tested. This period is
commonly called system-development time.
➢ A virtual-machine system can eliminate much of this problem. System
programmers are given their own virtual machine, and system development is done on
the virtual machine instead of on a physical machine.
➢ Another advantage of virtual machines for developers is that multiple operating systems
can run concurrently on the developer’s workstation. This virtualized workstation
allows for rapid porting and testing of programs in varying environments.
➢ In addition, multiple versions of a program can run, each in its own isolated operating
system, within one system.
➢ A major advantage of virtual machines in production data-centre use is system
consolidation, which involves taking two or more separate systems and running them
in virtual machines on one system. Such physical-to-virtual conversions result in
resource optimization, since many lightly used systems can be combined to create one
more heavily used system.
➢ A virtual environment might include 100 physical servers, each running 20 virtual
servers. Without virtualization, 2,000 servers would require several system
administrators. With virtualization and its tools, the same work can be managed by one
or two administrators.
✓ One of the tools that make this possible is templating, in which one standard
virtual machine image, including an installed and configured guest operating
system and applications, is saved and used as a source for multiple running
VMs.
✓ Other features include managing the patching of all guests, backing up and
restoring the guests, and monitoring their resource use.
➢ Virtualization can improve not only resource utilization but also resource management.
➢ Some VMMs include a live migration feature that moves a running guest from one
physical server to another without interrupting its operation or active network
connections.
➢ Virtualization has laid the foundation for many other advances in computer facility
implementation, management, and monitoring.
➢ Cloud computing, for example, is made possible by virtualization in which resources
such as CPU, memory, and I/O are provided as services to customers using Internet
technologies.
5.3 BUILDING BLOCKS
➢ The ability to virtualize depends on the features provided by the CPU. If the features
are sufficient, then it is possible to write a VMM that provides a guest environment.
Otherwise, virtualization is impossible.
➢ VMMs use several techniques to implement virtualization, including trap-and-emulate
and binary translation.
➢ The important concept found in most virtualization options is the implementation of a
virtual CPU (VCPU).
➢ The VCPU does not execute code. Rather, it represents the state of the CPU as the guest
machine believes it to be. For each guest, the VMM maintains a VCPU representing
that guest’s current CPU state.
➢ When the guest is context-switched onto a CPU by the VMM, information from the
VCPU is used to load the right context, much as a general-purpose operating system
would use the PCB.
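➢ As a rough illustration, the C sketch below shows what a VCPU record might look like.
The field names are illustrative (loosely modeled on x86-64) and do not represent any
particular VMM's data structure:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative VCPU record: the CPU state as the guest believes it
     * to be. No real VMM's structure is implied. */
    struct vcpu {
        uint64_t rax, rbx, rcx, rdx, rsi, rdi, rsp, rbp;  /* guest GPRs */
        uint64_t rip;           /* guest program counter */
        uint64_t rflags;        /* guest flags register */
        uint64_t cr3;           /* guest page-table base (a guest physical address) */
        int      virtual_mode;  /* 0 = virtual user mode, 1 = virtual kernel mode */
    };

    int main(void)
    {
        /* On a guest context switch, the VMM loads this state onto a real
         * CPU, much as an OS restores a process's context from its PCB. */
        struct vcpu v = { .rip = 0xfff0, .virtual_mode = 1 };
        printf("resume guest at rip=%#llx in virtual %s mode\n",
               (unsigned long long)v.rip,
               v.virtual_mode ? "kernel" : "user");
        return 0;
    }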
5.3.1 Trap-and-Emulate
➢ On a typical dual-mode system, the virtual machine guest can execute only in user mode
(unless extra hardware support is provided). The kernel, of course, runs in kernel mode,
and it is not safe to allow user-level code to run in kernel mode.
➢ Just as the physical machine has two modes, so must the virtual machine. Consequently,
we must have a virtual user mode and a virtual kernel mode, both of which run in
physical user mode.
➢ Those actions that cause a transfer from user mode to kernel mode on a real machine
(such as a system call, an interrupt, or an attempt to execute a privileged instruction)
must also cause a transfer from virtual user mode to virtual kernel mode in the virtual
machine.
➢ When the kernel in the guest attempts to execute a privileged instruction, that is an error
(because the system is in user mode) and causes a trap to the VMM in the real machine.
The VMM gains control and executes (or "emulates") the action that was attempted by
the guest kernel on behalf of the guest. It then returns control to the virtual machine.
This is called the trap-and-emulate method.
Figure 5.1: Trap-and-emulate virtualization implementation
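➢ The C sketch below outlines the trap-and-emulate control flow described above. Every
function here is a toy stand-in for VMM internals, invented for illustration:

    #include <stdio.h>

    enum trap_kind { TRAP_SYSCALL, TRAP_PRIV_INSTR, TRAP_INTERRUPT };

    struct vcpu { int virtual_mode; };  /* abbreviated; see the earlier sketch */

    static void emulate_privileged(struct vcpu *v)
    {
        (void)v;
        puts("VMM emulates the privileged instruction on behalf of the guest");
    }

    static void deliver_to_guest_kernel(struct vcpu *v, enum trap_kind k)
    {
        v->virtual_mode = 1;  /* virtual user mode -> virtual kernel mode */
        printf("VMM reflects trap %d into the guest kernel\n", (int)k);
    }

    static void resume_guest(struct vcpu *v)
    {
        (void)v;
        puts("VMM returns control to the virtual machine");
    }

    /* Entry point when the hardware traps to the VMM in physical user mode. */
    static void vmm_trap_handler(struct vcpu *v, enum trap_kind kind)
    {
        if (kind == TRAP_PRIV_INSTR)
            emulate_privileged(v);             /* guest kernel attempted a priv insn */
        else
            deliver_to_guest_kernel(v, kind);  /* system call or interrupt */
        resume_guest(v);
    }

    int main(void)
    {
        struct vcpu v = { 0 };
        vmm_trap_handler(&v, TRAP_PRIV_INSTR);
        vmm_trap_handler(&v, TRAP_SYSCALL);
        return 0;
    }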
➢ With privileged instructions, time becomes an issue. All nonprivileged instructions run
natively on the hardware, providing the same performance for guests as native applications.
➢ Privileged instructions create extra overhead, however, causing the guest to run more
slowly than it would natively.
➢ In addition, the CPU is being multiprogrammed among many virtual machines, which can
further slow down the virtual machines in unpredictable ways.
➢ This problem has been approached in various ways.
❖ IBM VM, for example, allows normal instructions for the virtual machines to
execute directly on the hardware.
❖ Only the privileged instructions (needed mainly for I/O) must be emulated and hence
execute more slowly.
➢ In general, with the evolution of hardware, the performance of trap-and-emulate
functionality has been improved, and cases in which it is needed have been reduced.
❖ For example, many CPUs now have extra modes added to their standard dual-mode
operation.
❖ The VCPU need not keep track of what mode the guest operating system is in,
because the physical CPU performs that function.
❖ In fact, some CPUs provide guest CPU state management in hardware, so the VMM
need not supply that functionality, removing the extra overhead.
5.3.2 Binary Translation
➢ Without some level of hardware support, virtualization would be impossible. The more
hardware support available within a system, the more feature-rich and stable the virtual
machines can be and the better they can perform.
➢ In the Intel x86 CPU family, Intel added new virtualization support (the VT-x instructions)
in successive generations beginning in 2005. With this hardware support, binary
translation is no longer needed.
➢ In fact, all major general-purpose CPUs now provide extended hardware support for
virtualization. For example, AMD virtualization technology (AMD-V) has appeared in
several AMD processors starting in 2006.
➢ It defines two new modes of operation—host and guest—thus moving from a dual-mode
to a multimode processor.
➢ The VMM can enable host mode, define the characteristics of each guest virtual machine,
and then switch the system to guest mode, passing control of the system to a guest operating
system that is running in the virtual machine.
➢ In guest mode, the virtualized operating system thinks it is running on native hardware and
sees whatever devices are included in the host’s definition of the guest.
➢ If the guest tries to access a virtualized resource, then control is passed to the VMM to
manage that interaction.
➢ The functionality in Intel VT-x is similar, providing root and nonroot modes, equivalent to
host and guest modes. Both provide guest VCPU state data structures to load and save guest
CPU state automatically during guest context switches.
➢ In addition, virtual machine control structures (VMCSs) are provided to manage guest and
host state, as well as various guest execution controls, exit controls, and information about
why guests exit back to the host.
Figure 5.3: Hardware Support for Virtualization in the Intel x86 Processor
➢ AMD and Intel have also addressed memory management in the virtual environment. With
AMD's RVI and Intel's EPT memory-management enhancements, VMMs no longer need
to implement nested page tables (NPTs) in software.
➢ All modern x86 CPUs include a memory management unit (MMU) and a translation
lookaside buffer (TLB) to optimize virtual memory performance.
➢ However, in a virtual execution environment, virtual memory virtualization involves
sharing the physical system memory in RAM and dynamically allocating it to the physical
memory of the VMs.
➢ That means a two-stage mapping process should be maintained by the guest OS and the
VMM, respectively: virtual memory to physical memory and physical memory to machine
memory.
➢ Furthermore, MMU virtualization should be supported, which is transparent to the guest
OS.
➢ The guest OS continues to control the mapping of virtual addresses to the physical memory
addresses of VMs.
➢ But the guest OS cannot directly access the actual machine memory. The VMM is
responsible for mapping the guest physical memory to the actual machine memory.
Figure 5.4: Virtual Memory Address Translation
➢ When a virtual address needs to be translated, the CPU will first look for the L4 page table
pointed to by Guest CR3.
➢ Since the address in Guest CR3 is a guest physical address (GPA), the CPU needs to
convert it to a host physical address (HPA) using the EPT (extended page table).
➢ In this procedure, the CPU will check the EPT TLB to see if the translation is there. If there
is no required translation in the EPT TLB, the CPU will look for it in the EPT.
➢ If the CPU cannot find the translation in the EPT, an EPT violation exception will be raised.
➢ When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3
page table by using the guest virtual address (GVA) and the content of the L4 page table.
➢ If the entry corresponding to the GVA in the L4 page table is not present, the CPU will
generate a page-fault interrupt and will let the guest OS kernel handle the interrupt.
➢ When the GPA of the L3 page table is obtained, the CPU will look in the EPT to get the
HPA of the L3 page table.
➢ To get the HPA corresponding to a GVA, the CPU needs to walk the EPT five times (once
for Guest CR3 and once per guest page-table level), and each walk requires four memory
accesses.
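➢ That arithmetic can be made concrete with a small C program. It assumes 4-level paging
at both stages and uses the accounting above (one EPT walk for Guest CR3 plus one per
guest page-table level):

    #include <stdio.h>

    #define GUEST_LEVELS 4  /* guest page-table levels L4..L1 */
    #define EPT_LEVELS   4  /* memory accesses per EPT walk   */

    int main(void)
    {
        /* One EPT walk for Guest CR3 plus one per guest table level:
         * GUEST_LEVELS + 1 = 5 walks, as stated above. */
        int ept_walks      = GUEST_LEVELS + 1;
        int ept_accesses   = ept_walks * EPT_LEVELS;  /* 5 * 4 = 20 */
        int guest_accesses = GUEST_LEVELS;            /* reading each guest PTE */

        printf("EPT walks: %d, total memory accesses: %d\n",
               ept_walks, ept_accesses + guest_accesses);  /* 5, 24 */
        return 0;
    }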
5.4 TYPES OF VIRTUAL MACHINES AND THEIR IMPLEMENTATIONS
➢ Type 0 hypervisors have existed for many years under many names, including
“partitions” and “domains.”
➢ They are a hardware feature, and that brings its own positives and negatives.
➢ Operating systems need do nothing special to take advantage of their features.
➢ The VMM itself is encoded in the firmware and loaded at boot time. In turn, it loads
the guest images to run in each partition.
➢ The feature set of a type 0 hypervisor tends to be smaller than those of the other types
because it is implemented in hardware.
❖ For example, a system might be split into four virtual systems, each with
dedicated CPUs, memory, and I/O devices.
❖ Each guest believes that it has dedicated hardware because it does, simplifying
many implementation details.
➢ I/O presents some difficulty, because it is not easy to dedicate I/O devices to guests if
there are not enough.
➢ In these cases, the hypervisor manages shared access or grants all devices to a control
partition.
➢ In the control partition, a guest operating system provides services (such as networking)
via daemons to other guests, and the hypervisor routes I/O requests appropriately.
➢ Some type 0 hypervisors are even more sophisticated and can move physical CPUs and
memory between running guests. In these cases, the guests are paravirtualized: aware
of the virtualization and assisting in its execution.
❖ For example, a guest must watch for signals from the hardware or VMM that a
hardware change has occurred, probe its hardware devices to detect the change,
and add or subtract CPUs or memory from its available resources.
➢ A type 0 hypervisor can run multiple guest operating systems (one in each hardware
partition).
➢ All of those guests, because they are running on raw hardware, can in turn be VMMs.
➢ Essentially, each guest operating system in a type 0 hypervisor is a native operating
system with a subset of hardware made available to it.
Figure 5.5: Type 0 hypervisor
➢ A type 1 hypervisor acts like a lightweight operating system and runs directly on the
host’s hardware.
➢ The most commonly deployed type of hypervisor is the type 1 or bare-metal hypervisor,
where virtualization software is installed directly on the hardware where the operating
system is normally installed.
➢ Because bare-metal hypervisors are isolated from the attack-prone operating system,
they are extremely secure.
➢ In addition, they generally perform better and more efficiently than hosted hypervisors.
➢ For these reasons, most enterprise companies choose bare-metal hypervisors for data
centre computing needs.
➢ By using type 1 hypervisors, data-center managers can control and manage the
operating systems and applications in new and sophisticated ways.
➢ An important benefit is the ability to consolidate more operating systems and
applications onto fewer systems.
❖ For example, rather than having ten systems running at 10 percent utilization each,
a data center might have one server manage the entire load.
➢ If utilization increases, guests and their applications can be moved to less-loaded
systems live, without interruption of service.
➢ Using snapshots and cloning, the system can save the states of guests and duplicate
those states.
➢ Another type of type 1 hypervisor includes various general-purpose operating systems
with VMM functionality.
❖ Here, an operating system such as Red Hat Enterprise Linux, Windows, or Oracle
Solaris performs its normal duties as well as providing a VMM that allows other
operating systems to run as guests.
➢ These hypervisors are usually used in environments where there are a small number of
servers.
➢ They do not need a separate management console to set up and manage the virtual
machines. These operations can typically be done on the server on which the hypervisor
is hosted. The hypervisor is basically treated as an application on the host system.
➢ The figure above illustrates the concept of a paravirtualized VM architecture. The guest
operating systems are paravirtualized.
➢ They are assisted by an intelligent compiler that replaces the nonvirtualizable OS
instructions with hypercalls, as illustrated in the figure.
➢ The traditional x86 processor offers four instruction-execution rings: Rings 0, 1, 2, and 3.
➢ The lower the ring number, the higher the privilege of the instructions being executed.
➢ The OS is responsible for managing the hardware and the privileged instructions to execute
at Ring 0, while user-level applications run at Ring 3.
➢ When the x86 processor is virtualized, a virtualization layer is inserted between the
hardware and the OS.
➢ According to the x86 ring definition, the virtualization layer should also be installed at Ring
0.
➢ As shown in the figure above, paravirtualization replaces nonvirtualizable instructions
with hypercalls that communicate directly with the hypervisor or VMM.
➢ However, when the guest OS kernel is modified for virtualization, it can no longer run on
the hardware directly.
➢ The guest OS kernel is modified to replace the privileged and sensitive instructions with
hypercalls to the hypervisor or VMM.
➢ The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies
that the guest OS may not be able to execute some privileged and sensitive instructions.
➢ The privileged instructions are implemented by hypercalls to the hypervisor. After
replacing the instructions with hypercalls, the modified guest OS emulates the behavior of
the original guest OS.
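➢ The C sketch below illustrates the idea: a privileged operation in the guest kernel is
rewritten as a call into the hypervisor. The hypercall number and entry point are
invented for illustration:

    #include <stdio.h>

    enum { HCALL_SET_PAGE_TABLE = 1, HCALL_DISABLE_IRQ = 2 };

    /* Stand-in for the hypervisor entry point; in a real paravirtualized
     * kernel this is a single trapping instruction or call gate. */
    static long vmm_hypercall(int number, unsigned long arg)
    {
        printf("hypercall %d(arg=%#lx)\n", number, arg);
        return 0;
    }

    /* An unmodified guest kernel would load CR3 with a privileged
     * instruction, e.g. asm volatile("mov %0, %%cr3" :: "r"(base)).
     * The paravirtualized kernel expresses the same operation as a
     * hypercall instead. */
    static long set_page_table(unsigned long base)
    {
        return vmm_hypercall(HCALL_SET_PAGE_TABLE, base);
    }

    int main(void)
    {
        return (int)set_page_table(0x1000);  /* toy page-table base */
    }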
➢ The above figure shows a Solaris 10 system with two containers and the standard “global”
user space.
➢ Containers are much lighter weight than other virtualization methods. That is, they use
fewer system resources and are faster to instantiate and destroy, more similar to processes
than virtual machines. For this reason, they are becoming more commonly used, especially
in cloud computing.
➢ FreeBSD was perhaps the first operating system to include a container-like feature (called
"jails").
➢ Linux added the LXC container feature in 2014. It is now included in the common Linux
distributions via a flag in the clone() system call (a minimal sketch follows).
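➢ A minimal sketch of that building block on Linux: clone() with namespace flags gives
the child its own view of PIDs, mounts, and hostname. Actually creating the namespaces
requires privileges (CAP_SYS_ADMIN):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)

    static int child(void *arg)
    {
        (void)arg;
        /* Inside the new PID namespace this process sees itself as PID 1. */
        printf("in container: pid = %d\n", (int)getpid());
        return 0;
    }

    int main(void)
    {
        static char stack[STACK_SIZE];

        /* New PID, mount, and hostname (UTS) namespaces for the child.
         * The stack argument is the top of the buffer (x86 stacks grow down). */
        pid_t pid = clone(child, stack + STACK_SIZE,
                          CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | SIGCHLD,
                          NULL);
        if (pid == -1) {
            perror("clone");  /* typically EPERM without CAP_SYS_ADMIN */
            exit(EXIT_FAILURE);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }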
➢ Containers are also easy to automate and manage, leading to orchestration tools like
Docker and Kubernetes.
➢ Orchestration tools are means of automating and coordinating systems and services. Their
aim is to make it simple to run entire suites of distributed applications, just as operating
systems make it simple to run a single program.
5.5 VIRTUALIZATION AND OPERATING-SYSTEM COMPONENTS
➢ In this section, we take a deeper dive into the operating-system aspects of virtualization,
including how the VMM provides core operating-system functions like scheduling, I/O,
and memory management.
5.5.1 CPU Scheduling
➢ A system with virtualization, even a single-CPU system, frequently acts like a
multiprocessor system.
➢ The virtualization software presents one or more virtual CPUs to each of the virtual
machines running on the system and then schedules the use of the physical CPUs among
the virtual machines.
➢ The significant variations among virtualization technologies make it difficult to
summarize the effect of virtualization on scheduling.
❖ The VMM has a number of physical CPUs available and a number of threads to run
on those CPUs. The threads can be VMM threads or guest threads.
❖ Guests are configured with a certain number of virtual CPUs at creation time, and
that number can be adjusted throughout the life of the VM.
❖ When there are enough CPUs to allocate the requested number to each guest, the
VMM can treat the CPUs as dedicated and schedule only a given guest’s threads on
that guest’s CPUs. In this situation, the guests act much like native operating
systems running on native CPUs.
➢ The VMM itself needs some CPU cycles for guest management and I/O management
and can steal cycles from the guests by scheduling its threads across all of the system
CPUs, but the impact of this action is relatively minor.
➢ More difficult is the case of overcommitment, in which the guests are configured for
more CPUs than exist in the system. Here, a VMM can use standard scheduling
algorithms to make progress on each thread but can also add a fairness aspect to those
algorithms.
❖ For example, if there are six hardware CPUs and twelve guest-allocated CPUs,
the VMM can allocate CPU resources proportionally, giving each guest half of
the CPU resources it believes it has.
❖ The VMM can still present all twelve virtual CPUs to the guests, but in mapping
them onto physical CPUs, the VMM can use its scheduler to distribute them
appropriately.
➢ Within a virtual machine, the guest operating system receives only the CPU resources
that the virtualization system gives it. A 100-millisecond time slice may take much more
than 100 milliseconds of virtual CPU time.
➢ Depending on how busy the system is, the time slice may take a second or more,
resulting in very poor response times for users logged into that virtual machine. The
effect on a real-time operating system can be even more serious.
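➢ A toy C calculation of the scenario above (six physical CPUs, twelve virtual CPUs)
shows why the 100-millisecond slice stretches; this is accounting only, not a real
scheduler:

    #include <stdio.h>

    int main(void)
    {
        double physical_cpus = 6.0;
        double guest_vcpus   = 12.0;

        /* Fraction of one physical CPU each virtual CPU actually receives. */
        double share = physical_cpus / guest_vcpus;  /* 0.5 */

        printf("per-VCPU share: %.2f of a physical CPU\n", share);
        /* A guest's "100 ms" slice therefore needs at least 200 ms of real
         * time, and more when the system is busy. */
        printf("a 100 ms virtual slice needs >= %.0f ms of real time\n",
               100.0 / share);
        return 0;
    }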
5.5.2 Memory Management
➢ Efficient memory use in general-purpose operating systems is a major key to
performance.
➢ In virtualized environments, there are more users of memory (the guests and their
applications, as well as the VMM), leading to more pressure on memory use.
➢ Further adding to this pressure is the fact that VMMs typically overcommit memory,
so that the total memory allocated to guests exceeds the amount that physically exists
in the system.
➢ The extra need for efficient memory use is not lost on the implementers of VMMs, who
take extensive measures to ensure the optimal use of memory.
❖ For example, VMware ESX uses several methods of memory management.
Before memory optimization can occur, the VMM must establish how much
real memory each guest should use. To do that, the VMM first evaluates each
guest’s maximum memory size.
❖ Next, the VMM computes a target real-memory allocation for each guest based
on the configured memory for that guest and other factors, such as
overcommitment and system load.
5.5.2.1 Mechanisms to reclaim memory from the guests
➢ One approach is to provide double paging. Here, the VMM has its own page-
replacement algorithms and loads pages into a backing store that the guest believes is
physical memory.
➢ A common solution is for the VMM to install in each guest a pseudo– device driver or
kernel module that the VMM controls. (A pseudo–device driver uses device-driver
interfaces, appearing to the kernel to be a device driver, but does not actually control a
device. Rather, it is an easy way to add kernel-mode code without directly modifying
the kernel.) This balloon memory manager communicates with the VMM and is told to
allocate or deallocate memory. If told to allocate, it allocates memory and tells the
operating system to pin the allocated pages into physical memory.
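➢ The C sketch below captures the balloon idea. The guest-allocator, pinning, and
VMM-notification functions are hypothetical stand-ins for real kernel and hypervisor
interfaces:

    #include <stdlib.h>

    #define PAGE_SIZE   4096
    #define BALLOON_MAX 1024

    /* Stand-ins for guest-kernel and hypervisor interfaces. */
    static void *guest_alloc_page(void)    { return malloc(PAGE_SIZE); }
    static void  guest_free_page(void *pg) { free(pg); }
    static void  guest_pin_page(void *pg)  { (void)pg; /* would lock page in RAM */ }
    static void  vmm_report_page(void *pg) { (void)pg; /* would notify the VMM  */ }

    static void  *balloon[BALLOON_MAX];  /* pages currently held by the balloon */
    static size_t balloon_size;

    /* VMM asks the guest to give up n pages: allocate and pin them so the
     * guest OS frees memory elsewhere; the VMM reuses the backing frames. */
    static void balloon_inflate(size_t n)
    {
        while (n-- && balloon_size < BALLOON_MAX) {
            void *pg = guest_alloc_page();
            if (!pg)
                break;              /* guest itself is under memory pressure */
            guest_pin_page(pg);
            vmm_report_page(pg);
            balloon[balloon_size++] = pg;
        }
    }

    /* VMM returns memory: release balloon pages back to the guest OS. */
    static void balloon_deflate(size_t n)
    {
        while (n-- && balloon_size > 0)
            guest_free_page(balloon[--balloon_size]);
    }

    int main(void)
    {
        balloon_inflate(4);
        balloon_deflate(4);
        return 0;
    }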
➢ Another common method for reducing memory pressure is for the VMM to determine
if the same page has been loaded more than once. If this is the case, the VMM reduces
the number of copies of the page to one and maps the other users of the page to that one
copy.
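➢ A simplified C sketch of such content-based page sharing follows. Real VMMs use
stronger hash functions and remap the duplicate pages copy-on-write:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* FNV-1a, a simple well-known hash; production VMMs use stronger ones. */
    static uint64_t page_hash(const uint8_t *page)
    {
        uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < PAGE_SIZE; i++) {
            h ^= page[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Two pages may be collapsed to one copy only if they are identical:
     * hash first as a cheap filter, then memcmp to rule out collisions. */
    static int pages_mergeable(const uint8_t *a, const uint8_t *b)
    {
        return page_hash(a) == page_hash(b) && memcmp(a, b, PAGE_SIZE) == 0;
    }

    int main(void)
    {
        /* Two zero-filled pages: the classic win for page sharing. */
        static uint8_t a[PAGE_SIZE], b[PAGE_SIZE];
        printf("mergeable: %d\n", pages_mergeable(a, b));
        return 0;
    }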
5.5.3 I/O
➢ An operating system’s device-driver mechanism provides a uniform interface to the
operating system regardless of the I/O device.
➢ Device-driver interfaces are designed to allow third-party hardware manufacturers to
provide device drivers connecting their devices to the operating system.
➢ Virtualization takes advantage of this built-in flexibility by providing specific
virtualized devices to guest operating systems.
➢ VMMs vary greatly in how they provide I/O to their guests. I/O devices may be
dedicated to guests, for example, or the VMM may have device drivers onto which it
maps guest I/O.
➢ The VMM may also provide idealized device drivers to guests. In this case, the guest
sees an easy-to-control device, but in reality, that simple device driver communicates
to the VMM, which sends the requests to a more complicated real device through a
more complex real device driver.
➢ I/O in virtual environments is complicated and requires careful VMM design and
implementation.
➢ Hypervisor and hardware combinations can allow guests direct device access, improving I/O performance.
➢ With type 0 hypervisors that provide direct device access, guests can often run at the
same speed as native operating systems.
➢ With direct device access in type 1 and 2 hypervisors, performance can be similar to
that of native operating systems if certain hardware support is present.
➢ The hardware needs to provide DMA pass-through with facilities like VT-d, as well as
direct interrupt delivery (interrupts going directly to the guests).
➢ In addition to direct access, VMMs provide shared access to devices.
❖ Consider a disk drive to which multiple guests have access. The VMM must
provide protection while the device is being shared, assuring that a guest can
access only the blocks specified in the guest’s configuration.
➢ General-purpose operating systems typically have one Internet protocol (IP) address,
although they sometimes have more than one.
➢ With virtualization, each guest needs at least one IP address, because that is the guest’s
main mode of communication. Therefore, a server running a VMM may have dozens
of addresses, and the VMM acts as a virtual switch to route the network packets to the
addressed guests.
➢ The guests can be “directly” connected to the network by an IP address that is seen by
the broader network (this is known as bridging). Alternatively, the VMM can provide
a network address translation (NAT) address.
➢ The NAT address is local to the server on which the guest is running, and the VMM
provides routing between the broader network and the guest.
Storage Management
➢ Virtualized environments need to approach storage management differently than do
native operating systems.
➢ If multiple operating systems have been installed, what and where is the boot disk?
❖ The solution to this problem depends on the type of hypervisor. Type 0
hypervisors often allow root disk partitioning, partly because these systems tend
to run fewer guests than other systems.
❖ Alternatively, a disk manager may be part of the control partition, and that disk
manager may provide disk space (including boot disks) to the other partitions.
❖ Type 1 hypervisors store the guest root disk (and configuration information) in
one or more files in the file systems provided by the VMM.
❖ Type 2 hypervisors store the same information in the host operating system’s
file systems.
❖ A disk image, containing all of the contents of the root disk of the guest, is
contained in one file in the VMM.
➢ Moving a virtual machine from one system to another that runs the same VMM is as
simple as halting the guest, copying the image to the other system, and starting the guest
there.
➢ Guests sometimes need more disk space than is available in their root disk image.
❖ For example, a nonvirtualized database server might use several file systems
spread across many disks to store various parts of the database. Virtualizing
such a database usually involves creating several files and having the VMM
present those to the guest as disks. The guest then executes as usual, with the
VMM translating the disk I/O requests coming from the guest into file I/O
commands to the correct files.
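➢ As a sketch of that translation, the C program below backs a guest "disk" with an
ordinary file and turns a guest block read into a pread() on the image. The image file
name is invented:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLOCK_SIZE 512

    /* Translate a guest disk-block read into file I/O on the disk image. */
    static ssize_t guest_block_read(int image_fd, uint64_t block, void *buf)
    {
        return pread(image_fd, buf, BLOCK_SIZE, (off_t)(block * BLOCK_SIZE));
    }

    int main(void)
    {
        int fd = open("guest-root.img", O_RDONLY);  /* hypothetical image file */
        if (fd == -1) {
            perror("open");
            return 1;
        }

        char buf[BLOCK_SIZE];
        if (guest_block_read(fd, 0, buf) != BLOCK_SIZE)  /* guest's block 0 */
            perror("pread");
        close(fd);
        return 0;
    }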
➢ VMMs provide a mechanism to capture a physical system as it is currently configured
and convert it to a guest that the VMM can manage and run. This physical-to-virtual
(P-to-V) conversion reads the disk blocks of the physical system’s disks and stores them
in files on the VMM’s system or on shared storage that the VMM can access.
➢ VMMs also provide a virtual-to-physical (V-to-P) procedure for converting a guest to
a physical system. This procedure is sometimes needed for debugging.
5.5.4 Live Migration
➢ One feature not found in general-purpose operating systems but found in type 0 and
type 1 hypervisors is the live migration of a running guest from one system to another.
5.5.4.1 Working of Live Migration
➢ A running guest on one system is copied to another system running the same VMM.
➢ The copy occurs with so little interruption of service that users logged in to the guest,
as well as network connections to the guest, continue without noticeable impact.
➢ This rather astonishing ability is very powerful in resource management and hardware
administration.
➢ After all, compare it with the steps necessary without virtualization:
❖ We must warn users,
❖ shut down the processes,
❖ possibly move the binaries, and
❖ restart the processes on the new system.
➢ Only then can users access the services again.
➢ With live migration, we can decrease the load on an overloaded system or make
hardware or system changes with no discernable disruption for users.
5.5.4.2 The VMM migrates a guest via the following steps:
1. The source VMM establishes a connection with the target VMM and confirms that it is
allowed to send a guest.
2. The target creates a new guest by creating a new VCPU, a new nested page table, and
other state storage.
3. The source sends all read-only memory pages to the target.
4. The source sends all read-write pages to the target, marking them as clean.
5. The source repeats step 4, because during that step some pages were probably modified
by the guest and are now dirty. These pages need to be sent again and marked again as
clean.
6. When the cycle of steps 4 and 5 becomes very short, the source VMM freezes the guest,
sends the VCPU’s final state, other state details, and the final dirty pages, and tells the
target to start running the guest. Once the target acknowledges that the guest is running,
the source terminates the guest.
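➢ The steps above amount to a pre-copy loop. The C sketch below models it with toy
stand-ins for the VMM's page tracking and transport:

    #include <stdio.h>
    #include <stddef.h>

    #define DIRTY_THRESHOLD 64   /* "cycle becomes very short" */

    static size_t dirty = 100000;  /* toy model of the guest's dirty pages */

    static void send_all_pages(void) { puts("send all pages, marked clean"); }

    static size_t send_dirty_pages(void)
    {
        size_t sent = dirty;
        dirty /= 8;  /* toy: the guest re-dirties fewer pages each round */
        printf("resent %zu dirty pages\n", sent);
        return sent;
    }

    static void freeze_guest(void)           { puts("freeze guest"); }
    static void send_final_state(void)       { puts("send VCPU state + final dirty pages"); }
    static void start_guest_on_target(void)  { puts("target starts guest"); }
    static void terminate_source_guest(void) { puts("source terminates guest"); }

    int main(void)
    {
        send_all_pages();                         /* steps 3-4: full copy      */
        while (send_dirty_pages() > DIRTY_THRESHOLD)
            ;                                     /* step 5: iterate           */
        freeze_guest();                           /* step 6: brief stop-and-copy */
        send_final_state();
        start_guest_on_target();
        terminate_source_guest();
        return 0;
    }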
5.6.2 ANDROID
➢ The Android operating system was designed by the Open Handset Alliance (led
primarily by Google) and was developed for Android smartphones and tablet
computers.
➢ Whereas iOS is designed to run on Apple mobile devices and is closed-source, Android
runs on a variety of mobile platforms and is open-source.
Figure 5.13: Architecture of Google’s Android
➢ The structure of Android appears in Figure 5.13.
➢ Android is similar to iOS in that it is a layered stack of software that provides a rich set of
frameworks supporting graphics, audio, and hardware features.
➢ These features, in turn, provide a platform for developing mobile applications that run on
a multitude of Android-enabled devices.
➢ Software designers for Android devices develop applications in the Java language, but
they do not generally use the standard Java API.
➢ Google has designed a separate Android API for Java development. Java applications are
compiled into a form that can execute on the Android RunTime (ART), a virtual machine
designed for Android and optimized for mobile devices with limited memory and CPU
processing capabilities.
➢ Java programs are first compiled to a Java bytecode .class file and then translated into an
executable .dex file.
➢ Whereas many Java virtual machines perform just-in-time (JIT) compilation to improve
application efficiency, ART performs ahead-of-time (AOT) compilation. Here, .dex files
are compiled into native machine code when they are installed on a device, from which
they can execute on ART.
➢ AOT compilation allows more efficient application execution as well as reduced power
consumption, features that are crucial for mobile systems.
➢ Android developers can also write Java programs that use the Java Native Interface (JNI),
which allows developers to bypass the virtual machine and instead write Java
programs that can access specific hardware features.
➢ Programs written using JNI are generally not portable from one hardware device to
another.
➢ The set of native libraries available for Android applications includes frameworks for
developing web browsers (webkit), database support (SQLite), and network support, such
as secure sockets (SSLs).
➢ Because Android can run on an almost unlimited number of hardware devices, Google
has chosen to abstract the physical hardware through the hardware abstraction layer, or
HAL.
➢ By abstracting all hardware, such as the camera, GPS chip, and other sensors, the HAL
provides applications with a consistent view independent of specific hardware.
➢ This feature, of course, allows developers to write programs that are portable across
different hardware platforms.
➢ The standard C library used by Linux systems is the GNU C library (glibc). Google instead
developed the Bionic standard C library for Android.
➢ Not only does Bionic have a smaller memory footprint than glibc, but it also has been
designed for the slower CPUs that characterize mobile devices.
➢ At the bottom of Android’s software stack is the Linux kernel. Google has modified the
Linux kernel used in Android in a variety of areas to support the special needs of mobile
systems, such as power management.
➢ It has also made changes in memory management and allocation and has added a new form
of IPC known as Binder.
REVIEW QUESTIONS
PART-A
(2-Marks)
1. What is a virtual machine?
➢ A virtual machine (VM) is a virtual environment which functions as a virtual
computer system with its own CPU, memory, network interface, and storage,
created on a physical hardware system.
➢ A piece of software called a hypervisor, or virtual machine manager, lets you run
different operating systems on different virtual machines at the same time.
16. What are the system-call interfaces provided by Darwin’s layered system?
➢ Darwin provides two system-call interfaces: Mach system calls (known as traps)
and BSD system calls (which provide POSIX functionality).
17. List the fundamental operating-system services provided by iOS.
➢ Memory management,
➢ CPU scheduling,
➢ Interprocess communication (IPC) facilities such as message passing and remote
procedure calls (RPCs).
18. What is Android OS?
➢ The Android operating system was designed by the Open Handset Alliance (led
primarily by Google) and was developed for Android smartphones and tablet
computers.
➢ Whereas iOS is designed to run on Apple mobile devices and is closed-source,
Android runs on a variety of mobile platforms and is open-source.