Robotics
01 Kalab Solomon
● Robots
● Robot Hardware
● What kind of problem is robotics solving?
● Robotic Perception
02 Mohammed Awol
● Planning and Control
● Planning Uncertain Movements
● Reinforcement Learning in Robotics
03 Tigistu Shewangzaw
● Humans and Robots
● Alternative Robotic Frameworks
● Application Domains
INTRODUCTION
First of all: what is a robot?
Introduction
● Robots are physical agents that manipulate the physical world using
effectors like legs, wheels, joints, and grippers.
● Robots are equipped with sensors, such as cameras, radars, lasers,
and microphones, to perceive their environment and measure their
own state.
● Maximizing expected utility for a robot involves choosing the right
physical forces to exert through its effectors, aiming for changes in
state that accumulate maximum expected reward.
● Robots operate in partially observable and stochastic environments,
where predictions about the environment and people's behavior are
necessary.
● Robotics involves modeling the environment with continuous state and
action spaces, often in high-dimensional settings with complex joint
configurations.
Introduction…
● Robotic learning is constrained by real-time operation and the
risks associated with real-world trials.
● Transferring knowledge learned in simulation to real-world
robots is an active research area (sim-to-real problem).
● Prior knowledge about the robot, physical environment, and
tasks is essential for quick and safe learning in practical
robotic systems.
● Robotics encompasses various concepts such as probabilistic
state estimation, perception, planning, unsupervised learning,
reinforcement learning, and game theory.
Robot Hardware
Types of robots from the hardware
perspective
Manipulators: These are robot arms that can be
attached to a robot body or bolted onto a table
or floor. They vary in size and payload capacity,
from large industrial manipulator arms used in
car assembly to wheelchair-mountable arms
designed to assist people with motor
impairments.
Passive Sensors: Cameras are examples of passive sensors that observe the
environment by capturing visual signals. They provide information about
objects, colors, shapes, and other visual characteristics.
Active Sensors: Active sensors emit energy into the environment and
measure the reflected or returned signals. Sonar, which uses sound waves,
and structured light, which projects patterns onto a scene, are examples of
active sensors that provide depth and distance information.
Range Finders: These sensors measure the distance to nearby objects. Sonar
sensors emit sound waves and measure the time and intensity of the
returning signal to determine the distance. Stereo vision, which uses multiple
cameras to compute depth from parallax, is another range-finding technique.
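To make the parallax computation concrete, here is a minimal sketch under the standard pinhole stereo model, where depth is Z = f · B / d for focal length f, baseline B, and disparity d; the numeric values below are hypothetical.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo parallax under the pinhole model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 20 px disparity.
print(stereo_depth(700.0, 0.12, 20.0))  # -> 4.2 (meters)
```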
Location Sensors
Global Positioning System (GPS): GPS is commonly used for outdoor
localization. It works by measuring the distances to several satellites and
combining those range measurements (trilateration) to determine the robot's
absolute location on Earth.
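As a simplified illustration of that idea, the sketch below solves a 2-D range-based positioning problem by least squares. Real GPS works in three dimensions and must also estimate the receiver clock bias, so this is only a toy version with made-up anchor positions.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Least-squares 2-D position fix from ranges to known anchor points.

    Linearizes the range equations by subtracting the first one, then
    solves the resulting linear system A p = b for the position p.
    """
    x1, y1 = anchors[0]
    r1 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

# Hypothetical anchors and noise-free ranges to the true point (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
truth = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - truth, axis=1)
print(trilaterate(anchors, ranges))  # ≈ [3. 4.]
```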
Joints: Joints connect rigid bodies and enable movement. Revolute joints
allow rotation between two links, while prismatic joints enable sliding motion.
There are also multi-axis joints, such as spherical, cylindrical, and planar
joints.
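To make joint configurations concrete, here is a minimal forward-kinematics sketch for a planar arm with two revolute joints; the link lengths and joint angles are hypothetical.

```python
import math

def fk_two_link(theta1: float, theta2: float, l1: float = 1.0, l2: float = 1.0):
    """End-effector (x, y) of a planar arm with two revolute joints."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# First joint straight up, second joint bent 90 degrees back down.
print(fk_two_link(math.pi / 2, -math.pi / 2))  # -> (~1.0, ~1.0)
```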
Localization and mapping
The robot's pose is $\mathbf{X}_t = (x_t, y_t, \theta_t)^\top$, and localization maintains the belief $P(\mathbf{X}_{t+1} \mid \mathbf{z}_{1:t+1}, \mathbf{a}_{1:t})$: the posterior over the next pose given all measurements and actions so far.
Kinematic approximation: robot motion is modeled using translational and rotational velocities.
The robot's pose and map can be estimated using dynamic Bayes networks and sensor models.
Monte Carlo localization uses particle filtering for mobile robot localization (a minimal sketch follows below).
The EKF (extended Kalman filter) combines motion models and sensor measurements for localization.
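To make Monte Carlo localization concrete, here is a minimal sketch of one predict/update/resample cycle with a single range-to-landmark sensor. The kinematic motion model, the Gaussian noise levels, and the `mcl_step` interface are illustrative assumptions, not a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, weights, v, omega, z, landmark, dt=1.0,
             motion_noise=(0.05, 0.05, 0.02), range_noise=0.2):
    """One predict/update/resample cycle of Monte Carlo localization.

    particles: (N, 3) array of pose hypotheses (x, y, theta).
    v, omega:  commanded translational / rotational velocity.
    z:         measured range to a known landmark.
    """
    n = len(particles)
    # Predict: push every particle through the kinematic motion model,
    # with sampled noise so the particle cloud spreads realistically.
    theta = particles[:, 2]
    motion = np.column_stack([v * dt * np.cos(theta),
                              v * dt * np.sin(theta),
                              np.full(n, omega * dt)])
    particles = particles + motion + rng.normal(0.0, motion_noise, size=(n, 3))
    # Update: reweight each particle by the range-sensor likelihood.
    expected = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    weights = weights * np.exp(-0.5 * ((z - expected) / range_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to weight, then reset weights.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# Hypothetical usage: 1000 particles, landmark at (2, 1).
particles = rng.uniform(-1.0, 1.0, size=(1000, 3))
weights = np.full(1000, 1.0 / 1000)
particles, weights = mcl_step(particles, weights, v=0.4, omega=0.1,
                              z=1.8, landmark=np.array([2.0, 1.0]))
```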
Motion Planning and Control
The goal
❑ To find a sequence of actions that takes the robot from its current position to a desired goal position while avoiding obstacles and respecting constraints such as speed limits and collision avoidance.
The goal
❑ Enable robots to navigate through environments that are uncertain or unpredictable
❑ Improve the reliability and safety of robotic systems
❑ Enable robots to operate in a wider range of environments
Overall
❑ Improve the ability of robots to operate safely and effectively in real-world environments (a minimal planner sketch follows below).
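As one concrete instance of this, here is a minimal grid-based planner: A* search on a 4-connected occupancy grid. Real motion planners work in continuous, high-dimensional configuration spaces; the grid world, the Manhattan heuristic, and the example map are simplifying assumptions for illustration.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, cell)
    came_from, g = {start: None}, {start: 0}
    while frontier:
        _, cost, cur = heapq.heappop(frontier)
        if cur == goal:                        # reconstruct path by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return None

# Hypothetical 3x3 map with a wall through the middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the wall via column 2
```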
Reinforcement Learning (RL)
• A type of machine learning in which an agent learns to make a sequence of
decisions in an environment so as to maximize a cumulative reward signal (a
measure of how well the agent is performing the task it is trying to learn).
The agent interacts with the environment by taking actions and receives
feedback in the form of rewards or penalties based on the quality of those
actions.
• RL is well suited to controlling robots: instead of hand-coding a controller,
the robot can learn a decision-making policy from interaction and feedback
(a tabular sketch follows below).
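A minimal tabular Q-learning sketch of this idea, assuming a toy `env_step(s, a) -> (next_state, reward, done)` interface standing in for the robot's (simulated) environment; real robotic RL uses continuous states and actions and function approximation, so this is only an illustration.

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: learn action values Q(s, a) from interaction alone."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: explore occasionally, otherwise exploit.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r, done = env_step(s, a)
            # TD update toward the one-step bootstrapped return.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy corridor: states 0..4, action 1 moves right, action 0 moves left.
# Reaching state 4 yields reward 1 and ends the episode; each step costs 0.01.
def corridor(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), 4)
    return s2, (1.0 if s2 == 4 else -0.01), s2 == 4

Q = q_learning(corridor, n_states=5, n_actions=2)
print(max(range(2), key=lambda i: Q[0][i]))  # -> 1: learned to head right
```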
Robots are designed and built to assist humans in specific tasks, and they
can work with humans in various environments.
However, building robots that can work effectively with humans can be
challenging as it requires coordination and cooperation between the robot
and humans.
The reward function of the robot needs to align with the actions and goals of
the humans, and the robot must choose its actions in a way that meshes well
with those of the humans. Overall, humans and robots can work together in a
collaborative manner to achieve common goals.
Coordination
When building robots to work around humans, the coordination problem
arises: the robot's reward depends not only on its own actions but also on the
actions of the humans in the same environment.
Cooperation
Cooperation requires that robots and humans can communicate with each
other, trust each other's decision-making and actions, ensure safety, and
protect privacy and security.
To make the coordination problem tractable, the robot sometimes assumes that
the person is ignoring the robot, and models the person as noisily optimal with
respect to their own objective.
Splitting prediction from action makes it easier for the robot to handle
interaction with humans, but it can lead to a sacrifice in performance, much
like splitting estimation from motion or planning from control.
The previous figure shows the robot tracking a human's location: as the
human moves, the robot updates its belief over the human's goals. As the
human heads toward the windows, the robot increases the probability that
the goal is to look out the window, and decreases the probability that the
goal is the kitchen, which is in the other direction.
This is how the human's past actions end up informing the robot about what
the human will do in the future. Having a belief about the human's goal
helps the robot anticipate the human's next actions. The heatmap in the
figure shows the robot's future predictions: red is most probable, blue
least probable.
● Making predictions by assuming that people are noisily rational
given their goal: the robot uses past actions to update a belief over
which goal the person is heading to, and then uses that belief to
predict future actions (a sketch of this update follows below).
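The bullet above can be sketched as a Bayes update with a Boltzmann ("noisily rational") action likelihood. The two-goal setup, the per-action costs, and the rationality coefficient `beta` are hypothetical values chosen for illustration.

```python
import numpy as np

def update_goal_belief(belief, action_costs, beta=2.0):
    """One Bayes update over candidate goals given an observed action.

    belief:       prior probability of each candidate goal.
    action_costs: how much the observed action costs with respect to each
                  goal (small = the action makes progress toward that goal).
    Uses the Boltzmann likelihood P(action | goal) ∝ exp(-beta * cost).
    """
    likelihood = np.exp(-beta * np.asarray(action_costs))
    posterior = np.asarray(belief) * likelihood
    return posterior / posterior.sum()

# Two candidate goals: window vs. kitchen. Each observed step toward the
# window is cheap under the window goal and expensive under the kitchen goal.
belief = np.array([0.5, 0.5])
for _ in range(3):
    belief = update_goal_belief(belief, action_costs=[0.1, 1.0])
print(belief)  # mass shifts strongly toward the window goal
```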
Alternative robotic frameworks depart from the traditional rational-agent
approach and offer different ways of conceptualizing and designing robots.
Reactive controllers
A reactive controller maps sensor inputs directly to motor commands, without
maintaining an explicit world model or a deliberative plan.
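As a minimal sketch of the idea, the hypothetical controller below maps two range readings straight to motor commands; the thresholds and gains are illustrative assumptions.

```python
def reactive_control(left_range: float, right_range: float,
                     safe: float = 0.5) -> tuple[float, float]:
    """Map two range readings directly to (forward_speed, turn_rate).

    No world model and no plan: the behavior emerges entirely from this
    sensor-to-action coupling. Thresholds and gains are illustrative.
    """
    if min(left_range, right_range) < safe:
        # Obstacle close: stop advancing and turn away from the nearer side.
        return 0.0, 1.0 if left_range < right_range else -1.0
    # Clear ahead: drive forward, steering gently toward the more open side.
    return 0.5, 0.2 * (right_range - left_range)

print(reactive_control(0.3, 2.0))  # -> (0.0, 1.0): stop, turn away from the left obstacle
```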
Home care
Robots are being developed for personal use in homes to assist older
adults and people with motor impairments with daily tasks, enabling
them to live more independently. These robots are becoming more
autonomous and may soon be operated by brain-machine
interfaces.
Health care