Robotics

Robotics problems involve developing computational frameworks and techniques to solve nondeterministic, partially observable problems involving robots interacting in cooperative and competitive scenarios. Key aspects of robotics problem-solving include representing the environment with models like MDPs and POMDPs, designing reward functions that align with human objectives, defining appropriate action, state and observation spaces, and employing hierarchical planning to break problems into manageable levels of task, motion and control.

Uploaded by Nasis Dereje

Robotics

Let’s start our session

Right?
01 Kalab Solomon
● Robots
● Robot Hardware
● What kind of problem is robotics solving?
● Robotic Perception

02 Mohammed Awol
● Planning and Control
● Planning Uncertain Movements
● Reinforcement Learning in Robotics

03 Tigistu Shewangzaw
● Humans and Robots
● Alternative Robotic Frameworks
● Application Domains
INTRODUCTION
OK, so first of all:
What are robots?
Introduction
● Robots are physical agents that manipulate the physical world using
effectors like legs, wheels, joints, and grippers.
● Robots are equipped with sensors, such as cameras, radars, lasers,
and microphones, to perceive their environment and measure their
own state.
● Maximizing expected utility for a robot involves choosing the right
physical forces to exert through its effectors, aiming for changes in
state that accumulate maximum expected reward.
● Robots operate in partially observable and stochastic environments,
where predictions about the environment and people's behavior are
necessary.
● Robotics involves modeling the environment with continuous state and
action spaces, often in high-dimensional settings with complex joint
configurations.
Introduction…
● Robotic learning is constrained by real-time operation and the
risks associated with real-world trials.
● Transferring knowledge learned in simulation to real-world
robots is an active research area (sim-to-real problem).
● Prior knowledge about the robot, physical environment, and
tasks is essential for quick and safe learning in practical
robotic systems.
● Robotics encompasses various concepts such as probabilistic
state estimation, perception, planning, unsupervised learning,
reinforcement learning, and game theory.
Robot Hardware
Types of robots from a hardware perspective
Manipulators: These are robot arms that can be
attached to a robot body or bolted onto a table
or floor. They vary in size and payload capacity,
from large industrial manipulator arms used in
car assembly to wheelchair-mountable arms
designed to assist people with motor
impairments.

Mobile Robots: These robots use wheels, legs, or
rotors for movement. Examples include
quadcopter drones used for aerial surveillance,
autonomous cars for transportation, and rovers
for exploration in challenging environments like
Mars.

Legged Robots: These robots are designed to
traverse rough terrain that is inaccessible to
wheeled robots. While they offer advantages in
navigating uneven surfaces, controlling their leg
movements can be more challenging compared
to wheels.
Sensing the World
How do robots sense the world?

Passive Sensors: Cameras are examples of passive sensors that observe the
environment by capturing visual signals. They provide information about
objects, colors, shapes, and other visual characteristics.

Active Sensors: Active sensors emit energy into the environment and
measure the reflected or returned signals. Sonar, which uses sound waves,
and structured light, which projects patterns onto a scene, are examples of
active sensors that provide depth and distance information.

Range Finders: These sensors measure the distance to nearby objects. Sonar
sensors emit sound waves and measure the time and intensity of the
returning signal to determine the distance. Stereo vision, which uses multiple
cameras to compute depth from parallax, is another range-finding technique.
Location Sensors
Global Positioning System (GPS): GPS is commonly used for outdoor
localization. It works by measuring the distances to satellites and
triangulating the signals to determine the robot's absolute location on Earth.

Indoor Localization: In indoor environments where GPS signals are not
available, beacons placed at known locations or the analysis of wireless
signals from base stations can help robots determine their position.

Underwater Localization: Underwater robots rely on active sonar beacons
to estimate their location by measuring the relative distances to these
beacons.
Proprioceptive Sensors
Shaft Decoders: Shaft decoders are used to measure the precise
angular motion of robotic joints, providing information about joint
configuration and position.

Odometry: Odometry involves measuring the wheel revolutions of mobile
robots to estimate the distance traveled. However, odometry is prone to
errors due to wheel slippage and drift.

Inertial Sensors: Gyroscopes are examples of inertial sensors that
measure changes in velocity based on the resistance of mass to
acceleration. They help reduce positional uncertainty caused by
external forces.
Force and Torque Sensors
Force Sensors: These sensors enable robots to measure the
force exerted during gripping or contact with objects. They are
crucial for tasks that require delicate handling, preventing
excessive force that could damage objects.

Torque Sensors: Torque sensors measure the amount of force applied for
rotational motion or manipulation tasks. They help ensure precise control
and prevent excessive force that could lead to unintended consequences.
Producing Motion
Actuators: Actuators are mechanisms that initiate motion in effectors.
Electric actuators are commonly used and provide rotational motion, such as
in robot arm joints. Hydraulic actuators use pressurized fluid, while
pneumatic actuators use compressed air to generate mechanical motion.

Joints: Joints connect rigid bodies and enable movement. Revolute joints
allow rotation between two links, while prismatic joints enable sliding motion.
There are also multi-axis joints, such as spherical, cylindrical, and planar
joints.

Grippers: Grippers are used by robots to interact with objects in the
environment. Parallel jaw grippers, with two fingers and a single actuator,
are simple and commonly used. Three-fingered grippers offer more
flexibility, while humanoid hands with multiple actuators provide complex
manipulation capabilities.
Producing Motion
Actuator Types: Electric actuators are widely used and rely on
electricity to spin up a motor. Hydraulic actuators use pressurized
hydraulic fluid, while pneumatic actuators use compressed air. The
choice of actuator depends on the specific application and the
required type of motion.

Complex Grippers: Humanoid hands, such as the Shadow Dexterous
Hand, have numerous actuators (up to 20) that provide a high degree
of flexibility for intricate manipulation tasks. While they offer advanced
capabilities, controlling these complex grippers can be more challenging.
What kind of problem is robotics solving?

The kind of problem robots solve
In the context of robotics, the agent
software plays a crucial role in driving the
hardware to achieve desired goals. To
solve robotics problems, several
computational frameworks and
techniques are employed. Here are some
key points related to the software aspects
of robotics problem-solving:
Computational Framework: The choice of
computational framework depends on the nature
of the problem. Robotics problems are typically
characterized as nondeterministic, partially
observable, and multiagent. Depending on the
specific scenario, different frameworks can be
applied, such as MDPs (Markov Decision
Processes) for situations where the environment is
fully observable, POMDPs (Partially Observable
Markov Decision Processes) when there is partial
observability, and game theory for scenarios
involving multiple agents.
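For instance, the fully observable case can be sketched as a tiny MDP solved by value iteration. This is a toy example of my own construction, not from the slides: a one-dimensional corridor with a goal state and noisy movement.

```python
# Toy MDP sketch (invented example): a 4-state corridor, goal at state 3.
# Actions "left"/"right" succeed with prob 0.8 and stay put with prob 0.2,
# modeling the nondeterminism the slides mention. Solved by value iteration.
GAMMA = 0.9
STATES = [0, 1, 2, 3]
ACTIONS = ["left", "right"]

def transitions(s, a):
    """Return [(prob, next_state)] for a noisy move."""
    target = max(0, s - 1) if a == "left" else min(3, s + 1)
    return [(0.8, target), (0.2, s)]

def reward(s):
    return 1.0 if s == 3 else 0.0

V = {s: 0.0 for s in STATES}
for _ in range(100):  # value iteration to (near) convergence
    V = {s: reward(s) + GAMMA * max(
            sum(p * V[s2] for p, s2 in transitions(s, a)) for a in ACTIONS)
         for s in STATES}

# Extract the greedy policy from the converged values.
policy = {s: max(ACTIONS, key=lambda a: sum(p * V[s2] for p, s2 in transitions(s, a)))
          for s in STATES}
```

With partial observability the same idea becomes a POMDP, where the belief state replaces the state; the structure of the solution is analogous but far more expensive to compute.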
Cooperative and Competitive Scenarios: In
robotics, agents can engage in both
cooperative and competitive interactions. For
example, in a narrow corridor, a robot and a
person collaborate to avoid collisions. However,
in some cases, there might be competition to
reach goals efficiently. Understanding the
dynamics of cooperation and competition is
essential in formulating the problem and
designing effective strategies.
Reward Function: In many robotics
settings, the robot acts in service of a
human or a user, aiming to fulfill their
desires or objectives. The robot's reward
function is often designed to align with the
user's reward or utility function. Deciphering
the user's desires or preferences may require
the robot to gather information and adapt
its behavior accordingly.
Action, State, and Observation Spaces: The
representation of actions, states, and
observations in robotics can vary depending on
the specific application. In general, observations
are raw sensor data, such as images or laser hits,
actions correspond to control signals sent to
motors or actuators, and the state represents the
information necessary for decision-making.
Bridging the gap between low-level percepts and
high-level plans is a challenge, and roboticists
employ techniques to simplify and decouple
different aspects of the problem.
Hierarchical Planning: Robotics often
employs a hierarchical planning approach to
manage complexity. A three-level hierarchy is
commonly used, including task planning,
motion planning, and control. Task planning
involves defining high-level actions or subgoals,
motion planning focuses on finding paths to
achieve subgoals, and control deals with
executing the planned motion using actuators.
This hierarchical structure helps in managing
complex tasks and enables efficient planning
and control.
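The three levels above can be sketched as a pipeline. All function names and outputs here are invented stubs, just to show how the levels hand results to each other:

```python
# Hypothetical sketch of a three-level hierarchy: the task planner emits
# subgoals, the motion planner turns each subgoal into a waypoint path,
# and the controller turns the path into low-level motor commands.
def task_planner(goal):
    return ["approach_table", "grasp_cup"]        # high-level subgoals (stub)

def motion_planner(subgoal):
    return [(0, 0), (1, 1), (2, 1)]               # a stub waypoint path

def controller(path):
    return [("motor_cmd", wp) for wp in path]     # one command per waypoint

commands = []
for subgoal in task_planner("fetch the cup"):
    commands.extend(controller(motion_planner(subgoal)))
```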
Preference Learning and People
Prediction: To enhance the robot's
capabilities, preference learning is used to
estimate the end user's objectives or
preferences. People prediction techniques
are employed to forecast the actions of
other people in the robot's environment.
These aspects contribute to determining
the robot's behavior and decision-making
processes.
Integration of Components: While breaking
down the problem into separate components
reduces complexity, there is also a need for
integration to exploit the potential synergies
between different aspects. Action can influence
perception, and decisions at different levels
should be coherent and compatible.
Reintegrating perception, prediction, and action
can improve the overall performance and
enable a more closed feedback loop in the
robot's decision-making process.
Robotic Perception
Robotic Perception
● Perception is the process of mapping sensor measurements to internal
representations of the environment in robots.
● Robotics perception involves dealing with various sensors like cameras,
lidar, and tactile sensors.
● Challenges in perception include sensor noise, partial observability,
unpredictability, and dynamic environments.
● Good internal representations in robotics have three properties: they
contain enough information for decision-making, they are structured
for efficient updating, and they correspond to natural state variables in
the physical world.
Robotic Perception
● Techniques like Kalman filters, HMMs, and dynamic Bayes nets
can represent transition and sensor models in partially
observable environments.
● The belief state, which is the posterior probability distribution
over environment state variables, is updated using past
observations and actions.
● The recursive filtering equation is modified to use integration
for continuous variables.
● The posterior estimation involves conditioning on both actions
and observations.
● The transition model (motion model) and sensor model are
used to calculate the posterior probability distribution.
We would like to compute the new belief state, P(Xt+1 | z1:t+1, a1:t), from
the current belief state, P(Xt | z1:t, a1:t−1), and the new observation zt+1.
There are two differences here: we condition on the actions as well as the
observations, and we deal with continuous rather than discrete variables.
Thus, we modify the recursive filtering equation to use integration rather
than summation:

P(Xt+1 | z1:t+1, a1:t) = α P(zt+1 | Xt+1) ∫ P(Xt+1 | xt, at) P(xt | z1:t, a1:t−1) dxt
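A discretized sketch of this update, where grid cells replace the integral with a sum. The toy three-cell world and its motion and sensor models are invented for illustration:

```python
# One Bayes-filter step on a discrete state grid (invented toy models).
# belief[i]               = P(x_t = i | z_1:t, a_1:t-1)
# motion_model[a][i][j]   = P(x_t+1 = j | x_t = i, a_t = a)
# sensor_model[z][j]      = P(z_t+1 = z | x_t+1 = j)
def bayes_filter_step(belief, action, observation, motion_model, sensor_model):
    n = len(belief)
    predicted = [sum(belief[i] * motion_model[action][i][j] for i in range(n))
                 for j in range(n)]                      # the integral, as a sum
    updated = [sensor_model[observation][j] * predicted[j] for j in range(n)]
    alpha = 1.0 / sum(updated)                           # the normalizer α
    return [alpha * u for u in updated]

# Toy 3-cell world: action "move" shifts one cell right with prob 0.9.
motion = {"move": [[0.1, 0.9, 0.0],
                   [0.0, 0.1, 0.9],
                   [0.0, 0.0, 1.0]]}
sensor = {"wall": [0.1, 0.1, 0.9]}     # "wall" is mostly observed in cell 2

b0 = [1.0, 0.0, 0.0]                   # start: certain we are in cell 0
b1 = bayes_filter_step(b0, "move", "wall", motion, sensor)
```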
Localization and mapping
Localization: Finding the robot's position
and constructing a map

We use this method:
● Finding the robot's position in a two-dimensional environment with a
known map.
● Tracking the robot's pose using Cartesian coordinates (x, y) and
heading θ.
● Example: Mobile robot in a flat 2D world with an exact map of the
environment.

Xt = (xt, yt, θt)T
Localization and mapping
Kinematic Approximation: Modeling robot
motion using translational and rotational
velocities

● Modeling robot motion with instantaneous translational and
rotational velocities.
● Approximating robot motion over small time intervals using a
deterministic model.
● Example: Robot's motion given by f(Xt, vt, ωt) = Xt + [vt∆t cosθt, vt∆t sinθt, ωt∆t].
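The deterministic kinematic approximation above, written out as code (a sketch; the variable names are mine). The state is (x, y, θ), and the inputs are translational velocity v and rotational velocity ω over a small interval ∆t:

```python
import math

def step(state, v, omega, dt):
    """One deterministic kinematic step: X_t+1 = f(X_t, v_t, ω_t)."""
    x, y, theta = state
    return (x + v * dt * math.cos(theta),
            y + v * dt * math.sin(theta),
            theta + omega * dt)

# Drive straight along the x-axis for one second at 1 m/s:
pose = step((0.0, 0.0, 0.0), v=1.0, omega=0.0, dt=1.0)
```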
Localization and mapping
Sensor Models: Landmark-based and Range-scan sensor models
with Gaussian noise assumptions

Landmark-based Sensor Model:

Predicting range and bearing to known landmarks using geometry.

Accounting for Gaussian noise in measurements.

Range-scan Sensor Model:

Using a sensor array of range sensors with fixed bearings.

Computing range values corrupted by Gaussian noise.

Examples: Landmark range and bearing prediction, range-scan
measurements.
Localization and mapping
Monte Carlo Localization (MCL): Particle filtering
algorithm for estimating robot position

● Representing the belief state using a collection of particles.
● Updating particle weights based on sensor measurements and
resampling particles.
● Example: MCL algorithm using the range-scan sensor model.
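A schematic particle-filter step in the spirit of MCL. This is a sketch: the 1-D corridor, the single landmark, the noise levels, and the Gaussian sensor model are all invented for the demo.

```python
import math
import random

random.seed(0)

def mcl_step(particles, v, dt, z, landmark=10.0, sigma=0.5):
    # 1. Motion update: propagate each particle, adding motion noise.
    moved = [p + v * dt + random.gauss(0.0, 0.1) for p in particles]
    # 2. Weight each particle by the likelihood of the range reading z
    #    to a landmark at x = landmark (Gaussian sensor model).
    weights = [math.exp(-(((landmark - p) - z) ** 2) / (2 * sigma ** 2))
               for p in moved]
    # 3. Resample particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pos = 0.0
for _ in range(10):
    true_pos += 0.5                  # the robot actually moves 0.5 m per step
    z = 10.0 - true_pos              # noiseless range reading, for the demo
    particles = mcl_step(particles, v=0.5, dt=1.0, z=z)

estimate = sum(particles) / len(particles)   # posterior mean ≈ true position
```

As the slides say, the particle cloud starts spread over the whole corridor and collapses around the true position as measurements arrive.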
Localization and mapping
Kalman Filter: Gaussian-based filtering algorithm,
effective for linear models (linearization required for
nonlinear models)

● Representing the belief state as a multivariate Gaussian.
● Linearizing motion and sensor models using the extended
Kalman filter (EKF).
● Example: Kalman filter-based localization with linear motion and
measurement models.
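A one-dimensional Kalman filter sketch, with toy noise values of my own choosing, showing the predict/correct cycle in the simplest case where the belief is a single Gaussian (mean mu, variance var):

```python
def kalman_step(mu, var, u, z, motion_var=0.2, sensor_var=0.5):
    # Predict: x' = x + u; uncertainty grows by the motion noise.
    mu_pred, var_pred = mu + u, var + motion_var
    # Correct: blend prediction and measurement by the Kalman gain k.
    k = var_pred / (var_pred + sensor_var)
    mu_new = mu_pred + k * (z - mu_pred)
    var_new = (1 - k) * var_pred      # uncertainty shrinks after the update
    return mu_new, var_new

mu, var = 0.0, 1.0                    # initial belief
mu, var = kalman_step(mu, var, u=1.0, z=1.2)
```

The EKF handles nonlinear motion and sensor models by linearizing them around the current mean, as noted above.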
Localization and mapping
Simultaneous Localization and Mapping (SLAM):
Solving the problem of building a map while
localizing the robot.
Techniques include EKF with augmented state
space, graph relaxation methods, and
expectation-maximization.
Example: SLAM algorithm integrating localization
and mapping tasks.
Other Types of Perception
In addition to localization and mapping, robots perceive temperature, odors,
sound, and more. These can be estimated using dynamic Bayes networks and
sensor models.

Reactive Agents and Probabilistic Approaches

Robots can be programmed as reactive agents without probabilistic reasoning.
Probabilistic techniques excel in challenging perceptual problems like
localization and mapping.

Monte Carlo Localization

Monte Carlo localization uses particle filtering for mobile robot localization.
It improves position estimation as the robot navigates and receives sensor
measurements.
Other Types of Perception…
Extended Kalman Filter (EKF) Localization

The EKF combines motion models and sensor measurements for localization.
It reduces uncertainty by incorporating landmark observations.

Choosing the Right Approach

Probabilistic techniques excel in challenging perceptual problems.
Simpler solutions may be effective in practice.
Experience with real physical robots guides the choice of approach.

Planning &
Control
• Configuration Space
• Motion Planning


Planning

involves determining what the robot needs to do to achieve its goal.

Control

involves executing the planned actions to move the robot towards its goal.


Configuration Space
❑ Configuration space, also known as C-space or the space of
configurations, is a mathematical abstraction that represents all
possible configurations of a robot.

❑ Set of all possible configurations of a robot, where a configuration is
defined by the position and orientation of each of the robot's joints.
Cont...
C-space can be thought of as a high-dimensional space, where each
dimension corresponds to a degree of freedom of the robot.
Example:
a simple two-joint robot arm has a 2D C-space, while a more complex
robot with many joints might have a much higher-dimensional C-space.
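As an illustration of the C-space idea (an invented example, not from the slides): a configuration of a two-joint planar arm is the pair (θ1, θ2), and forward kinematics maps that C-space point to the gripper's position in the workspace.

```python
import math

L1, L2 = 1.0, 1.0     # link lengths, assumed for the example

def forward_kinematics(theta1, theta2):
    """Map a point (θ1, θ2) in the 2-D C-space to the end-effector position."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

# Arm fully stretched along the x-axis:
tip = forward_kinematics(0.0, 0.0)
```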
Motion Planning
❑ is the process of determining a sequence of motions that will enable a
robot to achieve a specific task or goal.

The goal
❑ to find a sequence of actions that will take the robot from its current
position to a desired goal position, while avoiding obstacles and
adhering to constraints such as speed limits and collision avoidance.


cont...
Applications
mobile robots
manipulators
drones
autonomous vehicles
warehouse automation


Planning Uncertain Movement
❑ the process of developing strategies and algorithms that can handle
uncertainty in the robot's environment and the effects of its actions.


Cont...

The goal
enable robots to navigate through environments that are uncertain or unpredictable
improve the reliability and safety of robotic systems
enable robots to operate in a wider range of environments
Overall
improve the ability of robots to operate safely and effectively in real-world environments


Cont...
Uncertainty
● the real world is complex, dynamic (constantly changing), and full of
unpredictable events (uncertain outcomes).
Arises from
partial observability of the environment
noise in sensor measurements
actuator variability
modeling errors
unpredictable external events
stochastic effects of the robot's own actions
Sensor noise and errors
❑Sensors used to measure the robot's environment can produce
noisy or uncertain measurements due to factors such as signal
interference, environmental factors, or imprecise sensing
technology.



Actuator variability
❑ The actuation system of a robot, including motors, gears,
and other mechanical components, can exhibit variability
or uncertainty due to friction, wear and tear, or other
factors.



Modeling errors

❑ Even with accurate sensors and actuators, models of the robot's
environment and dynamics can still exhibit uncertainty, as the real
world is inherently complex and difficult to model accurately.


Unpredictable external events

• The environment in which a robot operates can be highly variable and
unpredictable, with factors such as weather, lighting conditions, and
other environmental factors affecting the robot's performance.


Stochastic effects of the robot's actions

❑ The actions of a robot can have unpredictable effects due to factors
such as the stochastic behavior of sensors and actuators or the
inherent uncertainty in the dynamics of the robot's motion.


How to handle uncertainty
• Policies and online replanning: new plan on-the-fly as the
robot is operating in the environment.
• Information gathering actions: more information about
the environment, even if those actions are not directly
related to the current task at hand.
• Model Predictive Control (MPC): involves planning for
a shorter time horizon, typically a few seconds into the
future because the robot may receive new information
from its sensors, or may end up in a different state than
expected due to uncertainty in the environment or its
own actions.
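A bare-bones receding-horizon loop illustrating the MPC idea: at every step the robot searches over short action sequences, applies only the first action, and replans from whatever state it actually ends up in. The 1-D point robot, action set, and cost here are invented for the demo.

```python
import itertools

ACTIONS = [-1, 0, 1]          # move left, stay, move right
GOAL = 5

def plan(state, horizon=3):
    """Pick the action sequence minimizing distance-to-goal cost over a
    short horizon, and return only its first action (receding horizon)."""
    best = min(itertools.product(ACTIONS, repeat=horizon),
               key=lambda seq: sum(abs(state + sum(seq[:i + 1]) - GOAL)
                                   for i in range(horizon)))
    return best[0]

state = 0
for _ in range(8):
    a = plan(state)           # replan from the latest state every step
    state += a                # on a real robot, state would come from sensors
```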



Reinforcement Learning(RL) in Robotics

RL
• a type of machine learning that involves an agent learning to make a
sequence of decisions in an environment to maximize a cumulative
reward signal(how well the agent is performing the task it is trying to
learn). The agent interacts with the environment by taking actions, and
receives feedback in the form of rewards or penalties based on the
quality of its actions.
• well-suited for controlling robots. In RL, an agent learns to make
decisions by interacting with an environment and receiving feedback in
the form of rewards or penalties.



Cont...

● a type of machine learning that uses trial-and-error learning to enable an
agent, such as a robot, to learn to maximize its cumulative reward over
time by interacting with an environment.
RL
⮚ can be used to train a robot to perform a task, such as walking, grasping
objects, or navigating a maze.
⮚ can also be used to optimize the behavior of a robot controller: by
formulating the control problem as a reinforcement learning problem, the
controller can learn to adapt to changing conditions and optimize its
performance over time.
⮚ enables robots to learn complex behaviors and adapt to changing
environments without explicitly programming every possible scenario.
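One classic RL algorithm, tabular Q-learning, sketched on an invented corridor task (state 3 is the goal) to show the trial-and-error loop described above. All constants and the environment are my own toy choices.

```python
import random

random.seed(1)
ALPHA, GAMMA, EPS = 0.3, 0.9, 0.5
Q = {(s, a): 0.0 for s in range(4) for a in (-1, 1)}

def env_step(s, a):
    """Deterministic corridor: move left/right; reward 1 on reaching state 3."""
    s2 = min(3, max(0, s + a))
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

for _ in range(300):                          # training episodes
    s = random.randint(0, 2)                  # random non-goal start state
    for _ in range(20):                       # cap on episode length
        if random.random() < EPS:             # ε-greedy exploration
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda x: Q[(s, x)])
        s2, r, done = env_step(s, a)
        target = r if done else r + GAMMA * max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])   # temporal-difference update
        s = s2
        if done:
            break

policy = {s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(4)}
```

The learned greedy policy heads right toward the goal, without the goal ever being programmed in explicitly; only the reward signal was given.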



Humans and Robots
Humans and robots are both entities that can interact with each other in
various ways.

Robots are designed and built to assist humans in specific tasks, and they
can work with humans in various environments.

However, building robots that can work effectively with humans can be
challenging as it requires coordination and cooperation between the robot
and humans.

The reward function of the robot needs to align with the actions and goals of
the humans, and the robot must choose its actions in a way that meshes well
with those of the humans. Overall, humans and robots can work together in a
collaborative manner to achieve common goals.
Coordination
When building robots to work around humans, the coordination problem
arises - the robot's reward depends not only on its own actions but also on the
actions of the humans in the same environment.

Cooperation
Cooperation requires that robots and humans can communicate with each
other, trust each other's decision-making and actions, ensure safety, and
protect privacy and security.

The development of cooperative robotics requires the integration of many
different technologies, including perception, planning, control, and
communication.
To achieve cooperation, robots must be designed to understand human
language and gestures, and humans must be able to understand the robot's
behaviour and capabilities.

Humans as approximately rational agents

Humans are considered approximately rational agents, meaning they tend
to make decisions that are consistent with rationality and optimize their
outcomes given the information available to them.
Predicting human action
Predicting human actions is challenging as they are influenced by the actions
of the robot, and vice versa

To overcome this, robots sometimes assume that the person is ignoring the
robot and model the person as noisily optimal with respect to their objective.

Splitting prediction from action makes it easier for the robot to handle
interaction with humans, but it can lead to a sacrifice in performance, much
like splitting estimation from motion or planning from control.
The previous picture shows: the robot is tracking a human’s
location and as the human moves, the robot updates its belief over
human goals. As the human heads toward the windows, the robot
increases the probability that the goal is to look out the window, and
decreases the probability that the goal is going to the kitchen, which
is in the other direction.

This is how the human’s past actions end up informing the robot
about what the human will do in the future. Having a belief about the
human’s goal helps the robot anticipate what next actions the
human will take. The heatmap in the figure shows the robot’s future
predictions: red is most probable; blue least probable.
● Making predictions by assuming that people are noisily rational
given their goal: the robot uses the past actions to update a
belief over what goal the person is heading to, and then uses
the belief to make predictions about future actions.

● (a) The map of a room.
● (b) Predictions after seeing a small part of the person’s trajectory
(white path).
● (c) Predictions after seeing more human actions: the robot now
knows that the person is not heading to the hallway on the left,
because the path taken so far would be a poor path if that were
the person’s goal.
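A toy version of this "noisily rational" goal inference, as a sketch of my own construction (the two goals, the straight-line path, and the rationality parameter β are invented): each observed step updates a belief over candidate goals, weighting goals under which the step looks efficient.

```python
import math

GOALS = {"window": (10, 0), "kitchen": (-10, 0)}

def step_likelihood(pos, nxt, goal, beta=1.0):
    """A step is more likely the more it reduces distance to the goal
    (a softmax-style noisily-rational model)."""
    def dist(p, g):
        return math.hypot(p[0] - g[0], p[1] - g[1])
    return math.exp(beta * (dist(pos, goal) - dist(nxt, goal)))

belief = {g: 0.5 for g in GOALS}            # uniform prior over goals
path = [(0, 0), (1, 0), (2, 0), (3, 0)]     # human walks toward the window
for pos, nxt in zip(path, path[1:]):
    for name, g in GOALS.items():           # Bayesian update per observed step
        belief[name] *= step_likelihood(pos, nxt, g)
    total = sum(belief.values())
    belief = {name: b / total for name, b in belief.items()}
```

After a few steps toward the window, the belief concentrates on the window goal and away from the kitchen, just as in the figure's description.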
Human predictions about the robot
Human predictions about the robot refer to how humans anticipate
the behaviour and actions of robots in various situations. These
predictions can be influenced by factors such as the robot's
appearance, behaviour, and past interactions with humans.
Incomplete information in human-robot interactions can be two-sided,
meaning that the robot does not know the human's objective, and the
human does not know the robot's objective.

Humans as black box agents

Humans can also be modeled as black box agents, where the robot does
not need to treat humans as objective-driven, intentional agents to
coordinate with them. In this model, the human policy, πH, is treated as
a black box that affects the environment dynamics.
Learning to do what humans want
In robotics, the problem of generating good behaviour is reduced to specifying a good
reward or cost function. However, getting the cost function right is challenging,
especially for autonomous cars where the designer needs to balance different
components such as safety, comfort, and obeying traffic laws. As end users have
different preferences, it is difficult to design a cost function that satisfies everyone's
needs.

To address this challenge, two alternatives are explored.
The first is to learn the cost function from human input, where the robot can
learn from human feedback and preferences to adjust its cost function.
The second is to bypass the cost function and imitate human demonstrations of
the task, where the robot learns by observing humans perform the task and
imitating their actions. These approaches can help ensure that the robot's
behaviour matches what humans actually want it to do.
Learning policies directly via imitation
This approach involves collecting data on human demonstrations of the task and using this
data to train a policy that can perform the task autonomously.
It is a powerful approach as it allows the robot to learn from human expertise and perform the
task in a way that aligns with human behaviour.
A challenge with this approach is generalization to new states. The robot does
not know why the actions in its database have been marked as optimal and has
no causal rule to follow. It can only run a supervised learning algorithm to try
to learn a policy that will generalize to unknown states, but there is no
guarantee that the generalization will be correct.

A human teacher pushes the robot down to teach it to stay closer to the
table. The robot appropriately updates its understanding of the desired
cost function and starts optimizing it.
Alternative Robotic Frameworks

Alternative robotic frameworks depart from the traditional rational agent
approach and offer different ways of conceptualizing and designing robots.
Reactive controllers

Reactive controllers are a type of controller used in robotics that generate
actions in response to immediate sensory input. They are designed to be
fast, efficient, and able to respond quickly to changes in the environment.
Reactive controllers operate without a model of the environment or any
internal representation of the robot's state. Instead, they rely on direct
sensor input to generate actions.
Subsumption architectures
The subsumption architecture is a framework for creating reactive
controllers for robots using finite state machines. These machines
can contain tests for sensor variables, timed arcs, and messages
sent to the robot's motors or other machines.

The subsumption architecture allows the composer to build complex
controllers from simpler ones in a bottom-up fashion.

However, this approach has limitations, including the need for reliable
sensor data and the difficulty of encoding complex policies explicitly.

Despite these limitations, the subsumption architecture remains a
popular approach to robotics, and there are ongoing debates about the
best approach to robotics.
Application Domains
Robotic technology is already permeating our world, and has the potential
to improve our independence, health, and productivity.

Robots have a wide range of application domains, and their use is
increasing rapidly due to advancements in technology and increasing
demand for automation.

Home care
Robots are being developed for personal use in homes to assist older
adults and people with motor impairments with daily tasks, enabling
them to live more independently. These robots are becoming more
autonomous and may soon be operated by brain-machine
interfaces.
Health care

Robots are used in healthcare for tasks such as surgery, rehabilitation,
and patient care. They can assist surgeons in performing complex
procedures with greater precision, and they can help patients with
mobility and rehabilitation.
Services
Robots are used in service and personal assistance for tasks such as
cleaning, cooking, and companionship. They can assist people with
disabilities or elderly individuals who require assistance with daily tasks.
Entertainment
Robots are used in entertainment for tasks such as theme park rides,
animatronics, and interactive exhibits. They can provide immersive
experiences for visitors and can be programmed to perform a wide range
of movements and actions.
Exploration and hazardous environments
Robots are used in exploration for tasks such as space exploration, deep-
sea exploration, and archaeological expeditions. They can be used to
gather data and samples from environments that are difficult or
impossible for humans to reach.
Summary
Robotics involves physically embodied agents that interact with the physical
world. The most common types of robots are manipulators and mobile robots,
which have sensors and actuators to perceive and affect the world.
The robotics problem involves stochasticity, partial observability, and
interacting with other agents, which can be addressed using techniques such as
MDPs, POMDPs, and game theory.
Perception and action are typically decoupled in robotics, with perception
involving computer vision, localization, and mapping. Probabilistic filtering
algorithms such as particle filters and Kalman filters are useful for robot
perception. Motion generation involves configuration spaces, motion planning,
and trajectory tracking control.
CREDITS: This presentation template was
created by Slidesgo, including icons by
Flaticon, infographics & images by Freepik
and illustrations by Stories
