
37th International Symposium on Automation and Robotics in Construction (ISARC 2020)

Real-Time Process-Level Digital Twin for Collaborative Human-Robot Construction Work

X. Wang𝑎, C. J. Liang𝑎, C. C. Menassa𝑎,𝑏, and V. R. Kamat𝑎,𝑏

𝑎Department of Civil and Environmental Engineering, University of Michigan, USA
𝑏Robotics Institute, University of Michigan, USA

wangix@umich.edu, cjliang@umich.edu, menassa@umich.edu, vkamat@umich.edu

Abstract - Widespread use of autonomous robots in on-site construction has been limited because it is impractical to preprogram robots to perform quasi-repetitive tasks, owing to the relatively loose work tolerances and the deviations of as-built work from the project design. Robotization of field construction work must thus be conceived as a collaborative human-robot endeavor capable of planning and improvising during the performance of construction tasks. Although humans can control robot motion through teleoperation, this is often impractical due to the range of a robot's motion and the safety issues arising from heavy or large construction materials. An intuitive and safe bi-directional interface is thus needed to enable construction robots to seamlessly interact and partner with human co-workers. This paper proposes a framework that allows human-robot interaction and collaboration within a real-time, process-level, immersive virtual reality (VR) digital twin that is created by combining the as-designed BIM model and the evolving as-built workspace geometry obtained from on-site sensors. Humans can use the digital twin to remotely demonstrate a task plan to the robot. The robot interprets the communicated objectives and plans its motion to complete the task; the plan is communicated back through the system for human evaluation and approval before the robot executes it. A case study involving imperfect rough carpentry (i.e., stud framing) and a 6DOF KUKA drywall-installing robot arm is conducted to demonstrate and evaluate the digital twin system.

Keywords - Improvisation; Digital twin; Virtual reality; Human-robot interaction

1 Introduction

The construction industry is one of the largest sectors of the economy, accounting for up to 13% of GDP worldwide [1]. However, as one of the most labor-intensive industries, the construction industry suffers from a shortage and aging of the labor force [2, 3]. On one hand, the construction site is unstructured and dynamic; on the other hand, construction work imposes considerable physical demands on workers. These facts lead to high fatality and injury rates among construction workers [4, 5]. In addition, the productivity of the construction industry has barely increased over the past few decades [6]. More recently, the outbreak of the Covid-19 pandemic has caused serious economic impact and schedule delays on construction projects, since it is hard to maintain social distancing while working in close proximity on construction sites [7]. This has highlighted the need for construction techniques that allow workers to perform tasks remotely, reducing the number of on-site workers or keeping them physically separated while on site.

Robots can manipulate heavy objects and could potentially relieve construction workers from excessive physical demand, alleviate the labor shortage, increase productivity, and promote remote construction. Although robots have already boosted the productivity of several industries, some attributes of the construction industry inhibit the wide application of construction robots [8]. First, the unique and static nature of the construction product requires robots to move to the workspace, accurately localize themselves, and conduct a series of different actions on the product [9, 10]. Second, the unstructured construction site limits the workspace of the robot and adds to the difficulty of robot motion planning and localization [11, 12]. Third, the moving workers, components, and construction equipment require robots to comprehensively perceive the environment and respond quickly [13].

In addition, construction work has relatively loose tolerances [14, 15]. The evolving as-built structure and some construction materials may deviate from the designed geometry, which requires high-level task plans to be adjusted accordingly [16]. Although recent developments in artificial intelligence algorithms allow robots to be programmed with adaptability, it is not cost-effective or practical to equip and program construction robots with perception and adaptivity high enough to cope with all potential issues on construction sites [17]. Human-robot collaboration (HRC) combines human beings' cognitive ability with robots' competency in power, speed, and accuracy, and has thus become a promising solution for robotizing construction work.


Several HRC methods have been adopted in the construction industry. An intuitive method for collaborative human-robot construction is to lead the robot by directly applying forces to the robot or to the object it carries through physical contact, as with the MULE135 (Material Unit Lift Enhancer) [18] and a curtain wall installation robot [13]. This relieves construction workers from high physical stress while retaining their operational agility. However, it still requires human workers to be present alongside the robot. To support performing construction work remotely, several teleoperation techniques have been proposed for construction robotics, such as joysticks [19], haptic devices [20], wearable sensors [21], and vision detection systems [22]. Although teleoperation can protect workers from potential dangers on-site, operating robots with multiple degrees of freedom (DOFs) requires expertise: the robot moves at the same time as the human operates it, and the human must work out and lead the robot through the full manipulation path. There are also safety issues caused by limited perception of the working environment [13]. Recently, the emergence of commercial head-mounted devices has promoted the application of immersive virtual reality (VR), augmented reality (AR), and mixed reality (MR) in HRC. For example, VR has been used to study worker reactions while sharing workspaces with robots, and AR has been used to give workers instructions for cooperating with robots [23, 24]. Therefore, a safe and intuitive HRC interface for construction robots that takes advantage of immersive technologies and allows remote operation is proposed.

The objective of this paper is to propose a real-time, process-level, immersive VR digital twin for intuitive and remote human-robot collaborative construction work. The human worker performs high-level decision making and supervision in an immersive VR digital twin of the construction site. The robot is responsible for detailed motion planning and task execution on-site. The detailed motion plan and robot status information are visualized in VR for human approval before actual execution. A case study involving imperfect rough carpentry (i.e., stud framing) and a 6DOF KUKA drywall-installing robot arm is conducted to demonstrate and evaluate the digital twin system. The construction site and robot arm are emulated in the Gazebo simulator, which allows rapid prototyping of robotic tasks and direct subsequent transfer of the methods to the corresponding real robotic platforms [25].

2 Collaborative Human-Robot Construction System

Figure 1 gives an overview of the proposed collaborative human-robot construction framework. The human worker interacts with the robot through an immersive VR interface developed in Unity3D. The Oculus Rift S VR headset and the Oculus Touch controllers are used to create the VR experience. The immersive VR interface is connected to the robot operation environment (i.e., the construction site environment in which the robot performs the task) via the Robot Operating System (ROS) as the computational core. The computational core is responsible for computation and data processing; it also acts as the communication channel between the human and the robot. In this section, the immersive VR interface and the computational core are discussed in detail. The operation environment is discussed later in the case study.

2.1 Immersive VR interface

2.1.1 Immersive VR environment

The immersive VR environment is the digital twin of the construction environment. There are two common methods of developing the VR model of a construction site. One is to use a 3D CAD model, such as the Building Information Modeling (BIM) model [26]. It is fast and convenient to load as a VR scene, but it cannot reflect the actual construction site environment, since the built structure may deviate from the design and obstacles may be stacked on-site during construction. The other is to construct point clouds from laser scanners or RGBD cameras [27]. However, constructing the point cloud of an entire construction site and using it in VR takes significant computational resources. Therefore, this research uses a combination of the as-designed BIM model and as-built point clouds of the workspace obtained from sensors to create the VR digital twin of the construction site (Figure 2).

The general construction site environment is generated from the BIM model. For the non-critical components, the BIM models are loaded directly and used in VR, creating a realistic construction environment VR experience. Non-critical components are components outside the robot workspace, or components inside the robot workspace whose deviations from design do not influence user decision making or robot execution. The BIM models of the critical components are rendered semi-transparent so that the user can visualize how the structure is designed and supposed to be built. Meanwhile, the robot workspace is captured by RGBD cameras placed on the construction site. The RGBD images are sent to the computational core for processing and then transferred to Unity3D for visualization in VR in near real-time. The point cloud overlays the semi-transparent as-designed BIM model so that the differences between the as-designed and as-built geometry can be inspected. Point clouds can also capture dynamic conditions in the robot workspace, such as workers and obstacles, and show them in VR. The human worker can view the as-built workspace conditions for decision making, such as deciding how and where to install the next component.


Figure 1. Collaborative human-robot construction system overview

Figure 2. Immersive VR environment construction
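To make the streaming step described above concrete, the following Python sketch shows one way the computational core could relay RGBD point clouds to a VR-facing topic at a throttled rate, for a rosbridge-based client such as the ROS#-connected Unity3D front end to consume. This is a minimal illustration only: the topic names and the 5 Hz rate are assumptions, not values taken from the paper's implementation.

```python
#!/usr/bin/env python
# Hedged sketch: relay RGBD point clouds to a VR-facing topic at a reduced
# rate so a rosbridge client (e.g., ROS# in Unity3D) can stream them to the
# headset in near real-time. Topic names and rate are illustrative.
import rospy
from sensor_msgs.msg import PointCloud2

class CloudRelay(object):
    def __init__(self):
        self.pub = rospy.Publisher("/vr/workspace_cloud", PointCloud2, queue_size=1)
        self.period = rospy.Duration(0.2)  # ~5 Hz, assumed adequate for VR overlay
        self.last = rospy.Time(0)
        rospy.Subscriber("/kinect1/depth/points", PointCloud2,
                         self.callback, queue_size=1)

    def callback(self, cloud):
        # Drop frames between publishes to bound bandwidth to the VR client.
        now = rospy.Time.now()
        if now - self.last >= self.period:
            self.pub.publish(cloud)
            self.last = now

if __name__ == "__main__":
    rospy.init_node("vr_cloud_relay")
    CloudRelay()
    rospy.spin()
```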

2.1.2 Robot digital twin model

There are two full-scale robot models in the VR scene, which overlap with each other in the original state. One shows the planning state of the robot and is used to visualize the robot motion plan (Figure 3(a)). The other shows the actual state of the robot for execution status visualization (Figure 3(b)). The two robot models are referred to as the "planning" robot and the "execution" robot, respectively, in the rest of this paper. The KUKA robot arm model is built in the Unified Robot Description Format (URDF) in the ROS computational core and has the same size and configuration as the actual robot [28]. The model is then transferred from ROS and loaded as a game object in VR using the ROS# library [29]. The VR robot models preserve the kinematic and dynamic properties of the robot and can be controlled by subscribing to messages from the computational core.

Figure 3. VR robot models: (a) "planning" robot, (b) "execution" robot

2.1.3 Interactive VR elements and functions

One of the advantages of immersive VR is that the user can have a realistic experience while overcoming some restrictions of the real world. For example, users can receive extra information that they cannot directly obtain in the real world, such as the comparison between the as-designed and as-built geometry, and overcome some real-world constraints, like gravity.

Our digital twin system includes several interactive VR elements. An interactive billboard with two functions has been developed. First, it shows the user system messages that cannot be directly obtained even from the actual construction environment, such as warning messages from ROS. Second, the billboard can be used as an input device inside VR, where the user gives commands to the system by interacting with the buttons on its screen. Users' sight could easily be occluded in complex construction environments; therefore, instead of fixing the billboard to one location, our system allows users to manipulate it with the VR controller and adjust its pose and view at their convenience. The billboard stays suspended in the air wherever it is placed, and as the environment changes, users can always move it to a new desirable position. Some interactive construction materials have also been created for pick-and-place related tasks; these can likewise be grabbed and suspended in the air for the user to perform high-level task planning. It should be noted that although this paper mainly discusses pick-and-place related cases, the system can be generalized to many other construction tasks by adding customized interactive elements and functions.
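As a minimal illustration of how the two VR robot models of Section 2.1.2 could be driven from the computational core, the sketch below mirrors the actual robot's joint states to an "execution" topic and replays a planned, time-parameterized trajectory on a "planning" topic. The topic names are assumptions; in the paper, the actual ROS-to-Unity3D bridging is handled by ROS# [29].

```python
#!/usr/bin/env python
# Hedged sketch: feed the "execution" and "planning" VR robot models.
# Topic names are illustrative assumptions.
import rospy
from sensor_msgs.msg import JointState
from trajectory_msgs.msg import JointTrajectory

class TwinBridge(object):
    def __init__(self):
        # The "execution" robot simply mirrors the actual robot's state.
        self.exec_pub = rospy.Publisher("/vr/execution_robot/joint_states",
                                        JointState, queue_size=10)
        # The "planning" robot previews a planned, time-parameterized path.
        self.plan_pub = rospy.Publisher("/vr/planning_robot/joint_states",
                                        JointState, queue_size=10)
        rospy.Subscriber("/joint_states", JointState,
                         self.exec_pub.publish, queue_size=10)
        rospy.Subscriber("/motion_plan/trajectory", JointTrajectory,
                         self.replay, queue_size=1)

    def replay(self, traj):
        # Step through the trajectory points on their planned timestamps so
        # the user previews the motion plan at realistic speed in VR.
        start = rospy.Time.now()
        for point in traj.points:
            wait = (start + point.time_from_start) - rospy.Time.now()
            if wait > rospy.Duration(0):
                rospy.sleep(wait)
            state = JointState(name=traj.joint_names, position=point.positions)
            state.header.stamp = rospy.Time.now()
            self.plan_pub.publish(state)

if __name__ == "__main__":
    rospy.init_node("vr_twin_bridge")
    TwinBridge()
    rospy.spin()
```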


2.2 Computational Core

ROS is used as the system's computational core. ROS is an open-source system that combines a variety of tools and software libraries for robot operation [30]. It can communicate with Unity3D, Gazebo, and the actual robot. In our system, besides communication, ROS is also responsible for sensor data processing, motion planning, and robot control.

2.2.1 Communication

The system communication framework is shown in Figure 4. ROS# is used for communication between ROS and Unity3D [29], and gazebo_ros_pkgs is used to interface Gazebo with ROS [31]. When the program starts, Gazebo begins to constantly publish sensor data and robot states to ROS. In the meantime, ROS processes the sensor data and publishes the processed data and robot states to Unity3D, where they are visualized as the point cloud and the state of the "execution" robot in VR. Based on the point cloud, the user develops the task plan and sends it to ROS after confirmation. ROS then generates a collision-free motion plan accordingly.

The motion plan is sent back to Unity3D and visualized for the user on the "planning" robot. If the user is not satisfied with the motion plan, they can either adjust their task plan or request another motion plan from ROS, which generates a new one in response. Upon user approval, a message is sent to ROS, which converts the motion plan into execution commands to control the actual robot. As the actual robot executes the work, updated robot state messages are received by ROS and Unity3D, and the "execution" robot in VR moves accordingly.

2.2.2 Sensor data processing

Several Microsoft Kinect cameras are placed on the virtual construction site in Gazebo to capture the robot workspace. The captured RGBD images are converted into point clouds. Point clouds from different cameras are transformed into the world frame based on the respective camera positions and rotations and then concatenated into a single point cloud. The point cloud is then downsampled with a voxel grid filter. Finally, it goes through a self-filtering process, which removes visible parts of the robot from the point cloud based on the current robot state.

2.2.3 Motion planning

After receiving the user-specified task plan, the corresponding end-effector pose is calculated. The robot then plans a trajectory to that pose such that neither the robot itself nor the object it carries collides with the environment. The motion planning is conducted by MoveIt, a robotics manipulation platform in ROS [32]. The point cloud resulting from the processing discussed earlier is further converted by OctoMap into a 3D occupancy grid map of the environment [33]. The Open Motion Planning Library is used as the motion planner, and the Flexible Collision Library is used for collision detection [34, 35]. Inverse kinematics is calculated by the numerical Jacobian-based solver of the Kinematics and Dynamics Library [36]. The joint velocity and acceleration limits are taken into consideration to time-parameterize the generated path. After that, the time-parameterized path is sent to Unity3D as separate states for visualization on the "planning" robot.

2.2.4 Robot Control

When the user approves the trajectory plan in VR, ROS is notified with an approval message. The ros_control package is then used to convert the approved trajectory plan into robot control commands [37]. It obtains joint state data from the encoders of the robot actuators and generates output to the actuators with PID controllers.
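To make the planning step concrete, the following Python sketch shows how the pipeline described in Section 2.2.3 could be driven through moveit_commander. The planning group name "manipulator" and the target pose values are assumptions for illustration; the paper does not prescribe this exact API usage.

```python
#!/usr/bin/env python
# Hedged sketch of the Section 2.2.3 pipeline via moveit_commander.
# The group name "manipulator" and the pose values are assumptions.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

rospy.init_node("drywall_motion_planner")
moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander("manipulator")

# Scaling factors make MoveIt respect joint velocity/acceleration limits
# when time-parameterizing the generated path.
group.set_max_velocity_scaling_factor(0.3)
group.set_max_acceleration_scaling_factor(0.3)

# End-effector pose computed from the user-specified panel placement in VR.
target = Pose()
target.position.x, target.position.y, target.position.z = 1.2, 0.4, 1.0
target.orientation.w = 1.0
group.set_pose_target(target)

# OMPL plans against the planning scene, which already contains the OctoMap
# occupancy grid built from the processed point cloud, so the trajectory is
# collision-free for both the robot and any attached object.
result = group.plan()
# Depending on the MoveIt release, plan() returns a RobotTrajectory or a
# (success, trajectory, planning_time, error_code) tuple.
trajectory = result[1] if isinstance(result, tuple) else result

# The trajectory is visualized on the "planning" robot in VR; only after the
# user approves it would group.execute(trajectory, wait=True) be called.
```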


Figure 4. System communication framework
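As a rough sketch of the handshake that Figure 4 depicts, the node below accepts a task plan and an approval flag from the VR interface and reports status back to the billboard. All topic names and message types here are assumptions for illustration; planning and execution are stubbed out, since in the described system they are delegated to MoveIt (Section 2.2.3) and ros_control (Section 2.2.4).

```python
#!/usr/bin/env python
# Hedged sketch of the plan/approve/execute handshake shown in Figure 4.
# Topic names and message types are illustrative assumptions.
import rospy
from std_msgs.msg import Bool, String
from geometry_msgs.msg import PoseStamped

class CommunicationHub(object):
    def __init__(self):
        self.status_pub = rospy.Publisher("/vr/billboard/status", String,
                                          queue_size=10)
        rospy.Subscriber("/vr/task_plan", PoseStamped, self.on_task_plan)
        rospy.Subscriber("/vr/approval", Bool, self.on_approval)
        self.pending_plan = None

    def on_task_plan(self, pose):
        # User confirmed a task plan in VR: plan and report status.
        self.status_pub.publish(String(data="Motion planning in progress"))
        self.pending_plan = self.plan_motion(pose)
        status = ("Plan ready: awaiting approval"
                  if self.pending_plan is not None else "Planning failed")
        self.status_pub.publish(String(data=status))

    def on_approval(self, msg):
        # Execute only after explicit user approval in VR.
        if msg.data and self.pending_plan is not None:
            self.status_pub.publish(String(data="Executing approved plan"))
            self.execute(self.pending_plan)
            self.pending_plan = None

    def plan_motion(self, pose):
        # Placeholder: MoveIt computes a collision-free trajectory (Sec. 2.2.3).
        return pose

    def execute(self, plan):
        # Placeholder: ros_control converts the trajectory into actuator
        # commands (Sec. 2.2.4).
        pass

if __name__ == "__main__":
    rospy.init_node("communication_hub")
    CommunicationHub()
    rospy.spin()
```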

Figure 5. (a) Robot operation environment (b) VR environment

2.3 Case Study

A drywall installation case study with a 6DOF KUKA robot arm that is capable of real construction work has been conducted to evaluate the immersive digital twin system. The user guides the robot arm to pick up a drywall panel placed on the ground near the robot and to install it on a wall frame that was built with deviations from the design. Figure 5 shows the robot operating environment in Gazebo, which represents the actual construction site, and its VR digital twin in Unity3D. Three Microsoft Kinect cameras are used to capture the robot workspace environment in Gazebo. Figure 6 shows the point cloud before and after processing; points on the ground plane are also removed, with the RANSAC plane segmentation algorithm. In VR, a drywall panel with the same shape as the actual drywall is set as the interactive construction component. The user first observes the wall frame geometry from the point cloud and decides how to install the drywall panel onto the frame. The user can then demonstrate the task plan by grabbing the interactive panel and placing it at the desired installation position. The buttons on the interactive billboard provide options for fast and accurate adjustment of the orientation of the interactive panel.

The robot first picks up the drywall panel on the floor and then waits for the user to specify the task plan. The user can see from the billboard whether the robot has successfully picked up the panel, and the panel changes color after being picked up. After the user confirms the task plan, ROS starts to develop the detailed motion plan to place the panel at the user-specified position, while sending planning status messages (e.g., in progress, success, reasons for failure) to the user via the billboard. After motion planning, the "planning" robot demonstrates the plan to the user while the actual robot stays still (Figure 7). Upon approval, ROS controls the actual robot to execute the approved motion plan and updates the user with execution status messages. At the same time, the "execution" robot is synchronized with the actual robot by subscribing to the actual robot state messages, so that the user can perceive the actual robot status from VR (Figure 8).

Figure 6. Point cloud: (a) before processing, (b) after processing

Figure 7. "Planning" robot demonstrating the motion plan

Figure 8. Synchronized movement of (a) the actual robot and (b) the "execution" robot
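The processing shown in Figure 6 (voxel downsampling followed by RANSAC ground-plane removal) can be approximated offline with Open3D, as in the standalone sketch below. The input file name, voxel size, and RANSAC parameters are assumptions; the paper performs the equivalent steps inside the ROS pipeline, with self-filtering of robot points handled separately from the current robot state.

```python
# Hedged, standalone approximation of the Figure 6 processing using Open3D.
# File names and parameter values are illustrative assumptions.
import open3d as o3d

# Concatenated cloud from the three Kinect cameras, already in world frame.
cloud = o3d.io.read_point_cloud("workspace_cloud.pcd")

# Voxel grid filter: downsample to reduce bandwidth and planning cost.
down = cloud.voxel_down_sample(voxel_size=0.02)  # 2 cm voxels (assumed)

# RANSAC plane segmentation: find the dominant plane (the ground) and drop it.
plane_model, inlier_idx = down.segment_plane(distance_threshold=0.01,
                                             ransac_n=3,
                                             num_iterations=1000)
workspace = down.select_by_index(inlier_idx, invert=True)  # keep non-ground

# The remaining cloud would feed OctoMap for collision checking and be
# streamed to Unity3D for visualization.
o3d.io.write_point_cloud("workspace_processed.pcd", workspace)
```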


3 Conclusion

In this study, a real-time, process-level, immersive digital twin system for collaborative human-robot construction work is proposed. The system has several advantages. First, human workers can visualize construction site conditions and collaborate with the robot remotely, which protects them from potential dangers on the construction site. Second, the communication network allows the human worker and the robot to exchange task plans and status information in near real-time. Third, it allows the human worker to improvise high-level construction plans based on the as-built construction site geometry. Last, the robot develops its own motion plan and carries out the physical construction work on-site, which significantly reduces human workload. In ongoing work, our research team is experimenting with real robots and implementing the immersive digital twin system with mobile robot arms.

Acknowledgements

The authors would like to acknowledge the financial support received from the U.S. National Science Foundation (NSF) CBET 1638186 and CBET 1804321. Any opinions and findings in this paper are those of the authors and do not necessarily represent those of the NSF.

References

[1] Filipe Barbosa, Jonathan Woetzel, Jan Mischke, Maria João Ribeirinho, Mukund Sridhar, Matthew Parsons, Nick Bertram, and Stephanie Brown. Reinventing construction: A route to higher productivity. On-line: https://www.mckinsey.com/~/media/McKinsey/Industries/Capital%20Projects%20and%20Infrastructure/Our%20Insights/Reinventing%20construction%20through%20a%20productivity%20revolution/MGI-Reinventing-construction-A-route-to-higher-productivity-Full-report.ashx, 2017.

[2] Meiyin Liu. Video-Based Human Motion Capture and Force Estimation for Comprehensive On-Site Ergonomic Risk Assessment. PhD thesis, 2019.

[3] CJ Liang, VR Kamat, and CC Menassa. Teaching robots to perform construction tasks via learning from demonstration. In ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, volume 36, pages 1305–1311. IAARC Publications, 2019.

[4] CPWR. The construction chart book. On-line: https://www.cpwr.com/publications/research-findings-articles/construction-chart-book, 2018.


[5] CJ Liang, KM Lundeen, W McGee, CC Menassa, S Lee, and VR Kamat. Stacked hourglass networks for markerless pose estimation of articulated construction robots. In 35th International Symposium on Automation and Robotics in Construction, 2018.

[6] Juan Manuel Davila Delgado, Lukumon Oyedele, Anuoluwapo Ajayi, Lukman Akanbi, Olugbenga Akinade, Muhammad Bilal, and Hakeem Owolabi. Robotics and automated systems in construction: Understanding industry-specific challenges for adoption. Journal of Building Engineering, 26:100868, 2019.

[7] ENR. Construction loses 975,000 jobs in April, due to Covid-19 impacts. On-line: https://www.enr.com/articles/49333-construction-loses-975000-jobs-in-april-due-to-covid-19-impacts, 2020.

[8] Kurt M Lundeen, Vineet R Kamat, Carol C Menassa, and Wes McGee. Scene understanding for adaptive manipulation in robotized construction work. Automation in Construction, 82:16–30, 2017.

[9] Toshio Fukuda, Yoshio Fujisawa, Fumihito Arai, H Muro, K Hoshino, Kenji Miyazaki, and K Uehara. A new robotic manipulator in construction based on man-robot cooperation work. In Proc. of the 8th International Symposium on Automation and Robotics in Construction, pages 239–245. Citeseer, 1991.

[10] Chen Feng, Yong Xiao, Aaron Willette, Wes McGee, and Vineet R Kamat. Vision guided autonomous robotic assembly and as-built scanning on unstructured construction sites. Automation in Construction, 59:128–138, 2015.

[11] Xing Su and Hubo Cai. Enabling construction 4D topological analysis for effective construction planning. Journal of Computing in Civil Engineering, 30(1):04014123, 2016.

[12] Lichao Xu, Chen Feng, Vineet R Kamat, and Carol C Menassa. An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments. Automation in Construction, 104:230–245, 2019.

[13] Seungyeol Lee and Jeon Il Moon. Introduction of human-robot cooperation technology at construction sites. In ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, volume 31, page 1. IAARC Publications, 2014.

[14] Ci-Jyun Liang, Vineet R Kamat, and Carol C Menassa. Teaching robots to perform quasi-repetitive construction tasks through human demonstration. Automation in Construction, 120:103370, 2020.

[15] Colin Milberg and Iris Tommelein. Role of tolerances and process capability data in product and process design integration. In Construction Research Congress: Wind of Change: Integration and Innovation, pages 1–8, 2003.

[16] Kurt M Lundeen, Vineet R Kamat, Carol C Menassa, and Wes McGee. Autonomous motion planning and task execution in geometrically adaptive robotized construction work. Automation in Construction, 100:24–45, 2019.

[17] Yap Hwa Jen, Zahari Taha, and Lee Jer Vui. VR-based robot programming and simulation system for an industrial robot. International Journal of Industrial Engineering, 15(3):314–322, 2008.

[18] Construction Robotics. MULE. On-line: https://www.construction-robotics.com/mule/, Accessed: 06/01/2020.

[19] Kyungmo Jung, Baeksuk Chu, Shinsuk Park, and Daehie Hong. An implementation of a teleoperation system for robotic beam assembly in construction. International Journal of Precision Engineering and Manufacturing, 14(3):351–358, 2013.

[20] P Chotiprayanakul, DK Liu, and G Dissanayake. Human–robot–environment interaction interface for robotic grit-blasting of complex steel bridges. Automation in Construction, 27:11–23, 2012.

[21] Dongmok Kim, Jongwon Kim, Kyouhee Lee, Cheolgyu Park, Jinsuk Song, and Deuksoo Kang. Excavator tele-operation system using a human arm. Automation in Construction, 18(2):173–182, 2009.

[22] Ying-Hao Yu, Chun-Hsien Yeh, Tsu-Tian Lee, Pei-Yin Chen, and Yeu-Horng Shiau. Chip-based real-time gesture tracking for construction robot's guidance. In ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, volume 31, page 1. IAARC Publications, 2014.

[23] Sangseok You, Jeong-Hwan Kim, SangHyun Lee, Vineet Kamat, and Lionel P Robert Jr. Enhancing perceived safety in human–robot collaborative construction using immersive virtual environments. Automation in Construction, 96:161–170, 2018.


[24] Pedro Tavares, Carlos M Costa, Luís Rocha, Pedro Malaca, Pedro Costa, António P Moreira, Armando Sousa, and Germano Veiga. Collaborative welding system using BIM for robotic reprogramming and spatial augmented reality. Automation in Construction, 106:102825, 2019.

[25] Nathan Koenig and Andrew Howard. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), volume 3, pages 2149–2154. IEEE, 2004.

[26] Jing Du, Zhengbo Zou, Yangming Shi, and Dong Zhao. Zero latency: Real-time synchronization of BIM data in virtual reality for collaborative decision-making. Automation in Construction, 85:51–64, 2018.

[27] Qian Wang, Jingjing Guo, and Min-Koo Kim. An application oriented scan-to-BIM framework. Remote Sensing, 11(3):365, 2019.

[28] kuka - ROS Wiki. On-line: http://wiki.ros.org/kuka, Accessed: 06/01/2020.

[29] ros-sharp. On-line: https://github.com/siemens/ros-sharp, Accessed: 06/01/2020.

[30] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y Ng. ROS: an open-source Robot Operating System. In ICRA Workshop on Open Source Software, volume 3, page 5. Kobe, Japan, 2009.

[31] gazebo_ros_pkgs - ROS Wiki. On-line: http://wiki.ros.org/gazebo_ros_pkgs, Accessed: 06/01/2020.

[32] Sachin Chitta, Ioan Sucan, and Steve Cousins. MoveIt! [ROS topics]. IEEE Robotics & Automation Magazine, 19(1):18–19, 2012.

[33] Armin Hornung, Kai M Wurm, Maciej Bennewitz, Cyrill Stachniss, and Wolfram Burgard. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots, 2013.

[34] Ioan A Sucan, Mark Moll, and Lydia E Kavraki. The Open Motion Planning Library. IEEE Robotics & Automation Magazine, 19(4):72–82, 2012.

[35] Jia Pan, Sachin Chitta, and Dinesh Manocha. FCL: A general purpose library for collision and proximity queries. In 2012 IEEE International Conference on Robotics and Automation, pages 3859–3866. IEEE, 2012.

[36] Ruben Smits, H Bruyninckx, and E Aertbeliën. KDL: Kinematics and Dynamics Library, 2011.

[37] Sachin Chitta, Eitan Marder-Eppstein, Wim Meeussen, Vijay Pradeep, Adolfo Rodríguez Tsouroukdissian, Jonathan Bohren, David Coleman, Bence Magyar, Gennaro Raiola, Mathias Lüdtke, and Enrique Fernández Perdomo. ros_control: A generic and simple control framework for ROS. The Journal of Open Source Software, 2017.

