Methods for Imitation Learning and Human Intention Inference: Towards Seamless Human-Robot Collaboration
Digital Document
Handle
http://hdl.handle.net/11134/20002:860653496
Persons
Creator (cre): Ravichandar, Harish
Major Advisor (mja): Dani, Ashwin
Associate Advisor (asa): Pattipati, Krishna
Associate Advisor (asa): Zhang, Liang
Title
Methods for Imitation Learning and Human Intention Inference: Towards Seamless Human-Robot Collaboration
Digital Origin
born digital
Description
Robots are becoming integral parts of our environments, from factory floors to hospitals and all the way to our homes. Unlike robots enclosed in cages performing repetitive tasks with high precision, there is an ever-increasing need for robots that can seamlessly interact and collaborate with humans in close proximity. Hence, it is imperative that robots be provided with the tools necessary to collaborate both safely and efficiently with their human partners. Achieving safe and efficient human-robot collaboration requires methods for inferring human intentions and for quickly programming robots. To this end, this dissertation presents methods that fall into two categories. The first category consists of methods to infer human intentions (modeled as goal locations of reaching motions) from noisy observations of human motion. First, a maximum likelihood estimator for the early prediction of the reaching goal location is presented. Second, a maximum a posteriori estimator, which uses information about the human's eye gaze to construct the prior distribution, is presented. The second category consists of imitation learning methods that learn movement primitives from demonstrations. These methods are particularly useful for teaching robots new tasks by showing examples. In the proposed methods, movement primitives are represented as statistical dynamical systems, and the corresponding parameters are learned under constraints derived from contraction analysis. Enforcing these constraints provides theoretical guarantees on the learned model, such as convergence to the desired end-effector position and orientation, and robustness to sudden perturbations and target changes. The methods presented in this dissertation are rigorously tested in experiments conducted by observing human motion and by using a 7-DOF dual-arm Baxter robot.
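The maximum a posteriori intention inference described above can be illustrated with a minimal sketch. This is not the dissertation's estimator; it is a hypothetical toy that combines a gaze-based prior (goals near the gaze fixation point are more probable) with a motion-based likelihood (goals aligned with the current hand velocity are more probable) over a discrete set of candidate goal locations. All function names and parameters (`gaze_prior`, `motion_likelihood`, `sigma`, `kappa`) are illustrative assumptions.

```python
import numpy as np

def gaze_prior(goals, gaze_point, sigma=0.1):
    # Hypothetical prior: candidate goals closer to the gaze fixation
    # point receive higher probability (Gaussian weighting on distance).
    d2 = np.sum((goals - gaze_point) ** 2, axis=1)
    p = np.exp(-d2 / (2 * sigma ** 2))
    return p / p.sum()

def motion_likelihood(goals, hand_pos, hand_vel, kappa=5.0):
    # Hypothetical likelihood: goals whose direction from the hand aligns
    # with the current hand velocity receive higher weight
    # (von-Mises-style weighting on the cosine of the angle).
    to_goal = goals - hand_pos
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True)
    v = hand_vel / np.linalg.norm(hand_vel)
    cos_sim = to_goal @ v
    return np.exp(kappa * cos_sim)

def map_goal(goals, gaze_point, hand_pos, hand_vel):
    # MAP estimate: maximize likelihood * prior over the candidate goals.
    posterior = (motion_likelihood(goals, hand_pos, hand_vel)
                 * gaze_prior(goals, gaze_point))
    return int(np.argmax(posterior))

# Toy scenario: three candidate goals; the hand moves roughly toward the
# first one, and the gaze also fixates near it.
goals = np.array([[0.5, 0.0], [0.0, 0.5], [-0.5, 0.0]])
idx = map_goal(goals,
               gaze_point=np.array([0.45, 0.05]),
               hand_pos=np.array([0.0, 0.0]),
               hand_vel=np.array([1.0, 0.1]))
print(idx)  # -> 0 (the goal supported by both gaze and motion)
```

In this toy setup both evidence sources agree, so the estimate is available early in the motion; the appeal of a gaze-informed prior is precisely that gaze typically precedes the reach.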
Organizations
Degree granting institution (dgg): University of Connecticut
Use and Reproduction
These materials are provided for educational and research purposes only.
Local Identifier
OC_d_1747