Learning Methods for Image-based State Estimation and Control
Digital Document

Persons
Creator (cre): Rotithor, Ghananeel
Major Advisor (mja): Dani, Ashwin P.
Associate Advisor (asa): Pattipati, Krishna R.
Associate Advisor (asa): Dutta, Abhishek

Title
Learning Methods for Image-based State Estimation and Control

Description
Many modern robotics and autonomous systems use camera sensors to estimate state information, compute control laws, and make decisions. Designing estimation and control algorithms from image data is challenging when the camera motions are uninformative for state estimation, when visibility of the environment is intermittent, and when the dynamics and measurement models are nonlinear. This dissertation presents learning-based estimation and control algorithms that use image data. The estimation algorithms help the robot understand its environment, and this understanding can be used to design controllers that make the robot perform desired tasks. An observer is presented for estimating the depth of image feature points from uninformative motions of an end-effector-mounted camera. To update its estimates, the observer keeps a memory of previous feature point locations, camera velocities, and accelerations, and the estimates it generates remain bounded in a prescribed compact set. These depth estimates can be used to design image-based visual servo controllers, which generate robot end-effector velocities or accelerations that drive the feature points to desired locations in the image. However, the features may leave the camera's field of view (FOV) intermittently, causing a loss of the feedback needed to compute the controller. To address this, a switched visual servo controller is proposed that switches to a learned model, termed a 'dynamic movement primitive', when the image features are outside the FOV.

In many situations, extracting meaningful feature points and deriving exact nonlinear and hybrid dynamical models for system states captured in sequential image data is challenging. An end-to-end, supervised, deep-learning-based filtering method is therefore proposed for state estimation directly from a sequence of images, and its application to estimating the shape of deformable objects such as cloth is shown. A deep Kalman filter with stretching constraints is developed for image-based shape estimation of cloth in motion. The method is tested on deformable cloth generated with the Blender physics simulator.

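The abstract's image-based visual servo controllers map feature-point errors in the image to end-effector velocities using depth estimates. As a rough illustration of that idea (not the dissertation's specific controller), the following sketch implements the classical IBVS law v = -λ L⁺(s - s*) for normalized point features, where the interaction matrix L depends on the estimated depth Z; all function and parameter names here are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized point feature.

    Maps the camera spatial velocity [vx, vy, vz, wx, wy, wz] to the
    feature velocity [x_dot, y_dot]; Z is the estimated feature depth.
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law: v = -gain * pinv(L) @ (s - s*)."""
    # Stack one 2x6 interaction matrix per feature point.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Two features slightly off their desired image locations.
v = ibvs_velocity(features=[(0.1, 0.0), (-0.1, 0.05)],
                  desired=[(0.0, 0.0), (0.0, 0.0)],
                  depths=[1.0, 1.2])  # depth estimates, e.g. from an observer
```

The commanded velocity `v` drives the feature error toward zero; when the features leave the FOV this error is unavailable, which is the situation where the dissertation's switched controller falls back to a learned dynamic movement primitive.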
Organizations
Degree granting institution (dgg): University of Connecticut

Use and Reproduction
These materials are provided for educational and research purposes only.

Degree Name
Doctor of Philosophy

Degree Level
Ph.D.

Degree Discipline
Electrical Engineering

Local Identifier
S_36433892