Difference between revisions of "Robotics"

 
|}
  
== Current and Recent Research Topics ==
  
 
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
=== DARPA Subterranean Challenge ===

We are part of Team CoSTAR (led by NASA/Jet Propulsion Laboratory, with partners MIT, KAIST, and LTU), competing in the DARPA Subterranean Challenge (www.subtchallenge.com).  See [https://costar.jpl.nasa.gov/ the Team's web site] for the latest information.
 
|
[[Image:UrbanCircuitTeam.png| 400px]]
|}
  
 
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
| [[Image:SQUID1CAD.png|400px]]
| [[Image:SQUID2collage.png|200px]]
|
=== SQUID: Self-Quick-Unfolding Investigative Drone ===

A SQUID drone can be launched ballistically from a cannon or tube, unfold in mid-flight, and stabilize itself.  To the left you can see a diagram of '''SQUID I''' and photographs of '''SQUID 2''' in the folded and unfolded states.
|}

{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
| [[Image:DragonRunner.png|150px]]
| [[Image:RCTAsymbol.jpg|300px]]
|
=== Robotic Field Manipulation/RCTA ===

We are part of the RCTA (Robotics Collaborative Technology Alliance) program, which is sponsored by the Army Research Laboratory (ARL) and led by General Dynamics Robotic Systems (GDRS).  One of the main objectives of this program is to develop the capabilities for mobile robots to carry out complex operations in unstructured field environments.  In collaboration with the Jet Propulsion Laboratory, we are developing novel grasp planning algorithms for low-degree-of-freedom grippers, as well as techniques to estimate the state of the grasped object and the manipulator system.
|}
  
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|- valign=top
|
=== Preference Based Learning for Exoskeleton Personalization ===

In preference-based learning, only a human subject's relative preference between two different settings is available as learning feedback.  In collaboration with Prof. [http://www.yisongyue.com/ Yisong Yue] we have been developing techniques for preference learning in both bandit and RL settings.  With the Ames Group, we have applied these preference-learning techniques to learn and optimize the parameters of exoskeleton gaits so as to maximize user comfort.
|
[[Image:Exo.png|120px]]
|}
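As a concrete illustration of the preference-feedback loop described above, here is a minimal, self-contained Python sketch. The candidate gait parameters, the simulated user, and the simple win-rate heuristic are illustrative assumptions only, not the algorithms used in our studies:

```python
import random

# Candidate exoskeleton gait settings (illustrative parameters only):
# each candidate is a (step_length_m, step_duration_s) pair.
CANDIDATES = [(0.10, 0.9), (0.15, 0.8), (0.20, 0.7), (0.25, 0.6)]

def simulated_user_prefers(a, b):
    """Stand-in for a human trial: returns True if the user prefers
    setting `a` over setting `b`.  Here comfort is a hidden quadratic
    score peaking at (0.20, 0.7); a real system would query the subject."""
    def comfort(s):
        return -((s[0] - 0.20) ** 2 + (s[1] - 0.7) ** 2)
    return comfort(a) > comfort(b)

def preference_learning(n_trials=200, seed=0):
    """Toy dueling-bandit loop: sample random pairs of settings, record
    pairwise wins, and return the candidate with the best win rate."""
    rng = random.Random(seed)
    wins = {c: 0 for c in CANDIDATES}
    plays = {c: 0 for c in CANDIDATES}
    for _ in range(n_trials):
        a, b = rng.sample(CANDIDATES, 2)
        winner = a if simulated_user_prefers(a, b) else b
        wins[winner] += 1
        plays[a] += 1
        plays[b] += 1
    return max(CANDIDATES, key=lambda c: wins[c] / max(plays[c], 1))

best = preference_learning()
print(best)  # the hidden-comfort optimum (0.20, 0.7)
```

Note that the learner never sees a numeric comfort score, only the outcome of each pairwise comparison, which is exactly the constraint preference-based learning operates under.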
{| width=100% border=1 cellpadding=2 cellspacing=2
|- valign=top
|
 
=== ''Axel'' and ''DuAxel'' Rovers for extreme planetary terrains ===

Conventional robotic Martian explorers, such as Sojourner, Spirit, and Opportunity, have sufficient mobility to access ~60% of the Martian surface.  However, some of the most interesting science targets occur in the currently inaccessible extreme terrains, such as steep craters, overhangs, loose soil, and layered stratigraphy.  Access to extreme terrains on other planets (besides Mars) and moons is also of potential interest.  In collaboration with JPL, we are developing the Axel and DuAxel rovers.  Axel is a minimalist tethered robot that can ascend and descend vertical and steep slopes, as well as navigate over large (relative to the body size) obstacles.  In the DuAxel configuration, two Axels dock with a central module to form a self-contained 4-wheeled rover, which can then disassemble as needed to allow one or both Axels to descend into extreme terrain.  The goal of this work is to develop and demonstrate the motion planning, novel mobility mechanisms, mobility analysis, and steep-terrain sampling technologies that would allow Axel and DuAxel to be viable concepts for future scientific missions to extreme terrains.
 
|
[[Image:DuAxel.png|400px]]
|}
 
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
[[Image:SteppingRobot.jpg| 260px]]
|
=== Locomotion Rehabilitation After Severe Spinal Cord Injury ===

More than 250,000 people in the U.S. suffer from a major Spinal Cord Injury (SCI), and over 11,000 new people are afflicted each year.  Our lab collaborates with Prof. Reggie Edgerton at UCLA to develop new therapies and new technologies that will hopefully one day enable patients suffering from SCI to partially or fully recover the ability to walk.  Currently, we focus on these topics:

* Novel ''high density epidural spinal stimulating electrode arrays'' for locomotion recovery.  For more on this topic, [http://robotics.caltech.edu/~NIH_BRP/index.php/Main_Page see this link].
* New robotic mechanisms for active rehabilitation of SCI in animal (mouse and rat) models.
* New adaptive training algorithms to optimize the rehabilitation afforded by robotic devices.
* Drug therapies to improve locomotion recovery.
|}
 
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|- valign=top
|
=== Recent Papers ===
'''Planning in Uncertain Environments'''
* Noel du Toit and Joel W. Burdick, [[media:DuToitTRO12.pdf | Robotic Motion Planning in Dynamic, Uncertain Environments]], ''IEEE Trans. Robotics,'' vol. 28, no. 1, Feb. 2012, pp. 101-115.
* T.H. Chung and J.W. Burdick, [[media:ChungBurdickTRO12.pdf | Analysis of Search Decision Making Using Probabilistic Search Strategies]], ''IEEE Trans. Robotics,'' vol. 28, no. 1, Feb. 2012, pp. 132-144.
* Scott C. Livingston, Richard M. Murray, and Joel W. Burdick, [[media:LivingstonICRA12.pdf | Backtracking Temporal Logic Synthesis for Uncertain Environments]], ''Proc. IEEE Int. Conf. Robotics and Automation,'' May 2012, Minneapolis, MN.

'''Dextrous Manipulation'''
* Paul Hebert, Nicolas Hudson, Jeremy Ma, Thomas Howard, Thomas Fuchs, Max Bajracharya, and Joel Burdick, [[media:HebertICRA12.pdf | Combined Shape, Appearance, and Silhouette for Simultaneous Manipulator and Object Tracking]], ''Proc. IEEE Int. Conf. Robotics and Automation,'' May 2012, Minneapolis, MN.
* Thomas Allen, Joel Burdick, and E. Rimon, [[media:CagingICRA12.pdf | Two-Fingered Caging of Polygons via Contact-Space Graph Search]], ''Proc. IEEE Int. Conf. Robotics and Automation,'' May 2012, Minneapolis, MN.
* Sandeep Chinchali, Scott C. Livingston, Ufuk Topcu, Joel W. Burdick, and Richard M. Murray, [[media:GaitLTL.pdf | Towards Formal Synthesis of Reactive Controllers for Dexterous Robotic Manipulation]], ''Proc. IEEE Int. Conf. Robotics and Automation,'' May 2012, Minneapolis, MN.
* Matanya Horowitz and Joel W. Burdick, [[media:MatanyaICRA12.pdf | Combined Grasp and Manipulation Planning as a Trajectory Optimization Problem]], ''Proc. IEEE Int. Conf. Robotics and Automation,'' May 2012, Minneapolis, MN.
* N. Hudson, T. Howard, J. Ma, A. Jain, M. Bajracharya, C. Kuo, L. Matthies, P. Backes, P. Hebert, T. Fuchs, and J. Burdick, [[media:DARPAICRA12.pdf | End-to-End Dextrous Manipulation with Deliberate Interactive Estimation]], ''Proc. IEEE Int. Conf. Robotics and Automation,'' May 2012, Minneapolis, MN.

'''Axel/Mobility'''
* I.A.D. Nesnas, J.B. Matthews, P. Abad-Manterola, J.W. Burdick, J.A. Edlund, R.D. Peters, M.M. Tanner, R.N. Miyake, B.S. Solish, and R.C. Anderson, [[media:AxelJOFR.pdf | Axel and DuAxel Rovers for the Sustainable Exploration of Extreme Terrains]], ''J. Field Robotics,'' Feb. 2012.
* P. Varkonyi, David Gontier, and J.W. Burdick, [[media:BipedICRA12.pdf | On the Lyapunov stability of quasistatic planar biped robots]], ''Proc. IEEE Int. Conf. Robotics and Automation,'' May 2012, Minneapolis, MN.

'''Spinal Cord Injury (SCI) Rehabilitation'''
* ''Lancet Paper''
|}
 
  
== Past Research Topics ==

Here are some research topics that were recently pursued in our group.
 
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
=== DARPA Autonomous Robot Manipulation Software (ARMS) ===

We collaborated with the Jet Propulsion Laboratory as one of the six teams in the DARPA-ARMS (DARPA Autonomous Robotic Manipulation--Software) competition.  As part of its contribution to the overall team effort, Caltech worked on:

* '''Estimation:''' fusing various visual modalities (stereo, LADAR, appearance) with force-torque sensing, tactile sensing, and proprioception to better estimate the locations of the objects to be manipulated as well as the posture of the arm.
* '''Grasp Planning:''' extending the basic theory of ''caging'' manipulation to 3-dimensional objects.
* '''Task Decomposition:''' using formal systems theory (e.g., correct-by-construction control synthesis based on Linear Temporal Logic system specifications and model checking) to construct correct-by-design automata for complex manipulation tasks.
|
[[Image:DARPA_TwoArm.png | 150px]]
|}

{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
=== Human Detection & Tracking Using UWB Radar ===

While Ultra-Wide-Band (UWB) radar has existed for decades, it has been investigated more actively in recent years, both as an alternative wireless communication technology and as a biometric sensor, because of its excellent ranging resolution, low power, and sensitivity to human motion.  In collaboration with Prof. Hossein Hashimi at USC, we investigated the use of UWB radar to detect and track human motion for safety and security applications.
|
[[Image:OnePageOverview.jpg | 300px]]
|}
  
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
 
|
 
[[Image:Worm03animated_web.gif | Nematode]]
 
|
 
=== Animal Tracking and Activity Recognition ===

We are interested in developing methods to automatically identify and classify "activities" in data streams, such as video sequences.  A practical application and motivation for this work is the automated tracking and recognition of biological-organism behavior in controlled laboratory environments.
 
|}
 
  
 
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
=== Sensor-Based Motion Planning and Sensor Processing ===

''Sensor-based planning'' incorporates sensor information, reflecting the current state of the environment, into a robot's planning process, as opposed to classical planning, where full knowledge of the world's geometry is assumed to be known prior to the planning event.  Current and recent interests of our group include:

* Motion planning in cluttered, dynamic, and uncertain environments
* Sensor-based motion planning algorithms
|
[[Image:Rocky7.gif| 240px ]]
|}
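To illustrate the basic sense-and-replan cycle behind sensor-based planning, here is a minimal Python sketch on an occupancy grid. The grid world, the BFS planner, and the one-cell-lookahead "sensor" are illustrative assumptions for this example, not the planners developed in the group:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a 0/1 occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None  # no path under current knowledge

def sense_and_replan(known, true_world, start, goal):
    """Execute toward the goal, replanning whenever the next planned
    cell is discovered to be occupied in the (initially unknown) world."""
    pos, executed = start, [start]
    while pos != goal:
        path = bfs_path(known, pos, goal)
        if path is None:
            return None
        nxt = path[1]
        if true_world[nxt[0]][nxt[1]] == 1:   # sensor reveals an obstacle
            known[nxt[0]][nxt[1]] = 1         # update the map and replan
            continue
        pos = nxt
        executed.append(pos)
    return executed

known = [[0] * 4 for _ in range(3)]           # robot's initial (empty) map
true_world = [[0, 0, 0, 0],
              [0, 1, 1, 0],                   # hidden wall
              [0, 0, 0, 0]]
route = sense_and_replan(known, true_world, (1, 0), (1, 3))
print(route)
```

The classical planner would commit to the straight route through row 1; the sensor-based loop discovers the hidden wall at execution time and detours around it.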
 
  
{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
[[Image:RoboProbe.png | 165px]]
[[Image:MovableProbe1.gif]]
|
=== Neural Prosthetics and Brain-Machine Interfaces ===

A neural prosthesis is a ''direct brain interface'' that enables a human, via the use of surgically implanted electrode arrays and associated computer decoding algorithms, to control external electromechanical devices by thought alone.  In this manner, some useful motor functions that have been lost through disease or accident can be partially restored.  Our lab collaborates with the laboratories of Prof. Richard Andersen and Prof. Y.C. Tai to develop neural prostheses and brain-machine interfaces.  Our group focuses on these particular issues:

* '''Autonomously positioned (robotic) neural recording electrodes.'''  To optimize the quality of the neural signal recorded by an extracellular electrode, the active recording site must be positioned very close to the neural cell body (at least within 30 microns, and preferably within a few microns of the soma).  However, due to blood-pressure variations, breathing, and mechanical shocks, the electrode-soma geometry varies significantly over time.  We have developed algorithms that allow an actuated electrode to autonomously reposition itself in real time to maintain high-quality neural recordings.

* '''Neural decoding algorithms.'''  A decoding algorithm attempts to decode, or decipher, the intent of a paralyzed neural prosthetic user from the recorded electrode signals.  Neural decoding has become a well-developed subject.  We have chosen to explore the concept of a supervisory decoder, whose aim is to estimate the current cognitive and planning state of the prosthetic user.  E.g., is the user awake?  Do they want to use the prosthetic?  Are they currently in the planning process?  Do they want to execute the plan?  Do they want to change or scrub the current prosthetic action?  We have chosen to formulate the design of a supervisory decoder as a problem in hybrid system identification.
|}
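To make the supervisory-decoder idea concrete, here is a minimal Python sketch that filters a discrete set of cognitive states with a hidden-Markov-model forward update. The state labels, transition probabilities, and discretized neural feature are all illustrative assumptions for this example, not measured models or the group's actual decoder:

```python
# Hypothetical cognitive states tracked by a supervisory decoder.
STATES = ["idle", "planning", "executing"]

# Assumed Markov transition model between cognitive states.
TRANS = {
    "idle":      {"idle": 0.8, "planning": 0.2, "executing": 0.0},
    "planning":  {"idle": 0.1, "planning": 0.6, "executing": 0.3},
    "executing": {"idle": 0.2, "planning": 0.1, "executing": 0.7},
}

# Assumed emission model: P(observed neural feature | state), where the
# discretized feature is "low", "mid", or "high" population firing.
EMIT = {
    "idle":      {"low": 0.7, "mid": 0.2, "high": 0.1},
    "planning":  {"low": 0.2, "mid": 0.6, "high": 0.2},
    "executing": {"low": 0.1, "mid": 0.2, "high": 0.7},
}

def forward_step(belief, obs):
    """One HMM forward-filter update: predict through TRANS, weight the
    prediction by EMIT for the observation, then renormalize."""
    predicted = {
        s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES
    }
    unnorm = {s: predicted[s] * EMIT[s][obs] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

belief = {"idle": 1.0, "planning": 0.0, "executing": 0.0}
for obs in ["mid", "mid", "high", "high"]:   # a short observation stream
    belief = forward_step(belief, obs)

print(max(belief, key=belief.get))  # most likely current cognitive state
```

A real supervisory decoder would estimate these discrete modes jointly with continuous movement variables, which is what motivates the hybrid-system-identification formulation mentioned above.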

Latest revision as of 03:03, 12 January 2021


== Burdick Research Group: Robotics & BioEngineering ==

Our research group pursues both Robotics and BioEngineering related to spinal cord injury.  This page summarizes our current research efforts, links to recent papers, and summarizes past research efforts.
