 
__NOTOC__

<div style="text-align:center;"> <p style="font-size:x-large"> The Burdick Group Wiki Home Page <br> Robotics and Spinal Cord Therapy</p></div>
 
<br>

<br>
  
{| width=100% border=1 cellpadding=2 cellspacing=2
|- valign=top
| rowspan=2 |
== Burdick Research Group: Robotics & BioEngineering ==
* We are housed in [http://www.mce.caltech.edu/ Mechanical & Civil Engineering], [http://www.eas.caltech.edu/ Division of Engineering & Applied Science], [http://www.caltech.edu/ California Institute of Technology]

* Our research group pursues both Robotics and BioEngineering related to spinal cord injury. Below you can find summaries of our current research efforts, links to recent papers, and summaries of past research efforts.

* [[JoelBurdick | Prof. Joel Burdick's Homepage]]

* [[People | People in the Burdick Group]]
|}

== Current and Recent Research Topics ==

{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
=== DARPA Subterranean Challenge ===
We are part of Team CoSTAR (led by NASA/Jet Propulsion Laboratory, with partners MIT, KAIST, and LTU), competing in the Subterranean Challenge (www.subtchallenge.com). See [https://costar.jpl.nasa.gov/ the team's web site] for the latest information.
|
[[Image:UrbanCircuitTeam.png| 400px]]
|}

{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
| [[Image:SQUID1CAD.png|400px]]
| [[Image:SQUID2collage.png|200px]]
|
=== SQUID: Self-Quick-Unfolding Investigative Drone ===
A SQUID drone can be launched ballistically from a cannon or tube, unfold in mid-flight, and stabilize itself. To the left are a diagram of '''SQUID I''' and photographs of '''SQUID 2''' in the folded and unfolded states.
|}

{| width=100% border=1 cellpadding=2 cellspacing=2
|- valign=top
|
=== Preference Based Learning for Exoskeleton Personalization ===

In preference-based learning, the only feedback available to the learner is a human subject's relative preference between two different settings. In collaboration with Prof. [http://www.yisongyue.com/ Yisong Yue], we have been developing techniques for preference learning in both bandit and reinforcement learning (RL) settings. With the Ames Group, we have applied these preference-learning techniques to learning and optimizing the parameters of exoskeleton gaits so as to maximize user comfort.
|
[[Image:Exo.png|120px]]
|}
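As a toy illustration of this feedback model (not the group's actual algorithm), the sketch below simulates a dueling-bandit loop: a hypothetical subject compares two candidate gait settings at a time under an assumed logistic choice model, and the learner ranks settings purely from duel outcomes. The utilities, trial count, and Borda-style scoring are invented for the example.

```python
import math
import random

def preference_learning_demo(utilities, trials=2000, seed=0):
    """Toy dueling-bandit loop: the learner sees only pairwise preferences.

    utilities: hidden comfort score of each candidate gait setting;
    the learner never observes these numbers, only noisy duel outcomes.
    """
    rng = random.Random(seed)
    n = len(utilities)
    wins = [[0] * n for _ in range(n)]  # wins[i][j]: times i was preferred to j

    for _ in range(trials):
        i, j = rng.sample(range(n), 2)  # present two settings to the "subject"
        # Simulated subject: logistic choice model (an assumption of this sketch).
        p_prefer_i = 1.0 / (1.0 + math.exp(utilities[j] - utilities[i]))
        if rng.random() < p_prefer_i:
            wins[i][j] += 1
        else:
            wins[j][i] += 1

    # Borda-style score: fraction of duels each setting won.
    def win_rate(i):
        won = sum(wins[i])
        lost = sum(wins[j][i] for j in range(n))
        return won / (won + lost) if won + lost else 0.0

    return max(range(n), key=win_rate)

# Four hypothetical gait-parameter settings; index 2 is most comfortable.
best = preference_learning_demo([0.1, 0.5, 2.0, 0.3])
```

With enough duels the setting with the highest hidden utility accumulates the best win rate, even though each individual comparison is noisy; real preference-based gait optimization must also decide *which* pair to present next, which this uniform-sampling sketch ignores.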

{| width=100% border=1 cellpadding=2 cellspacing=2
|- valign=top
|
=== ''Axel'' and ''DuAxel'' Rovers for Extreme Planetary Terrains ===

Conventional robotic Martian explorers, such as Sojourner, Spirit, and Opportunity, have sufficient mobility to access roughly 60% of the Martian surface. However, some of the most interesting science targets lie in the currently inaccessible ''extreme terrains'': steep craters, overhangs, loose soil, and layered stratigraphy. Access to extreme terrains on other planets and moons is also of potential interest. In collaboration with JPL, we are developing the ''Axel'' and ''DuAxel'' rovers. ''Axel'' is a minimalist tethered robot that can ascend and descend vertical and steep slopes and can navigate over obstacles that are large relative to its body size. In the ''DuAxel'' configuration, two Axels dock with a ''central module'' to form a self-contained four-wheeled rover, which can then disassemble as needed to allow one or both Axels to descend into extreme terrain. The goal of this work is to develop and demonstrate the motion planning, novel mobility mechanisms, mobility analysis, and steep-terrain sampling technologies that would make Axel and DuAxel viable concepts for future scientific missions to extreme terrains.
|
[[Image:DuAxel.png|400px]]
|}

{| width=100% border=1 cellpadding=2 cellspacing=2
|- valign=top
|
[[Image:SteppingRobot.jpg| 260px]]
|
=== Locomotion Rehabilitation After Severe Spinal Cord Injury ===

More than 250,000 people in the U.S. live with a major Spinal Cord Injury (SCI), and more than 11,000 new injuries occur each year. Our lab collaborates with Prof. Reggie Edgerton at UCLA to develop new therapies and technologies that we hope will one day enable patients with SCI to partially or fully recover the ability to walk. Currently, we focus on these topics:
|}
 
{| width=100% border=1 cellpadding=2 cellspacing=2
|- valign=top
|
=== Recent Papers ===
|}

== Past Research Topics ==

Here are some recent research topics that were actively pursued in our group.

{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
=== DARPA Autonomous Robot Manipulation Software (ARMS) ===
|
[[Image:DARPA_TwoArm.png | 150px]]
|}

{| width=100% border=1 cellpadding=2 cellspacing=2 valign=top
|
[[Image:RoboProbe.png | 165px]]
|
=== Neural Prosthetics and Brain-Machine Interfaces ===
A neural prosthesis is a ''direct brain interface'' that enables a human, via surgically implanted electrode arrays and associated computer decoding algorithms, to control external electromechanical devices by thought alone. In this manner, some useful motor functions that have been lost through disease or accident can be partially restored. Our lab collaborates with the laboratories of Prof. Richard Andersen and Prof. Y.C. Tai to develop neural prostheses and brain-machine interfaces. Our group focuses on these particular issues:

* '''Autonomously positioned (robotic) neural recording electrodes.''' To optimize the quality of the neural signal recorded by an extracellular electrode, the active recording site must be positioned very close to the neural cell body (within 30 microns, and preferably within a few microns of the soma). However, due to blood-pressure variations, breathing, and mechanical shocks, the electrode-soma geometry varies significantly over time. We have developed algorithms that allow an actuated electrode to autonomously reposition itself in real time to maintain high-quality neural recordings.

* '''Neural decoding algorithms.''' A ''decoding'' algorithm attempts to decode, or decipher, the intent of a paralyzed neural prosthetic user from the recorded electrode signals. Neural decoding has become a well-developed subject. We have chosen to explore the concept of a ''supervisory decoder'', whose aim is to estimate the current cognitive and planning state of the prosthetic user: Is the user awake? Do they want to use the prosthetic? Are they currently planning a movement? Do they want to execute the plan? Do they want to change or scrub the current prosthetic action? We formulate the design of a supervisory decoder as a problem in hybrid system identification.
|}
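The supervisory-decoder idea can be illustrated with a toy discrete Bayesian filter over a handful of cognitive states. Everything below is invented for the example (the state set, transition probabilities, and a single binary observation standing in for planning-related neural activity) and is not the group's actual model, which involves hybrid system identification over real electrode data.

```python
# Illustrative user states a supervisory decoder might track.
STATES = ["idle", "planning", "executing", "scrubbing"]

# Hypothetical transition probabilities between cognitive states.
TRANS = {
    "idle":      {"idle": 0.90, "planning": 0.10, "executing": 0.00, "scrubbing": 0.00},
    "planning":  {"idle": 0.05, "planning": 0.75, "executing": 0.20, "scrubbing": 0.00},
    "executing": {"idle": 0.10, "planning": 0.00, "executing": 0.80, "scrubbing": 0.10},
    "scrubbing": {"idle": 0.50, "planning": 0.40, "executing": 0.00, "scrubbing": 0.10},
}

# Hypothetical emission model: probability of seeing "high" activity in a
# planning-related neural feature, given each state.
EMIT_HIGH = {"idle": 0.1, "planning": 0.8, "executing": 0.6, "scrubbing": 0.4}

def filter_step(belief, obs_high):
    """One predict/update step of discrete Bayesian filtering."""
    # Predict: push the belief through the transition model.
    pred = {s: sum(belief[r] * TRANS[r][s] for r in STATES) for s in STATES}
    # Update: weight each state by the likelihood of the observation.
    post = {s: pred[s] * (EMIT_HIGH[s] if obs_high else 1.0 - EMIT_HIGH[s])
            for s in STATES}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

belief = {s: 1.0 / len(STATES) for s in STATES}  # start uninformed
for obs in [True, True, True]:  # a burst of planning-related activity
    belief = filter_step(belief, obs)
most_likely = max(belief, key=belief.get)
```

After a few consecutive high-activity observations the filter's belief concentrates on the planning state; a real supervisory decoder faces the harder problems of learning the model structure and parameters from data, which is where the hybrid-system-identification formulation comes in.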

Latest revision as of 03:03, 12 January 2021