SuperBaxter: Next Generation Collaborative Robot for Manufacturing
SuperBaxter is a collaboration among the NSF RoSe-HUB Center, Rethink Robotics, and Barrett Technology to
develop a next-generation collaborative robot for manufacturing and assistive applications. The robot combines
programming by demonstration, human activity recognition, recognition and display of human facial emotion,
recognition and display of human gestures, and natural language understanding and generation, allowing it to
engage in both verbal and non-verbal dialog with humans.
The output of the GBP system is the executable program for performing the demonstrated task on the target hardware. This
program consists of a network of encapsulated expertise agents of two flavors. The primary agents implement the primitives
required to perform the task and come from the pool of primitives represented in the skill base. The secondary set of agents
includes many of the same gesture recognition and interpretation agents used during the demonstration. These agents perform
on-line observation of the human to allow supervised practicing of the task for further adaptation.
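The two-tier agent network described above might be sketched as follows. This is a minimal illustration, not the actual GBP implementation: the class names (`ExpertiseAgent`, `ObserverAgent`, `GBPProgram`) and the state-dictionary interface are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a GBP output program: a network of primary
# agents (task primitives from the skill base) plus secondary agents
# that observe the human during supervised practice. Illustrative only.

@dataclass
class ExpertiseAgent:
    """Primary agent: encapsulates a sensorimotor primitive."""
    name: str
    execute: Callable[[Dict], Dict]  # maps world state -> updated state

@dataclass
class ObserverAgent:
    """Secondary agent: interprets the human's gestures as feedback."""
    name: str
    interpret: Callable[[Dict], str]  # maps raw observation -> feedback label

@dataclass
class GBPProgram:
    primitives: List[ExpertiseAgent] = field(default_factory=list)
    observers: List[ObserverAgent] = field(default_factory=list)

    def run(self, state: Dict, observation: Dict) -> Dict:
        # Execute primitives in sequence; after each step, the observer
        # agents interpret the human's feedback for later adaptation.
        for agent in self.primitives:
            state = agent.execute(state)
            for obs in self.observers:
                state.setdefault("feedback", []).append(obs.interpret(observation))
        return state

# Example: a two-primitive pick-and-place program (hypothetical).
grasp = ExpertiseAgent("grasp", lambda s: {**s, "holding": True})
place = ExpertiseAgent("place", lambda s: {**s, "holding": False, "placed": True})
watch = ObserverAgent("gesture_watch", lambda o: "ok" if o.get("nod") else "retry")

program = GBPProgram(primitives=[grasp, place], observers=[watch])
final = program.run({"holding": False}, {"nod": True})
```

In this sketch the primary agents transform a shared state while the secondary agents append feedback labels, mirroring how the real system pairs executable primitives with the gesture-interpretation agents reused from the demonstration phase.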
Link to Demo
Gesture-Based Programming
Gesture-Based Programming (GBP) is a form of programming by human
demonstration that rests on an expansive definition of gestures, extending well beyond traditional hand motions.
The process begins by observing a human demonstrate the task to be programmed.
Observation of the human's hand and fingertips is achieved through a sensorized glove with
special tactile fingertips. The modular glove system senses hand pose, finger joint angles, and fingertip contact conditions. Objects in the environment are sensed with computer vision while a speech recognition system extracts "articulatory gestures."
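A rough illustration of how these sensing channels might be fused into a single observation and mapped to a primitive gesture class is sketched below. The field names, gesture labels, and classification rules are assumptions for the example, not the actual glove or vision interface.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical fused observation combining the sensorized glove,
# computer vision, and speech recognition channels described above.

@dataclass
class GloveSample:
    hand_pose: Tuple[float, float, float]   # palm position (m), illustrative
    joint_angles: List[float]               # finger joint angles (rad)
    fingertip_contact: List[bool]           # per-fingertip contact flags

@dataclass
class Observation:
    glove: GloveSample
    visible_objects: List[str]   # object labels from the vision system
    utterance: str               # text from the speech recognizer

def classify_gesture(obs: Observation) -> str:
    """Toy primitive-gesture classifier over a fused observation."""
    # Contact on any fingertip while an object is in view -> "grasp";
    # a spoken command is treated as an "articulatory gesture";
    # otherwise default to a free-space "reach".
    if any(obs.glove.fingertip_contact) and obs.visible_objects:
        return "grasp"
    if obs.utterance:
        return "speak:" + obs.utterance
    return "reach"

sample = Observation(
    glove=GloveSample((0.3, 0.1, 0.2), [0.6] * 10, [True, True, False, False, False]),
    visible_objects=["peg"],
    utterance="",
)
```

Each classified gesture would then be handed to the interpretation network rather than acted on directly.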
Primitive gesture classes are extracted from the raw sensor information and passed on to a
gesture interpretation network. The agents in this network extract the demonstrator's intentions based upon the knowledge they
have previously stored in the system's skill library from prior demonstrations. Like a self-aware human trainee, the system is
able to generate an abstraction of the demonstrated task, mapped onto its own skills. In other words, the system is not merely
remembering everything the human does, but is trying to understand -- within its scope of expertise -- the subtasks the human
is performing ("gesturing"). These primitive capabilities in the skill base take the form of encapsulated expertise agents -- semi-autonomous agents that encode sensorimotor primitives and low-level skills for later execution.
Demos
RoSe-HUB Publications
Prior Publications
Additional Support
This work is supported by the NSF Center for Robots and Sensors for the Human Well-Being through CNS-1439717 with
additional support from an NSF MRI grant, CNS-1427872.
Copyright: © 2017, 2018 by Richard Voyles