
DYNAMIXEL in Research

A curated collection of peer-reviewed research, theses, and academic projects using DYNAMIXEL smart actuators.


Olaf: Bringing an Animated Character to Life in the Physical World

Authors: David Müller, Espen Knoop, Dario Mylonopoulos, Agon Serifi, Michael A. Hopkins, Ruben Grandia, Moritz Bächer
Institution: Disney Research
Year: 2025 · Field: Animatronics

Animated characters often move in non-physical ways and have proportions that are far from a typical walking robot. This provides an ideal platform for innovation in both mechanical design and stylized motion control. In this paper, we bring Olaf to life in the physical world, relying on reinforcement learning guided by animation references for control. To create the illusion of Olaf's feet moving along his body, we hide two asymmetric legs under a soft foam skirt. To fit actuators inside the character, we use spherical and planar linkages in the arms, mouth, and eyes. Because the walk cycle results in harsh contact sounds, we introduce additional rewards that noticeably reduce impact noise. The large head, driven by small actuators in the character's slim neck, creates a risk of overheating, amplified by the costume. To keep actuators from overheating, we feed temperature values as additional inputs to policies, introducing new rewards to keep them within bounds. We validate the efficacy of our modeling in simulation and on hardware, demonstrating an unmatched level of believability for a costumed robotic character. Powered By DYNAMIXEL.
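The temperature-aware rewards described above can be illustrated with a simple penalty term that activates as any actuator approaches its thermal bound. The limit values, names, and linear ramp here are hypothetical, not taken from the paper:

```python
def temperature_reward(temps_c, soft_limit_c=70.0, hard_limit_c=80.0):
    """Penalty that grows as any actuator temperature nears its bound.

    temps_c: iterable of actuator temperatures in deg C (also fed to the
    policy as observations). Returns 0 while all temperatures stay below
    the soft limit, ramping toward -1 per joint at the hard limit.
    """
    penalty = 0.0
    for t in temps_c:
        if t > soft_limit_c:
            # linear ramp: 0 at the soft limit, 1 at the hard limit, clamped
            overshoot = min((t - soft_limit_c) / (hard_limit_c - soft_limit_c), 1.0)
            penalty -= overshoot
    return penalty
```

In practice such a term would be added, with a weight, to the animation-tracking reward so the policy learns to trade stylized motion against heat buildup.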

View Paper


CHILD (Controller for Humanoid Imitation and Live Demonstration):
a Whole-Body Humanoid Teleoperation System

Authors: Noboru Myers, Obin Kwon, Sankalp Yamsani, and Joohyung Kim
Institution: University of Illinois Urbana-Champaign
Year: 2025 · Field: Teleoperation

Abstract— Recent advances in teleoperation have demonstrated robots performing complex manipulation tasks. However, existing works rarely support whole-body joint-level teleoperation for humanoid robots, limiting the diversity of tasks that can be accomplished. This work presents Controller for Humanoid Imitation and Live Demonstration (CHILD), a compact reconfigurable teleoperation system that enables joint-level control over humanoid robots. CHILD fits within a standard baby carrier, allowing the operator control over all four limbs, and supports both direct joint mapping for full-body control and loco-manipulation. Adaptive force feedback is incorporated to enhance the operator experience and prevent unsafe joint movements. We validate the capabilities of this system by conducting loco-manipulation and full-body control demonstrations on a humanoid robot and multiple dual-arm systems. Lastly, we open-source the hardware design to promote accessibility and reproducibility. Powered By DYNAMIXEL.

View Paper


RUKA: Rethinking the Design of Humanoid Hands with Learning

Authors: Anya Zorin, Irmak Guzey, Billy Yan, Aadhithya Iyer, Lisa Kondrich, Nikhil X. Bhattasali, Lerrel Pinto
Institution: New York University
Year: 2025 · Field: Manipulation

Abstract—Dexterous manipulation is a fundamental capability for robotic systems, yet progress has been limited by hardware trade-offs between precision, compactness, strength, and affordability. Existing control methods impose compromises on hand designs and applications. However, learning-based approaches present opportunities to rethink these trade-offs, particularly to address challenges with tendon-driven actuation and low-cost materials. This work presents RUKA, a tendon-driven humanoid hand that is compact, affordable, and capable. Made from 3D-printed parts and off-the-shelf components, RUKA has 5 fingers with 15 underactuated degrees of freedom enabling diverse human-like grasps. Its tendon-driven actuation allows powerful grasping in a compact, human-sized form factor. To address control challenges, we learn joint-to-actuator and fingertip-to-actuator models from motion-capture data collected by the MANUS glove, leveraging the hand’s morphological accuracy. Extensive evaluations demonstrate RUKA’s superior reachability, durability, and strength compared to other robotic hands. Teleoperation tasks further showcase RUKA’s dexterous movements. Powered By DYNAMIXEL.

View Paper


SpeechCompass: Enhancing Mobile Captioning with Diarization
and Directional Guidance via Multi-Microphone Localization

Authors: Artem Dementyev, Dimitri Kanevsky, Samuel J. Yang, Mathieu Parvaix, Chiong Lai, Alex Olwal
Institution: Google Research
Year: 2025 · Field: Assistive Technology

Abstract
Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. The value of diarization and visualizing localization was consistent across participants, with everyone agreeing on the value and potential of directional guidance for group conversations. Powered By DYNAMIXEL.
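The core of multi-microphone localization is estimating the time delay of arrival between microphone pairs. A common approach (not necessarily the paper's exact algorithm) is GCC-PHAT; this NumPy sketch estimates the delay between two channels:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay of `sig` relative to `ref` via GCC-PHAT.

    Returns the delay in seconds; positive means `sig` lags `ref`.
    """
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12           # phase transform: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    # re-center so index max_shift corresponds to zero lag
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs
```

With a known microphone spacing d and speed of sound c, a bearing estimate then follows from θ = arcsin(c·τ/d), which is the kind of angle a directional arrow in the UI could display.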

View Paper


A Robotic Hand Surpassing Human Capabilities in Dexterity and Functionality

Authors: Xiao Gao, Kunpeng Yao, Kai Junge, Josie Hughes, Aude Gemma Billard
Institution: EPFL
Year: 2025 · Field: Robotic Hands

The human hand is often viewed as the pinnacle of dexterity. Yet, its asymmetric shape and unique thumb largely limit its dexterity. Artificial and robotic hands have often departed from anthropomorphic design. With two or three fingers distributed uniformly, traditional industrial robotic hands aimed at preserving symmetry and facilitating manipulation. They remain, however, extremely far from human dexterity, capable solely of pick-and-place tasks, one object at a time. We present a reversible robotic hand that tackles this challenge by uniting multi-object grasping and crawling locomotion in a single device. The hand employs an identical-finger, symmetric design optimized through a multi-layer framework that combines optimization (for exploring diverse grasp configurations) with constraint-based methods. As a result, the hand can detach from the arm, crawl to retrieve multiple objects beyond normal reach, and reattach while securely holding them. This integrated approach expands conventional manipulative capabilities, enabling tasks that surpass human and traditional robotic hands in certain scenarios, such as grasping multiple items while simultaneously walking, or using a single hand in place of two for manipulating tools. By bridging the gap between stationary manipulation and autonomous mobility, our design opens new possibilities for industrial, service, and exploratory robotic applications. Powered By DYNAMIXEL.

View Paper


Flying Hand: End-Effector-Centric Framework for Versatile Aerial
Manipulation Teleoperation and Policy Learning

Authors: Guanqi He, Xiaofeng Guo, Luyi Tang, Yuanhang Zhang, Mohammadreza Mousaei, Jiahe Xu, Junyi Geng, Sebastian Scherer, Guanya Shi
Institution: Carnegie Mellon University and Pennsylvania State University
Year: 2025 · Field: Aerial

Abstract—Aerial manipulation has recently attracted increasing interest from both industry and academia. Previous approaches have demonstrated success in various specific tasks. However, their hardware design and control frameworks are often tightly coupled with task specifications, limiting the development of cross-task and cross-platform algorithms. Inspired by the success of robot learning in tabletop manipulation, we propose a unified aerial manipulation framework with an end-effector-centric interface that decouples high-level platform-agnostic decision-making from task-agnostic low-level control. Our framework consists of a fully-actuated hexarotor with a 4-DoF robotic arm, an end-effector-centric whole-body model predictive controller, and a high-level policy. The high-precision end-effector controller enables efficient and intuitive aerial teleoperation for versatile tasks and facilitates the development of imitation learning policies. Real-world experiments show that the proposed framework significantly improves end-effector tracking accuracy, and can handle multiple aerial teleoperation and imitation learning tasks, including writing, peg-in-hole, pick and place, changing light bulbs, etc. We believe the proposed framework provides one way to standardize and unify aerial manipulation into the general manipulation community and to advance the field. Powered By DYNAMIXEL.

View Paper


RoboCup Rescue 2025 Team Description Paper UruBots

Authors: Kevin Farias, Pablo Moraes, Igor Nunes, Juan Deniz, Sebastian Barcelona, Hiago Sodre, William Moraes, Monica Rodriguez, Ahilen Mazondo, Vincent Sandin, Gabriel da Silva, Victoria Saravia, Vinicio Melgar, Santiago Fernandez, Ricardo Grando
Institution: Technological University of Uruguay
Year: 2025 · Field: Autonomous SAR Vehicles

Abstract—This paper describes the approach used by Team UruBots for participation in the 2025 RoboCup Rescue Robot League competition. Our team aims to participate for the first time in this competition at RoboCup, using experience learned from previous competitions and research. We present our vehicle and our approach to tackle the task of detecting and finding victims in search and rescue environments. Our approach contains known topics in robotics, such as ROS, SLAM, Human Robot Interaction and segmentation and perception. Our proposed approach is open source, available to the RoboCup Rescue community, where we aim to learn and contribute to the league. Powered By DYNAMIXEL.

View Paper


Design of a low-cost and lightweight 6 DoF bimanual arm for dynamic and contact-rich manipulation

Authors: Jaehyung Kim, Jiho Kim, Dongryung Lee, Yujin Jang, Beomjoon Kim
Institution: Graduate School of AI, KAIST and Seoultech University
Year: 2025 · Field: Manipulator

Abstract—Dynamic and contact-rich object manipulation, such as striking, snatching, or hammering, remains challenging for robotic systems due to hardware limitations. Most existing robots are constrained by high-inertia design, limited compliance, and reliance on expensive torque sensors. To address this, we introduce ARMADA (Affordable Robot for Manipulation and Dynamic Actions), a 6 degrees-of-freedom bimanual robot designed for dynamic manipulation research. ARMADA combines low-inertia, back-drivable actuators with a lightweight design, using readily available components and 3D-printed links for ease of assembly in research labs. The entire system, including both arms, is built for just $6,100. Each arm achieves speeds up to 6.16 m/s, almost twice that of most collaborative robots, with a comparable payload of 2.5 kg. We demonstrate ARMADA can perform dynamic manipulation like snatching, hammering, and bimanual throwing in real-world environments. We also showcase its effectiveness in reinforcement learning (RL) by training a non-prehensile manipulation policy in simulation and transferring it zero-shot to the real world, as well as human motion shadowing for dynamic bimanual object throwing. Powered By DYNAMIXEL.

View Paper


ORCA: An Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic Hand for Uninterrupted Dexterous Task Learning

Authors: Clemens C. Christoph, Maximilian Eberlein, Filippos Katsimalis, Arturo Roberti, Aristotelis Sympetheros, Michel R. Vogt, Davide Liconti, Chenyu Yang, Barnabas Gavin Cangan, Ronan J. Hinchet, Robert K. Katzschmann
Institution: ETH Zürich, Switzerland
Year: 2025 · Field: Robotic Hand

Abstract— General-purpose robots should possess human-like dexterity and agility to perform tasks with the same versatility as us. A human-like form factor further enables the use of vast datasets of human-hand interactions. However, the primary bottleneck in dexterous manipulation lies not only in software but arguably even more in hardware. Robotic hands that approach human capabilities are often prohibitively expensive, bulky, or require enterprise-level maintenance, limiting their accessibility for broader research and practical applications. What if the research community could get started with reliable dexterous hands within a day? We present the open-source ORCA hand, a reliable and anthropomorphic 17-DoF tendon-driven robotic hand with integrated tactile sensors, fully assembled in less than eight hours and built for a material cost below 2,000 CHF. We showcase ORCA’s key design features such as popping joints, auto-calibration, and tensioning systems that significantly reduce complexity while increasing reliability, accuracy, and robustness. We benchmark the ORCA hand across a variety of tasks, ranging from teleoperation and imitation learning to zero-shot sim-to-real reinforcement learning. Furthermore, we demonstrate its durability, withstanding more than 10,000 continuous operation cycles (equivalent to approximately 20 hours) without hardware failure, the only constraint being the duration of the experiment itself. Powered By DYNAMIXEL.

View Paper


Friction-Scaled Vibrotactile Feedback for Real-Time Slip Detection in Manipulation using Robotic Sixth Finger

Authors: Naqash Afzal, Basma Hasanen, Lakmal Seneviratne, Oussama Khatib, Irfan Hussain
Institution: Khalifa University and Stanford Robotics Laboratory
Year: 2025 · Field: Haptic Feedback, Wearable Robotics

Abstract

The integration of extra-robotic limbs/fingers to enhance and expand motor skills, particularly for grasping and manipulation, poses significant challenges. The grasping performance of existing limbs/fingers is far inferior to that of human hands. Human hands can detect the onset of slip through tactile feedback originating from tactile receptors during the grasping process, enabling precise and automatic regulation of grip force. This grip force is scaled by the coefficient of friction between the contacting surface and the fingers. Humans perceive this frictional information from the slip occurring between the finger and the object. This ability to perceive friction allows humans to apply just the right amount of force needed to maintain a secure grip, adjusting based on the weight of the object and the friction of the contact surface. Enhancing this capability in extra-robotic limbs or fingers used by humans is challenging. To address this challenge, this paper introduces a novel approach to communicate frictional information to users through encoded vibrotactile cues. These cues are conveyed at the onset of incipient slip, allowing the users to perceive the friction and ultimately use this information to increase the force to avoid dropping the object. In a 2-alternative forced-choice protocol, participants gripped and lifted a glass under three different frictional conditions, applying a normal force of 3.5 N. After reaching this force, the glass was gradually released to induce slip. During this slipping phase, vibrations scaled according to the static coefficient of friction were presented to users, reflecting the frictional conditions. The results suggested an accuracy of 94.53±3.05 (mean±SD) in perceiving frictional information upon lifting objects with varying friction. The results indicate the effectiveness of vibrotactile cues for sensory feedback, allowing users of extra-robotic limbs or fingers to perceive frictional information. This enables them to assess surface properties and adjust grip force according to the frictional conditions, enhancing their ability to grasp and manipulate objects more effectively. Powered By DYNAMIXEL.
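As an illustration of how a slip-triggered cue might be scaled by friction, here is a minimal mapping from static friction coefficient to vibration drive amplitude. The ranges and the direction of the mapping (lower friction, i.e. a slippier surface, producing a stronger cue) are assumptions for the sketch, not the paper's calibration:

```python
def vibration_amplitude(mu_static, mu_min=0.2, mu_max=1.0,
                        amp_min=0.1, amp_max=1.0):
    """Map a static friction coefficient to a vibrotactile amplitude.

    Assumption: lower friction -> stronger cue, prompting the user to
    grip harder. Linear mapping, clamped to the calibrated mu range.
    """
    mu = min(max(mu_static, mu_min), mu_max)
    frac = (mu_max - mu) / (mu_max - mu_min)   # 1 at mu_min, 0 at mu_max
    return amp_min + frac * (amp_max - amp_min)
```

Such a function would run only while incipient slip is detected, so the cue doubles as a slip alarm whose intensity encodes the surface condition.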

View Paper


SOC and Temperature Aware Battery Swapping for an E-Scooter Using a Robotic Arm

Authors: Abeer Daoud, Habibur Rehman, Lotfi Romdhane, Shayok Mukhopadhyay
Institution: The American University of Sharjah
Year: 2025 · Field: Manipulator, Self Service

Abstract

The main contribution of this paper is the integration of a battery management system (BMS) to ensure safe battery operation and automated battery swapping for an electric scooter (e-scooter). The BMS constantly monitors the battery state of charge (SOC) and temperature, and initiates battery swapping under predefined conditions. This is crucial because a conventional BMS sometimes fails to detect early signs of potential issues, leading to safety hazards if not addressed promptly. Battery swapping stations are an effective solution, offering an alternative to traditional charging stations by addressing the issue of lengthy charging times. This paper also addresses the problem of frequent battery recharging, which limits e-scooters’ operational range. The proposed solution employs a robotic arm to execute battery swaps without human intervention. A computer vision system detects the e-scooter’s battery and compensates for any tilt in a parked e-scooter to ensure accurate alignment, enabling the robotic arm to efficiently plan and execute the battery swap. The proposed system requires minimal modifications to the existing e-scooter design, incorporating a specifically designed battery compartment, thus offering significant improvements over manual swapping methods. Powered By DYNAMIXEL.

View Paper


Exploring GPT-4 for Robotic Agent Strategy with Real-Time State Feedback and a Reactive Behaviour Framework

Authors: Thomas O’Brien, Ysobel Sims
Institution: University of Newcastle
Year: 2025 · Field: AI, LLM

Abstract

We explore the use of GPT-4 on a humanoid robot in simulation and the real world as proof of concept of a novel large language model (LLM) driven behaviour method. LLMs have shown the ability to perform various tasks, including robotic agent behaviour. The problem involves prompting the LLM with a goal, and the LLM outputs the sub-tasks to complete to achieve that goal. Previous works focus on the executability and correctness of the LLM’s generated tasks. We propose a method that successfully addresses practical concerns around safety, transitions between tasks, time horizons of tasks and state feedback. In our experiments we have found that our approach produces output for feasible requests that can be executed every time, with smooth transitions. User requests are achieved most of the time across a range of goal time horizons. Powered By DYNAMIXEL.

View Paper


A Social Robot for Anxiety Reduction via Deep Breathing

Authors: Kayla Matheus, Marynel Vázquez, Brian Scassellati
Institution: Yale University
Year: 2025 · Field: Assistive Robots

Abstract— In this paper, we introduce Ommie, a novel robot that supports deep breathing practices for the purposes of anxiety reduction. The robot’s primary function is to guide users through a series of extended inhales, exhales, and holds by way of haptic interactions and audio cues. We present core design decisions during development, such as robot morphology and tactility, as well as the results of a usability study in collaboration with a local wellness center. Interacting with Ommie resulted in a significant reduction in STAI-6 anxiety measures, and participants found the robot intuitive, approachable, and engaging. Participants also reported feelings of focus and companionship when using the robot, often elicited by the haptic interaction. These results show promise in the robot’s capacity for supporting mental health. Powered By DYNAMIXEL.

View Paper


Vision-Guided Loco-Manipulation with a Snake Robot

Authors: Adarsh Salagame, Sasank Potluri, Keshav Bharadwaj Vaidyanathan, Kruthika Gangaraju, Eric Sihite, Milad Ramezani, Alireza Ramezani
Institution: Northeastern University
Year: 2025 · Field: Loco-Manipulation, Real-Time Object Detection and Reaction Based Tasks

Abstract— This paper presents the development and integration of a vision-guided loco-manipulation pipeline for Northeastern University’s snake robot, COBRA. The system leverages a YOLOv8-based object detection model and depth data from an onboard stereo camera to estimate the 6-DOF pose of target objects in real time. We introduce a framework for autonomous detection and control, enabling closed-loop loco-manipulation for transporting objects to specified goal locations. Additionally, we demonstrate open-loop experiments in which COBRA successfully performs real-time object detection and loco-manipulation tasks. Powered By DYNAMIXEL.
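The translation component of such a pose estimate is typically obtained by back-projecting the detection's pixel location, together with the stereo depth, through the pinhole camera model. This sketch is a generic illustration of that step; the intrinsics are hypothetical, not COBRA's calibration:

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into camera coordinates.

    (fx, fy, cx, cy) are pinhole intrinsics (focal lengths and principal
    point, in pixels); returns (X, Y, Z) in metres, camera frame.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```

Feeding the center of a YOLO bounding box plus the depth at that pixel through this function gives the object's 3-D position; recovering the full 6-DOF pose additionally requires an orientation estimate.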

View Paper


RoboPanoptes: The All-seeing Robot with Whole-body Dexterity

Authors: Xiaomeng Xu, Dominik Bauer, Shuran Song
Institution: Columbia University and Stanford University
Year: 2025 · Field: Dexterity, Manipulation

Abstract— We present RoboPanoptes, a capable yet practical robot system that achieves whole-body dexterity through whole-body vision. Its whole-body dexterity allows the robot to utilize its entire body surface for manipulation, such as leveraging multiple contact points or navigating constrained spaces. Meanwhile, whole-body vision uses a camera system distributed over the robot’s surface to provide comprehensive, multi-perspective visual feedback of its own and the environment’s state. At its core, RoboPanoptes uses a whole-body visuomotor policy that learns complex manipulation skills directly from human demonstrations, efficiently aggregating information from the distributed cameras while maintaining resilience to sensor failures. Together, these design aspects unlock new capabilities and tasks, allowing RoboPanoptes to unbox in narrow spaces, sweep multiple or oversized objects, and succeed in multi-step stowing in cluttered environments, outperforming baselines in adaptability and efficiency. Powered By DYNAMIXEL.

View Paper


A bio-inspired sand-rolling robot: effect of body shape on sand rolling performance

Authors: Xingjue Liao, Wenhao Liu, Hao Wu, Feifei Qian
Institution: University of Southern California
Year: 2025 · Field: Biomechanics, Locomotion

Abstract—The capability of effectively moving on complex terrains such as sand and gravel can empower our robots to robustly operate in outdoor environments, and assist with critical tasks such as environment monitoring, search-and-rescue, and supply delivery. Inspired by the Mount Lyell salamander’s ability to curl its body into a loop and effectively roll down hill slopes, in this study we develop a sand-rolling robot and investigate how its locomotion performance is governed by the shape of its body. We experimentally tested three different body shapes: Hexagon, Quadrilateral, and Triangle. We found that Hexagon and Triangle can achieve a faster rolling speed on sand, but exhibited more frequent failures of getting stuck. Analysis of the interaction between robot and sand revealed the failure mechanism: the deformation of the sand produced a local “sand incline” underneath robot contact segments, increasing the effective region of supporting polygon (ERSP) and preventing the robot from shifting its center of mass (CoM) outside the ERSP to produce sustainable rolling. Based on this mechanism, a highly-simplified model successfully captured the critical body pitch for each rolling shape to produce sustained rolling on sand, and informed design adaptations that mitigated the locomotion failures and improved robot speed by more than 200%. Our results provide insights into how locomotors can utilize different morphological features to achieve robust rolling motion across deformable substrates. Powered By DYNAMIXEL.

View Paper


Vision-Ultrasound Robotic System based on Deep Learning for Gas and Arc Hazard Detection in Manufacturing

Authors: Jin-Hee Lee, Dahyun Nam, Robin Inho Kee, YoungKey Kim, Seok-Jun Buu
Institution: Gyeongsang National University, Seoul National University, University of Michigan, and SM Instruments Inc.
Year: 2025 · Field: Deep Learning, Inspection

Abstract

Gas leaks and arc discharges present significant risks in industrial environments, requiring robust detection systems to ensure safety and operational efficiency. Inspired by human protocols that combine visual identification with acoustic verification, this study proposes a deep learning-based robotic system for autonomously detecting and classifying gas leaks and arc discharges in manufacturing settings. The system is designed to execute all experimental tasks (A, B, C, D) entirely onboard the robot without external computation, demonstrating its capability for fully autonomous operation. Utilizing a 112-channel acoustic camera operating at a 96 kHz sampling rate to capture ultrasonic frequencies, the system processes real-world datasets recorded in diverse industrial scenarios. These datasets include multiple gas leak configurations (e.g., pinhole, open end) and partial discharge types (Corona, Surface, Floating) under varying environmental noise conditions. The proposed system integrates YOLOv5 for visual detection and a beamforming-enhanced acoustic analysis pipeline. Signals are transformed using the Short-Time Fourier Transform (STFT) and refined through gamma correction, enabling robust feature extraction. An Inception-inspired convolutional neural network further classifies hazards, achieving 99% gas leak detection accuracy. The system not only detects individual hazard sources but also enhances classification reliability by fusing multi-modal data from both vision and acoustic sensors. When tested in reverberation and noise-augmented environments, the system outperformed conventional models by up to 44 percentage points, with experimental tasks designed to ensure fairness and reproducibility. Additionally, the system is optimized for real-time deployment, maintaining an inference time of 2.1 seconds on a mobile robotic platform. By emulating human-like inspection protocols and integrating vision with acoustic modalities, this study presents an effective solution for industrial automation, significantly improving safety and operational reliability. Powered By DYNAMIXEL.
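The STFT-plus-gamma-correction front end described above can be sketched in NumPy. The window size, hop, and gamma value here are illustrative choices, not the paper's settings:

```python
import numpy as np

def stft_gamma_features(signal, win=256, hop=128, gamma=0.3):
    """Magnitude STFT with power-law (gamma) compression.

    Gamma < 1 boosts low-energy bins relative to strong ones, which
    helps bring out faint signatures before classification.
    Returns an array of shape (freq_bins, time_frames) in [0, 1].
    """
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    spec = np.array(frames).T            # (freq_bins, time_frames)
    spec /= spec.max() + 1e-12           # normalise to [0, 1]
    return spec ** gamma                 # gamma correction
```

The resulting spectrogram-like feature map is the kind of 2-D input an Inception-style CNN would consume.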

View Paper


Prismatic-Bending Transformable (PBT) Joint for a Modular, Foldable Manipulator with Enhanced Reachability and Dexterity

Authors: Jianshu Zhou, Junda Huang, Boyuan Liang, Xiang Zhang, Xin Ma, Masayoshi Tomizuka
Institution: UC Berkeley
Year: 2025 · Field: Manipulation, Dexterity 

Abstract— Robotic manipulators, traditionally designed with classical joint-link articulated structures, excel in industrial applications but face challenges in human-centered and general-purpose tasks requiring greater dexterity and adaptability. Addressing these limitations, we introduce the Prismatic-Bending Transformable (PBT) Joint, a novel design inspired by the scissors mechanism, enabling transformable kinematic chains. Each PBT joint module provides three degrees of freedom (bending, rotation, and elongation/contraction), allowing scalable and reconfigurable assemblies to form diverse kinematic configurations tailored to specific tasks. This innovative design surpasses conventional systems, delivering superior flexibility and performance across various applications. We present the design, modeling, and experimental validation of the PBT joint, demonstrating its integration into modular and foldable robotic arms. The PBT joint functions as a single SKU, enabling manipulators to be constructed entirely from standardized PBT joints without additional customized components. It also serves as a modular extension for existing systems, such as wrist modules, streamlining design, deployment, transportation, and maintenance. Three sizes (large, medium, and small) have been developed and integrated into robotic manipulators, highlighting their enhanced dexterity, reachability, and adaptability for manipulation tasks. This work represents a significant advancement in robotic design, offering scalable and efficient solutions for dynamic and unstructured environments. Powered By DYNAMIXEL.

View Paper


Tendon-driven Grasper Design for Aerial Robot Perching on Tree Branches

Authors: Haichuan Li, Ziang Zhao, Ziniu Wu, Parth Potdar, Long Tran, Ali Tahir Karasahin, Shane Windsor, Stephen G. Burrow, Basaran Bahadir Kocer
Institution: University of Bristol, University of Cambridge, Necmettin Erbakan University
Year: 2025 · Field: Aerial Robotics

Abstract— Protecting and restoring forest ecosystems has become an important conservation issue. Although various robots have been used for field data collection to protect forest ecosystems, complex terrain and dense canopy make data collection less efficient. To address this challenge, an aerial platform with bio-inspired behaviour facilitated by a bio-inspired mechanism is proposed. The platform spends minimal energy during data collection by perching on tree branches. A raptor-inspired vision algorithm is used to locate a tree trunk, and then a horizontal branch on which the platform can perch is identified. A tendon-driven mechanism inspired by bat claws, which requires energy only for actuation, secures the platform onto the branch using the mechanism’s passive compliance. Experimental results show that the mechanism can perform perching on branches ranging from 30 mm to 80 mm in diameter. The real-world tests validated the system’s ability to select and adapt to target points, and it is expected to be useful in complex forest ecosystems. Powered By DYNAMIXEL.

View Paper


Kinematics Control of Continuum Robots Based on Screw Theory

Authors: Saeedeh Shekari, Arman Gholibeikian, S. Ali A. Moosavian
Institution: Center of Excellence in Robotics and Control, Advanced Robotics and Automated Systems (ARAS) Lab, Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran
Year: 2025 · Field: Continuum Robots

Abstract: Controlling continuum robotic arms presents significant challenges due to their highly nonlinear nature and inherently uncertain and complex structure. This complexity affects the application of continuum arms in areas such as routing, maneuvering on complex paths, and other applications. This paper addresses real-time kinematic control of continuum robotic arms using screw theory to develop a controller that offers accuracy, speed, and low computational load for real-time implementation. The inherent flexibility and nonlinear nature of these arms complicate precise position control. To overcome these challenges, we use a PID controller, enhancing the robot’s position control capabilities. Experimentally validated results for the designed path demonstrate the controller’s effectiveness in improving path tracking and real-time control performance. The controller was implemented on the actual RoboArm system, achieving a 6 cm error. Powered By DYNAMIXEL.
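A discrete PID loop of the kind the abstract describes, closed on each controlled variable, can be sketched as follows. The gains and the first-order plant in the usage test are illustrative, not the RoboArm's tuning:

```python
class PID:
    """Discrete PID controller (Euler-discretized integral and derivative)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # no derivative term on the first sample (no previous error yet)
        derivative = 0.0 if self.prev_error is None else (
            (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

On a multi-segment arm, one such controller per configuration variable drives the actuator commands computed from the screw-theory kinematics.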

View Paper


NDOB-Based Control of a UAV with Delta-Arm Considering Manipulator Dynamics

Authors: Hongming Chen, Biyu Ye, Xianqi Liang, Weiliang Deng, Ximin Lyu
Institution: N/A
Year: 2025 · Field: Aerial Robotics

Abstract— Aerial Manipulators (AMs) provide a versatile platform for various applications, including 3D printing, architecture, and aerial grasping missions. However, their operational speed is often sacrificed to uphold precision. Existing control strategies for AMs often regard the manipulator as a disturbance and employ robust control methods to mitigate its influence. This research focuses on elevating the precision of the end-effector and enhancing the agility of aerial manipulator movements. We present a composite control scheme to address these challenges. First, a Nonlinear Disturbance Observer (NDOB) is utilized to compensate for internal coupling effects and external disturbances. Subsequently, manipulator dynamics are processed through a high-pass filter to facilitate agile movements. By integrating the proposed control method into a fully autonomous delta-arm-based AM system, we substantiate the controller’s efficacy through extensive real-world experiments. The outcomes illustrate that the end-effector can achieve millimeter-level accuracy. Powered By DYNAMIXEL.
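For intuition, here is a minimal one-dimensional nonlinear disturbance observer for the toy dynamics m·v̇ = u + d. The paper's NDOB operates on the full aerial-manipulator dynamics, so this is only a stand-in illustrating the estimation principle:

```python
def ndob_step(z, v, u, mass, L, dt):
    """One Euler step of a 1-D disturbance observer.

    Plant: mass * v_dot = u + d, with d an unknown disturbance.
    Observer state z; the estimate d_hat = z + L * mass * v converges
    to d at rate L (error dynamics: d_hat_dot = L * (d - d_hat)).
    """
    d_hat = z + L * mass * v
    z_next = z + (-L * (u + d_hat)) * dt
    return z_next, d_hat
```

The estimate d_hat would then be subtracted from the control input so the closed loop sees the nominal, disturbance-free dynamics.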

View Paper


Closed-Loop Control and Disturbance Mitigation of an Underwater Multi-Segment Continuum Manipulator

Authors: Kyle L. Walker, Hsing-Yu Chen, Alix J. Partridge, Lucas Cruz da Silva, Adam A. Stokes, Francesco Giorgio-Serchi
Institution: University of Edinburgh; Senai Cimatec; CREATE Lab, EPFL; National Robotarium, Heriot-Watt University
Year: 2025 · Field: Manipulators

Abstract— The use of soft and compliant manipulators in marine environments represents a promising paradigm shift for subsea inspection, with devices better suited to tasks owing to their ability to safely conform to items during contact. However, limitations driven by material characteristics often restrict the reach of such devices, with the complexity of obtaining state estimations making control non-trivial. Here, a detailed analysis of a 1 m long compliant manipulator prototype for subsea inspection tasks is presented, including its mechanical design, state estimation technique, closed-loop control strategies, and experimental performance evaluation in underwater conditions. Results indicate that both the configuration-space and task-space controllers implemented are capable of positioning the end effector at desired locations, with spatial deviations of <5% of the manipulator length and configuration angles within 5° of those desired. The manipulator was also tested under various disturbances, such as loads of up to 300 g and random point disturbances, and proved able to limit displacement and restore the desired configuration. This work is a significant step towards the implementation of compliant manipulators in real-world subsea environments, proving their potential as an alternative to classical rigid-link designs. Powered By DYNAMIXEL.

View Paper


3D Printable Gradient Lattice Design for Multi-Stiffness Robotic Fingers

Authors: Siebe J. Schouten, Tomas Steenman, Rens File, Merlijn Den Hartog, Aimee Sakes, Cosimo Della Santina, Kirsten Lussenburg, Ebrahim Shahabi
Institution: European Union’s Horizon Europe Program
Year: 2025 · Field: 3d Printing, Robotic Finger

Abstract— Human fingers achieve exceptional dexterity and adaptability by combining structures with varying stiffness levels, from soft tissues (low) to tendons and cartilage (medium) to bones (high). This paper explores the development of a robotic finger with similar multi-stiffness characteristics. Specifically, we propose using a lattice configuration, parameterized by voxel size and unit cell geometry, to optimize and achieve fine-tuned stiffness properties with high granularity. A significant advantage of this approach is the feasibility of 3D printing the designs in a single process, eliminating the need for manual assembly of elements with differing stiffness. Based on this method, we present a novel, human-like finger and a soft gripper. We integrate the latter with a rigid manipulator and demonstrate its effectiveness in pick-and-place tasks. Powered By DYNAMIXEL.

View Paper


BEHAVIOR ROBOT SUITE: Streamlining Real-World Whole-Body Manipulation for Everyday Household Activities

Authors: Yunfan Jiang, Ruohan Zhang, Josiah Wong, Chen Wang, Yanjie Ze, Hang Yin, Cem Gokmen, Shuran Song, Jiajun Wu, Li Fei-Fei
Institution: Stanford University
Year: 2025 · Field: Semi-Humanoid

Abstract— Real-world household tasks present significant challenges for mobile manipulation robots. An analysis of existing robotics benchmarks reveals that successful task performance hinges on three key whole-body control capabilities: bimanual coordination, stable and precise navigation, and extensive end-effector reachability. Achieving these capabilities requires careful hardware design, but the resulting system complexity further complicates visuomotor policy learning. To address these challenges, we introduce the BEHAVIOR ROBOT SUITE (BRS), a comprehensive framework for whole-body manipulation in diverse household tasks. Built on a bimanual, wheeled robot with a 4-DoF torso, BRS integrates a cost-effective whole-body teleoperation interface for data collection and a novel algorithm for learning whole-body visuomotor policies. We evaluate BRS on five challenging household tasks that not only emphasize the three core capabilities but also introduce additional complexities, such as long-range navigation, interaction with articulated and deformable objects, and manipulation in confined spaces. We believe that BRS’s integrated robotic embodiment, data collection interface, and learning framework mark a significant step toward enabling real-world whole-body manipulation for everyday household tasks. BRS is open-sourced at behavior-robot-suite.github.io. Powered By DYNAMIXEL.

View Paper


Modular Self-Reconfigurable Continuum Robot for General Purpose Loco-Manipulation

Authors: Yilin Cai, Haokai Xu, Yifan Wang, Desai Chen, Wojciech Matusik, Wan Shou, and Yue Chen
Institution: Georgia Institute of Technology, Inkbit Inc, Massachusetts Institute of Technology, University of Arkansas
Year: 2025 · Field: Self-Reconfigurable Robotics, Manipulation 

Abstract— Modular Self-Reconfigurable Robots offer exceptional adaptability and versatility through reconfiguration, but traditional rigid robot designs lack the compliance necessary for effective interaction with complex environments. Recent advancements in modular soft robots address this shortcoming with enhanced flexibility; however, their designs lack the capability of active self-reconfiguration and heavily rely on manual assembly. In this letter, we present a modular self-reconfigurable soft continuum robotic system featuring a continuum backbone and an omnidirectional docking mechanism. This design enables each module to independently perform loco-manipulation and self-reconfiguration. We then propose a kinetostatic model and conduct a geometrical docking range analysis to characterize the robot’s performance. The reconfiguration process and the distinct motion gait for each configuration are also developed, including rolling, crawling, and snake-like undulation. Experimental demonstrations show that both single and multiple connected modules can achieve successful loco-manipulation, adapting effectively to various environments. Powered By DYNAMIXEL.

View Paper


An Adaptive Data-Enabled Policy Optimization Approach for Autonomous Bicycle Control

Authors: Niklas Persson, Feiran Zhao, Mojtaba Kaheni, Florian Dörfler, Alessandro V. Papadopoulos
Institution: Mälardalen University, ETH Zurich
Year: 2022 · Field: Adaptive Control, Policy Optimization, Balance Control

Abstract— This paper presents a unified control framework that integrates a Feedback Linearization (FL) controller in the inner loop with an adaptive Data-Enabled Policy Optimization (DeePO) controller in the outer loop to balance an autonomous bicycle. While the FL controller stabilizes and partially linearizes the inherently unstable and nonlinear system, its performance is compromised by unmodeled dynamics and time-varying characteristics. To overcome these limitations, the DeePO controller is introduced to enhance adaptability and robustness. The initial control policy of DeePO is obtained from a finite set of offline, persistently exciting input and state data. To improve stability and compensate for system nonlinearities and disturbances, a robustness-promoting regularizer refines the initial policy, while the adaptive section of the DeePO framework is enhanced with a forgetting factor to improve adaptation to time-varying dynamics. The proposed DeePO+FL approach is evaluated through simulations and real-world experiments on an instrumented autonomous bicycle. Results demonstrate its superiority over the FL-only approach, achieving more precise tracking of the reference lean angle and lean rate. Powered By DYNAMIXEL.
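
The forgetting-factor adaptation described above can be sketched with standard recursive least-squares (RLS): old data are geometrically discounted so the estimate tracks time-varying dynamics. The actual DeePO update is more involved; the data, gains, and "true" parameters below are illustrative.

```python
# Recursive least-squares with a forgetting factor (illustrative sketch).
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One RLS update; lam < 1 discounts old data (forgetting factor)."""
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # parameter update
    P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([1.5, -0.7])            # assumed "plant" parameters
theta = np.zeros(2)
P = np.eye(2) * 100.0
for _ in range(500):
    phi = rng.standard_normal(2)              # persistently exciting regressor
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
```

With persistently exciting data the estimate converges to the true parameters; lowering `lam` trades steady-state accuracy for faster tracking of drifting dynamics.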

View Paper


Synergy-based robotic quadruped leveraging passivity for natural intelligence and behavioural diversity

Authors: Francesco Stella, Mickaël M. Achkar, Cosimo Della Santina & Josie Hughes
Institution: CREATE Lab, STI, EPFL, Lausanne, Switzerland. Department of Cognitive Robotics, Delft University of Technology, Delft, the Netherlands. Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA. Institute of Robotics and Mechatronics, German Aerospace Center, Wessling, Germany.
Year: 2025 · Field: Quadruped

Abstract
Quadrupedal animals show remarkable capabilities in traversing diverse terrains and display a range of behaviours and gait patterns. Achieving similar performance by exploiting the natural dynamics of the system is a key goal for robotics researchers. Here we show a bioinspired approach to the design of quadrupeds that seeks to exploit the body and the passive properties of the robot while maintaining active controllability of the system through minimal actuation. Utilizing an end-to-end computational design pipeline, neuromechanical couplings recorded in biological quadrupeds are translated into motor synergies, allowing minimal actuation to control the full structure via multijoint compliant mechanical couplings. Using this approach, we develop PAWS, a passive automaton with synergies. By leveraging the principles of motor synergies, the design incorporates variable stiffness, anatomical insights and self-organization to simplify control while maximizing its capabilities. The resulting synergy-based quadruped requires only four actuators and exhibits emergent, animal-like dynamical responses, including passive robustness to environmental perturbations and a wide range of actuated behaviours. The findings contribute to the development of machine physical intelligence and provide robots with more efficient, natural-looking locomotion by combining synergistic actuation, compliant body properties and embodied compensatory strategies. Powered By DYNAMIXEL.

View Paper


Autonomous Robotic Pepper Harvesting: Imitation Learning in Unstructured Agricultural Environments

Authors: Chung Hee Kim, Abhisesh Silwal, George Kantor
Institution: Carnegie Mellon University
Year: 2024 · Field: Agriculture 

Abstract— Automating tasks in outdoor agricultural fields poses significant challenges due to environmental variability, unstructured terrain, and diverse crop characteristics. We present a robotic system for autonomous pepper harvesting designed to operate in these unprotected, complex settings. Utilizing a custom handheld shear-gripper, we collected 300 demonstrations to train a visuomotor policy, enabling the system to adapt to varying field conditions and crop diversity. We achieved a success rate of 28.95% with a cycle time of 31.71 seconds, comparable to existing systems tested under more controlled conditions like greenhouses. Our system demonstrates the feasibility and effectiveness of leveraging imitation learning for automated harvesting in unstructured agricultural environments. This work aims to advance scalable, automated robotic solutions for agriculture in natural settings. Powered By DYNAMIXEL.

View Paper


Dexterous Three-Finger Gripper based on Offset Trimmed Helicoids (OTHs)

Authors: Qinghua Guan, Hung Hon Cheng, and Josie Hughes
Institution: CREATE Lab, School of Engineering STI
Year: 2025 · Field: Dexterity, Gripper

Abstract— This study presents an innovative offset-trimmed helicoids (OTH) structure, featuring a tunable deformation center that emulates the flexibility of human fingers. This design significantly reduces the actuation force needed for larger elastic deformations, particularly when dealing with harder materials like thermoplastic polyurethane (TPU). The incorporation of two helically routed tendons within the finger enables both in-plane bending and lateral out-of-plane transitions, effectively expanding its workspace and allowing for variable curvature along its length. Compliance analysis indicates that the compliance at the fingertip can be fine-tuned by adjusting the mounting placement of the fingers. This customization enhances the gripper’s adaptability to a diverse range of objects. By leveraging TPU’s substantial elastic energy storage capacity, the gripper is capable of dynamically rotating objects at high speeds, achieving approximately 60° in just 15 milliseconds. The three-finger gripper, with its high dexterity across six degrees of freedom, has demonstrated the capability to successfully perform intricate tasks. One such example is the adept spinning of a rod within the gripper’s grasp. Powered By DYNAMIXEL.

View Paper


Loopy Movements: Emergence of Rotation in a Multicellular Robot

Authors: Trevor Smith, Professor Yu Gu
Institution: West Virginia University
Year: 2024 · Field: Robotic Swarm, Emergent Behavior, Closed Loop

Abstract— Unlike most human-engineered systems, many biological systems rely on emergent behaviors from low-level interactions, enabling greater diversity and superior adaptation to complex, dynamic environments. This study explores emergent decentralized rotation in the Loopy multicellular robot, composed of homogeneous, physically linked, 1-degree-of-freedom cells. Inspired by biological systems like sunflowers, Loopy uses simple local interactions—diffusion, reaction, and active transport of simulated chemicals, called morphogens—without centralized control or knowledge of its global morphology. Through these interactions, the robot self-organizes to achieve coordinated rotational motion and forms lobes—local protrusions created by clusters of motor cells. This study investigates how these interactions drive Loopy’s rotation, the impact of its morphology, and its resilience to actuator failures. Our findings reveal two distinct behaviors:

Inner valleys between lobes rotate faster than the outer peaks, contrasting with rigid-body dynamics, and
Cells rotate in the opposite direction of the overall morphology.

The experiments show that while Loopy’s morphology does not affect its angular velocity relative to its cells, larger lobes increase cellular rotation and decrease morphology rotation relative to the environment. Even with up to one-third of its actuators disabled and significant morphological changes, Loopy maintains its rotational abilities, highlighting the potential of decentralized, bio-inspired strategies for resilient and adaptable robotic systems. Powered By DYNAMIXEL.
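
The diffusion and active-transport interactions described above can be sketched as a toy morphogen update on a ring of cells. The rates are illustrative and the reaction term is omitted for brevity, so this is not Loopy's actual model.

```python
# Toy morphogen dynamics on a ring: diffusion spreads concentration to both
# neighbors; active transport biases flow toward the clockwise neighbor.
def step(c, diff=0.2, transport=0.3):
    n = len(c)
    nxt = []
    for i in range(n):
        left, right = c[(i - 1) % n], c[(i + 1) % n]
        diffusion = diff * (left + right - 2 * c[i])
        flux = transport * (left - c[i])  # directional (active) transport
        nxt.append(c[i] + diffusion + flux)
    return nxt

cells = [0.0] * 36
cells[0] = 1.0  # seed one cell with morphogen
for _ in range(50):
    cells = step(cells)
```

Because every update is a convex combination of neighboring concentrations, the total morphogen is conserved while the peak spreads and drifts around the ring — the kind of decentralized self-organization the abstract describes, with no cell knowing the global shape.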

View Paper


CAFEs: Cable-driven Collaborative Floating End-Effectors for Agriculture Applications

Authors: Hung Hon Cheng and Josie Hughes
Institution: CREATE Lab, School of Engineering STI
Year: 2025 · Field: Agricultural Robotics, Cable-Driven Robots

Abstract— CAFEs (Collaborative Agricultural Floating End-effectors) is a new robot design and control approach for automating large-scale agricultural tasks. Built upon a cable-driven robot architecture in which modular robotic arms share the same roller-driven cable set, a fast-switching clamping mechanism allows each CAFE to clamp onto or release from the moving cables, enabling both independent and synchronized movement across the workspace. The methods developed to enable this system include the mechanical design, precise position control, and a dynamic model for the spring-mass-like system, ensuring accurate and stable movement of the robotic arms. The system’s scalability is further explored by studying the tension and sag in the cables to maintain performance as more robotic arms are deployed. Experimental and simulation results demonstrate the system’s effectiveness in tasks including pick-and-place, showing its potential to contribute to agricultural automation. Powered By DYNAMIXEL.

View Paper


Soft Vision-Based Tactile-Enabled Sixth Finger: Advancing Daily Objects Manipulation for Stroke Survivors

Authors: Basma Hasanen, Mashood M. Mohsan, Abdulaziz Y. Alkayas, Federico Renda and Irfan Hussain
Institution: Khalifa University
Year: 2022 · Field: Supernumerary Robotic Finger, Wearable Robots, Assistive Technologies, Tactile Sensing, Transformers

Abstract— The presence of post-stroke grasping deficiencies highlights the critical need for the development and implementation of advanced compensatory strategies. This paper introduces a novel system to aid chronic stroke survivors through the development of a soft, vision-based, tactile-enabled extra robotic finger. By incorporating vision-based tactile sensing, the system autonomously adjusts grip force in response to slippage detection. This synergy not only ensures mechanical stability but also enriches tactile feedback, mimicking the dynamics of human-object interactions. At the core of our approach is a transformer-based framework trained on a comprehensive tactile dataset encompassing objects with a wide range of morphological properties, including variations in shape, size, weight, texture, and hardness. Furthermore, we validated the system’s robustness in real-world applications, where it successfully manipulated various everyday objects. The promising results highlight the potential of this approach to improve the quality of life for stroke survivors. Powered By DYNAMIXEL.

View Paper


Robotic System with Tactile-Enabled High-Resolution Hyperspectral Imaging Device for Autonomous Corn Leaf Phenotyping in Controlled Environments

Authors: Xuan Li, Ziling Chen, Raghava Sai Uppuluri, Pokuang Zhou, Tianzhang Zhao, Darrell Zachary Good, Yu She, Jian Jin
Institution: Purdue University
Year: 2025 · Field: Autonomous Corn Phenotyping, Vision-based Tactile Sensing, hyperspectral imaging, Segment Anything Model, agricultural robots

Abstract
Hyperspectral imaging of individual corn leaves provides valuable data for analyzing nutrient content and diagnosing diseases. However, existing leaf-level imaging techniques face challenges such as low spatial resolution and labor-intensive processes. To address these limitations, this study developed a robotic system integrated with a high-resolution line-scanning hyperspectral imaging device to autonomously scan a corn leaf. The hyperspectral imaging device used a vision-based tactile sensor for active leaf tracking throughout the scanning process, ensuring high image quality. Additionally, the device incorporated an in-hand leaf manipulation mechanism that ensured the leaf was properly positioned on the tactile sensing area at the start of every scan. The scanning process was executed by a robotic arm equipped with an RGB-D camera and integrated with the Segment Anything Model (SAM), enabling autonomous leaf detection, localization, grasping, and scanning. The system was tested on V10-stage corn plants; the success rate was 91.4%, with an average of 4.8 seconds for leaf detection and localization and an average leaf scanning time of 38.3 seconds. Powered By DYNAMIXEL.

View Paper


Variable-Friction In-Hand Manipulation for Arbitrary Objects via Diffusion-Based Imitation Learning

Authors: Qiyang Yan, Zihan Ding, Xin Zhou, and Adam J. Spiers
Institution: Manipulation and Touch Lab, Department of Electrical and Electronic Engineering, Imperial College London
Year: 2025 · Field: Manipulation, Imitation Learning

Abstract— Dexterous in-hand manipulation (IHM) for arbitrary objects is challenging due to the rich and subtle contact process. Variable-friction manipulation is an alternative approach to dexterity, previously demonstrating robust and versatile 2D IHM capabilities with only two single-joint fingers. However, the hard-coded manipulation methods for variable-friction hands are restricted to regular polygon objects and limited target poses, and require the policy to be tailored for each object. This paper proposes an end-to-end learning-based manipulation method to achieve arbitrary object manipulation for any target pose on real hardware, with minimal engineering effort and data collection. The method features a diffusion-policy-based imitation learning approach with co-training from simulation and a small amount of real-world data. With the proposed framework, arbitrary objects, including polygons and non-polygons, can be precisely manipulated to reach arbitrary goal poses within 2 hours of training on an A100 GPU and only 1 hour of real-world data collection. The precision is higher than that of previous customized object-specific policies, achieving an average success rate of 71.3% with an average pose error of 2.676 mm and 1.902°. Powered By DYNAMIXEL.

View Paper


Global-Local Interface for On-Demand Teleoperation

Authors: Jianshu Zhou, Boyuan Liang, Junda Huang, Ian Zhang, Pieter Abbeel, Masayoshi Tomizuka
Institution: University of California, Berkeley
Year: 2025 · Field: Teleoperation, Imitation Learning 

Abstract: Teleoperation is a critical human-robot interface method that holds significant potential for enabling robotic applications in industrial and unstructured environments. Existing teleoperation methods have distinct strengths and limitations in flexibility, workspace range, and precision. To fuse these advantages, we introduce the Global-Local (G-L) Teleoperation Interface. This interface decouples robotic teleoperation into global behavior, which ensures the robot’s motion range and intuitiveness, and local behavior, which enhances the human operator’s dexterity and capability for performing fine tasks. The G-L interface enables efficient teleoperation not only for conventional tasks like pick-and-place, but also for challenging fine manipulation and large-scale movements. Based on the G-L interface, we constructed single-arm and dual-arm teleoperation systems with different remote control devices, then demonstrated tasks requiring large motion range, precise manipulation, or dexterous end-effector control. Extensive experiments validated the user-friendliness, accuracy, and generalizability of the proposed interface. Powered By DYNAMIXEL.
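
The global-local decoupling can be sketched as a coarse global command combined with a down-scaled local offset for fine work. The scaling factor and poses below are illustrative assumptions, not the paper's actual mapping.

```python
# Toy global-local command composition: the commanded target is a coarse
# global position plus a local offset attenuated for precision.
def gl_target(global_xyz, local_xyz, local_scale=0.1):
    return [g + local_scale * l for g, l in zip(global_xyz, local_xyz)]

# Coarse reach to (0.4, 0.0, 0.3) m, refined by a small local adjustment.
target = gl_target([0.4, 0.0, 0.3], [0.05, -0.02, 0.01])
```

Attenuating the local channel means large operator motions map to millimeter-scale corrections, which is one common way to reconcile a wide workspace with fine precision in a single interface.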

View Paper


ExoKit: A Toolkit for Rapid Prototyping of Interactions for Arm-based Exoskeletons

Authors: Marie Muehlhaus, Alexander Liggesmeyer, Jürgen Steimle
Institution: Saarland University, Germany
Year: 2025 · Field: Biomechanics / Exoskeletons

Abstract - Exoskeletons open up a unique interaction space that seamlessly integrates users’ body movements with robotic actuation. Despite its potential, human-exoskeleton interaction remains an underexplored area in HCI, largely due to the lack of accessible prototyping tools that enable designers to easily develop exoskeleton designs and customized interactive behaviors. We present ExoKit, a do-it-yourself toolkit for rapid prototyping of low-fidelity, functional exoskeletons targeted at novice roboticists. ExoKit includes modular hardware components for sensing and actuating shoulder and elbow joints, which are easy to fabricate and (re)configure for customized functionality and wearability. To simplify the programming of interactive behaviors, we propose functional abstractions that encapsulate high-level human-exoskeleton interactions. These can be readily accessed through ExoKit’s command-line or graphical user interface, a Processing library, or microcontroller firmware, each targeted at different experience levels. Findings from implemented application cases and two usage studies demonstrate the versatility and accessibility of ExoKit for early-stage interaction design. Powered By DYNAMIXEL.

View Paper


Embodied design for enhanced flipper-based locomotion in complex terrains

Authors: Nnamdi C. Chikere, John Simon McElroy & Yasemin Ozkan-Aydin
Institution: University of Notre Dame
Year: 2025 · Field: Inspection, Aquatics

Abstract: Robots are becoming increasingly essential for traversing complex environments such as disaster areas, extraterrestrial terrains, and marine environments. Yet, their potential is often limited by mobility and adaptability constraints. In nature, various animals have evolved finely tuned designs and anatomical features that enable efficient locomotion in diverse environments. Sea turtles, for instance, possess specialized flippers that facilitate both long-distance underwater travel and adept maneuvers across a range of coastal terrains. Building on the principles of embodied intelligence and drawing inspiration from sea turtle hatchlings, this paper examines the critical interplay between a robot’s physical form and its environmental interactions, focusing on how morphological traits and locomotive behaviors affect terrestrial navigation. We present a bioinspired robotic system and study the impacts of flipper/body morphology and gait patterns on its terrestrial mobility across diverse terrains ranging from sand to rocks. Evaluating key performance metrics such as speed and cost of transport, our experimental results highlight adaptive design as crucial for multi-terrain robotic mobility, achieving not only speed and efficiency but also the versatility needed to tackle the varied and complex terrains encountered in real-world applications. Powered By DYNAMIXEL.
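
The cost-of-transport metric mentioned above has a standard dimensionless form, CoT = P / (m g v): power consumed per unit weight per unit speed. The example numbers below are illustrative, not measurements from the paper.

```python
# Dimensionless cost of transport: lower is more efficient locomotion.
def cost_of_transport(power_w, mass_kg, speed_mps, g=9.81):
    return power_w / (mass_kg * g * speed_mps)

# e.g. a hypothetical 2 kg robot drawing 10 W while crawling at 0.05 m/s:
cot = cost_of_transport(10.0, 2.0, 0.05)
```

Because CoT normalizes by both weight and speed, it lets flipper/body morphologies and gaits be compared fairly across terrains and robot sizes.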

View Paper


FACTR: Force-Attending Curriculum Training for Contact-Rich Policy Learning

Authors: Jason Jingzhou Liu, Yulong Li, Kenneth Shaw, Tony Tao, Ruslan Salakhutdinov, Deepak Pathak
Institution: Carnegie Mellon University
Year: 2025 · Field: Curriculum Training

Abstract: Many contact-rich tasks humans perform, such as box pickup or rolling dough, rely on force feedback for reliable execution. However, this force information, which is readily available in most robot arms, is not commonly used in teleoperation and policy learning. Consequently, robot behavior is often limited to quasi-static kinematic tasks that do not require intricate force-feedback. In this paper, we first present a low-cost, intuitive, bilateral teleoperation setup that relays external forces of the follower arm back to the teacher arm, facilitating data collection for complex, contact-rich tasks. We then introduce FACTR, a policy learning method that employs a curriculum which corrupts the visual input with decreasing intensity throughout training. The curriculum prevents our transformer-based policy from over-fitting to the visual input and guides the policy to properly attend to the force modality. We demonstrate that by fully utilizing the force information, our method significantly improves generalization to unseen objects by 43% compared to baseline approaches without a curriculum. Powered By DYNAMIXEL.
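
The visual-corruption curriculum can be sketched as additive noise whose scale anneals over training, so early policy updates must rely on the force modality. The linear schedule and Gaussian noise model below are assumptions for illustration, not FACTR's exact corruption.

```python
# Curriculum corruption: noise scale decays from `initial` to zero over training.
import numpy as np

def corruption_scale(step, total_steps, initial=1.0):
    """Linearly anneal the corruption intensity to zero."""
    return initial * max(0.0, 1.0 - step / total_steps)

def corrupt(image, step, total_steps, rng):
    sigma = corruption_scale(step, total_steps)
    return image + sigma * rng.standard_normal(image.shape)

rng = np.random.default_rng(0)
img = np.zeros((4, 4))
early = corrupt(img, step=0, total_steps=100, rng=rng)    # heavily corrupted
late = corrupt(img, step=100, total_steps=100, rng=rng)   # clean input
```

Early in training the visual channel is nearly uninformative, which prevents the policy from over-fitting to vision; by the end the images are clean and both modalities are available.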

View Paper


ALPHA-α and Bi-ACT Are All You Need: Importance of Position and Force Information/Control for Imitation Learning of Unimanual and Bimanual Robotic Manipulation with Low-Cost System

Authors: Masato Kobayashi, Thanpimon Buamanee, Takumi Kobayashi
Institution: Osaka University
Year: 2025 · Field: Imitation Learning, Bimanual Manipulation

Abstract— Autonomous manipulation in everyday tasks requires flexible action generation to handle complex, diverse real-world environments, such as objects with varying hardness and softness. Imitation Learning (IL) enables robots to learn complex tasks from expert demonstrations. However, many existing methods rely on position/unilateral control, leaving challenges in tasks that require force information/control, like carefully grasping fragile or varying-hardness objects. As the need for diverse controls increases, there is demand for low-cost bimanual robots that consider various motor inputs. To address these challenges, we introduce Bilateral Control-Based Imitation Learning via Action Chunking with Transformers (Bi-ACT) and “A Low-cost Physical Hardware Considering Diverse Motor Control Modes for Research in Everyday Bimanual Robotic Manipulation” (ALPHA-α). Bi-ACT leverages bilateral control to utilize both position and force information, enhancing the robot’s adaptability to object characteristics such as hardness, shape, and weight. The concept of ALPHA-α is affordability, ease of use, repairability, ease of assembly, and diverse control modes (position, velocity, torque), allowing researchers and developers to freely build control systems using ALPHA-α. In our experiments, we conducted a detailed analysis of Bi-ACT in unimanual manipulation tasks, confirming its superior performance and adaptability compared to Bi-ACT without force control. Based on these results, we applied Bi-ACT to bimanual manipulation tasks using ALPHA-α. Experimental results demonstrated high success rates in coordinated bimanual operations across multiple tasks, verifying the effectiveness of our approach in complex real-world scenarios. The effectiveness of Bi-ACT and ALPHA-α can be seen through comprehensive real-world experiments. Powered By DYNAMIXEL.

View Paper


ToddlerBot: Open-Source ML-Compatible Humanoid Platform for Loco-Manipulation (Stanford University)

Authors: Haochen Shi, Weizhuo Wang, Shuran Song, C. Karen Liu
Institution: Stanford University
Year: 2025 · Field: Humanoids

Abstract—Learning-based robotics research driven by data demands a new approach to robot hardware design—one that serves as both a platform for policy execution and a tool for embodied data collection to train policies. We introduce ToddlerBot, a low-cost, open-source humanoid robot platform designed for scalable policy learning and research in robotics and AI. ToddlerBot enables seamless acquisition of high-quality simulation and real-world data. The plug-and-play zero-point calibration and transferable motor system identification ensure a high-fidelity digital twin, enabling zero-shot policy transfer from simulation to the real-world. A user-friendly teleoperation interface facilitates streamlined real-world data collection for learning motor skills from human demonstrations.

Utilizing its data collection ability and anthropomorphic design, ToddlerBot is an ideal platform to perform whole-body loco-manipulation. Additionally, ToddlerBot’s compact size (0.56 m, 3.4 kg) ensures safe operation in real-world environments. Reproducibility is achieved with an entirely 3D-printed, open-source design and commercially available components, keeping the total cost under 6000 USD. Comprehensive documentation allows assembly and maintenance with basic technical expertise, as validated by a successful independent replication of the system. We demonstrate ToddlerBot’s capabilities through arm span, payload, endurance tests, loco-manipulation tasks, and a collaborative long-horizon scenario where two robots tidy a toy session together. By advancing ML-compatibility, capability, and reproducibility, ToddlerBot provides a robust platform for scalable learning and dynamic policy execution in robotics research. Powered By DYNAMIXEL.

View Paper


APPLE - ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot

Authors: Yuhan Yu, Peide Huang, Mouli Sivapurapu, and Jian Zhang
Institution: Apple
Year: 2025 · Field: Non-anthropomorphic Robot

Abstract: Nonverbal behaviors such as posture, gestures, and gaze are essential for conveying internal states, both consciously and unconsciously, in human interaction. For robots to interact more naturally with humans, robot movement design should likewise integrate expressive qualities—such as intention, attention, and emotions—alongside traditional functional considerations like task fulfillment, spatial constraints, and time efficiency. In this paper, we present the design and prototyping of a lamp-like robot that explores the interplay between functional and expressive objectives in movement design. Using a research-through-design methodology, we document the hardware design process, define expressive movement primitives, and outline a set of interaction scenario storyboards. We propose a framework that incorporates both functional and expressive utilities during movement generation, and implement the robot behavior sequences in different function- and social-oriented tasks. Through a user study comparing expression-driven versus function-driven movements across six task scenarios, our findings indicate that expression-driven movements significantly enhance user engagement and perceived robot qualities. This effect is especially pronounced in social-oriented tasks. Powered By DYNAMIXEL.

View Paper


VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters

Authors: Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, Moritz Bächer
Institution: ETH Zurich, Switzerland & Disney Research, Switzerland
Year: 2024 · Field: Kinematics, Motion Mapping

Abstract:

Recent progress in physics-based character control has made it possible to learn policies from unstructured motion data. However, it remains challenging to train a single control policy that works with diverse and unseen motions, and can be deployed to real-world physical robots. In this paper, we propose a two-stage technique that enables the control of a character with a full-body kinematic motion reference, with a focus on imitation accuracy. In a first stage, we extract a latent space encoding by training a variational autoencoder, taking short windows of motion from unstructured data as input. We then use the embedding from the time-varying latent code to train a conditional policy in a second stage, providing a mapping from kinematic input to dynamics-aware output. By keeping the two stages separate, we benefit from self-supervised methods to get better latent codes and explicit imitation rewards to avoid mode collapse. We demonstrate the efficiency and robustness of our method in simulation, with unseen user-specified motions, and on a bipedal robot, where we bring dynamic motions to the real world. Powered By DYNAMIXEL.
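The first stage — slicing unstructured motion data into short windows and encoding each window to a latent code with the reparameterization trick — can be sketched in a few lines. The window size, DoF count, and the linear "encoder" below are illustrative placeholders for the paper's learned variational autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Slice an unstructured motion stream (T frames x D DoFs) into short,
# overlapping windows: the input representation for the VAE stage.
def motion_windows(motion, win=8, stride=4):
    T = motion.shape[0]
    return np.stack([motion[t:t + win] for t in range(0, T - win + 1, stride)])

# Toy linear "encoder": project a flattened window to (mu, logvar), then
# sample a latent code with the reparameterization trick.
def encode(window, W_mu, W_lv):
    x = window.ravel()
    mu, logvar = W_mu @ x, W_lv @ x
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

motion = rng.standard_normal((64, 12))        # 64 frames, 12 DoFs (toy data)
wins = motion_windows(motion)                 # -> (15, 8, 12)
W_mu = rng.standard_normal((16, 8 * 12)) * 0.1
W_lv = rng.standard_normal((16, 8 * 12)) * 0.1
z = encode(wins[0], W_mu, W_lv)               # 16-dim latent code
```

In the second stage, a time-varying sequence of such codes `z` would condition a control policy, mapping the kinematic reference to dynamics-aware actuation.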

View Paper


Design and Control of a Bipedal Robotic Character

Authors: Ruben Grandia, Espen Knoop, Michael A. Hopkins, Georg Wiedebach, Jared Bishop, Steven Pickles, David Müller, and Moritz Bächer
Institution: Disney Research, Switzerland; Disney Research, USA; Walt Disney Imagineering R&D, USA
Year: 2024 · Field: Bipedal Robot, Duck Droid

Abstract

Legged robots have achieved impressive feats in dynamic locomotion in challenging unstructured terrain. However, in entertainment applications, the design and control of these robots face additional challenges in appealing to human audiences. This work aims to unify expressive, artist-directed motions and robust dynamic mobility for legged robots. To this end, we introduce a new bipedal robot, designed with a focus on character-driven mechanical features. We present a reinforcement learning-based control architecture to robustly execute artistic motions conditioned on command signals. During runtime, these command signals are generated by an animation engine which composes and blends between multiple animation sources. Finally, an intuitive operator interface enables real-time show performances with the robot. The complete system results in a believable robotic character, and paves the way for enhanced human-robot engagement in various contexts, in entertainment robotics and beyond. Powered By DYNAMIXEL.

View Paper


ALOHA 2: An Enhanced Low-Cost Hardware for Bimanual Teleoperation

Authors: Jorge Aldaco, Travis Armstrong, Robert Baruch, Jeff Bingham, Sanky Chan, Kenneth Draper, Debidatta Dwibedi, Chelsea Finn, Pete Florence, Spencer Goodrich, Wayne Gramlich, Torr Hage, Alexander Herzog, Jonathan Hoech, Thinh Nguyen, Ian Storz, Baruch Tabanpour, Leila Takayama, Jonathan Tompson, Ayzaan Wahid, Ted Wahrburg, Sichun Xu, Sergey Yaroshenko, Kevin Zakka, Tony Zhao
Institution: Google DeepMind, Stanford University, Hoku Labs
Year: 2024 · Field: Teleoperation, Bimanual Robot

Abstract
Diverse demonstration datasets have powered significant advances in robot learning, but the dexterity and scale of such data can be limited by the hardware cost, the hardware robustness, and the ease of teleoperation. We introduce ALOHA 2, an enhanced version of ALOHA that has greater performance, ergonomics, and robustness compared to the original design. To accelerate research in large-scale bimanual manipulation, we open source all hardware designs of ALOHA 2 with a detailed tutorial, together with a MuJoCo model of ALOHA 2 with system identification. Powered By DYNAMIXEL.

View Paper


Stanford University: Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation

Authors: Zipeng Fu, Tony Z. Zhao, Chelsea Finn
Institution: Stanford University
Year: 2024 · Field: Bimanual Robot, Teleoperation

Abstract: Imitation learning from human demonstrations has shown impressive performance in robotics. However, most results focus on table-top manipulation, lacking the mobility and dexterity necessary for generally useful tasks. In this work, we develop a system for imitating mobile manipulation tasks that are bimanual and require whole-body control. We first present Mobile ALOHA, a low-cost and whole-body teleoperation system for data collection. It augments the ALOHA system [104] with a mobile base, and a whole-body teleoperation interface. Using data collected with Mobile ALOHA, we then perform supervised behavior cloning and find that co-training with existing static ALOHA datasets boosts performance on mobile manipulation tasks. With 50 demonstrations for each task, co-training can increase success rates by up to 90%, allowing Mobile ALOHA to autonomously complete complex mobile manipulation tasks such as sautéing and serving a piece of shrimp, opening a two-door wall cabinet to store heavy cooking pots, calling and entering an elevator, and lightly rinsing a used pan using a kitchen faucet. Powered By DYNAMIXEL.

View Paper


LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning

Authors: Kenneth Shaw, Ananye Agrawal, Deepak Pathak
Institution: Carnegie Mellon University
Year: 2023 · Field: Manipulation, Gripper

Abstract

Dexterous manipulation has been a long-standing challenge in robotics. While machine learning techniques have shown some promise, results have largely been limited to simulation. This can be mostly attributed to the lack of suitable hardware. In this paper, we present LEAP Hand, a low-cost dexterous and anthropomorphic hand for machine learning research. In contrast to previous hands, LEAP Hand has a novel kinematic structure that allows maximal dexterity regardless of finger pose. LEAP Hand is low-cost and can be assembled in 4 hours at a cost of 2000 USD from readily available parts. It is capable of consistently exerting large torques over long durations of time. We show that LEAP Hand can be used to perform several manipulation tasks in the real world—from visual teleoperation to learning from passive video data and sim2real. LEAP Hand significantly outperforms its closest competitor Allegro Hand in all our experiments while being 1/8th of the cost. Powered By DYNAMIXEL.

View Paper


BridgeData V2: A Dataset for Robot Learning at Scale

Authors: Homer Walke, Kevin Black, Abraham Lee, Moo Jin Kim, Max Du, Chongyi Zheng, Tony Zhao, Philippe Hansen-Estruch, Quan Vuong, Andre He, Vivek Myers, Kuan Fang, Chelsea Finn, Sergey Levine
Institution: UC Berkeley, Stanford, Google DeepMind, CMU
Year: 2024 · Field: Data, Robot Learning

BridgeData V2 is a diverse dataset of robotic manipulation behaviors designed to facilitate research in scalable robot learning. It includes over 60,000 trajectories, consisting of teleoperated demonstrations and scripted pick-and-place policy rollouts. The dataset covers a wide range of tasks in various environments, with each trajectory labeled with a corresponding natural language instruction. The dataset is compatible with open-vocabulary, multi-task learning methods conditioned on goal images or natural language instructions. Powered By DYNAMIXEL.

View Paper


Development of an inexpensive 3D clinostat and comparison with other microgravity simulators using Mycobacterium marinum

Authors: Joseph L. Clary, Creighton S. France, Kara Lind, Runhua Shi, J. Steven Alexander, Jeffrey T. Richards, Rona S. Scott, Jian Wang, Xiao-Hong Lu, and Lynn Harrison
Institution: Louisiana State University Health Sciences Center, Shreveport, LA; Feist-Weiller Cancer Center, LSU Health Sciences Center, Shreveport, LA; NASA John F. Kennedy Space Center, Merritt Island, FL; LASSO Contract, Amentum, Inc., Germantown, MD
Year: 2022 · Field: Simulation

A joint group of researchers developed an inexpensive 3D clinostat that can simulate microgravity for biological experiments. Computer modeling was used to predict the combination of inner and outer frame velocities expected to produce the best microgravity simulation. The 3D clinostat was compared to commercially available microgravity simulators and produced results similar to the RPM 2.0 simulator in biofilm experiments and in bacterial transcriptome changes. The study validates the use of the inexpensive 3D clinostat and highlights the importance of testing operating conditions with biological experiments. Powered By DYNAMIXEL.
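The principle behind a 3D clinostat — two nested frames rotating at different rates so that the gravity vector, as seen by the sample, averages toward zero over time — can be checked numerically. The axis conventions and frame rates below are illustrative, not the published design.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Magnitude of the time-averaged gravity vector in the sample frame, for an
# outer frame spinning about the lab x-axis and an inner frame about its
# own y-axis (unit gravity, angular rates in rad/s).
def mean_gravity(w_outer, w_inner, duration=2000.0, dt=0.1):
    g_lab = np.array([0.0, 0.0, -1.0])
    ts = np.arange(0.0, duration, dt)
    g_body = [(rot_x(w_outer * t) @ rot_y(w_inner * t)).T @ g_lab for t in ts]
    return np.linalg.norm(np.mean(g_body, axis=0))

# Incommensurate frame rates average gravity toward zero; equal rates leave
# a large residual, which is why the velocity combination matters.
good = mean_gravity(1.0, 0.618)   # near 0
bad = mean_gravity(1.0, 1.0)      # near 0.5 g
```

This mirrors the study's point that the frame-velocity combination must be chosen (and then validated biologically), since poor combinations leave a substantial residual gravity bias.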

View Paper


Google DeepMind - Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning

Authors: Tuomas Haarnoja, Ben Moran, Guy Lever, Sandy H. Huang, Dhruva Tirumala, Markus Wulfmeier, Jan Humplik, Saran Tunyasuvunakool, Noah Y. Siegel, Roland Hafner, Michael Bloesch, Kristian Hartikainen, Arunkumar Byravan, Leonard Hasenclever, Yuval Tassa, Fereshteh Sadeghi, Nathan Batchelor, Federico Casarini, Stefano Saliceti, Charles Game, Neil Sreendra, Kushal Patel, Marlon Gwira, Andrea Huber, Nicole Hurley, Francesco Nori, Raia Hadsell, Nicolas Heess
Institution: Google DeepMind
Year: 2023 · Field: Reinforcement Learning, Bipedal Robot, OP3

We investigate whether Deep Reinforcement Learning (Deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot that can be composed into complex behavioral strategies in dynamic environments. We used Deep RL to train a humanoid robot with 20 actuated joints to play a simplified one-versus-one (1v1) soccer game. We first trained individual skills in isolation and then composed those skills end-to-end in a self-play setting. The resulting policy exhibits robust and dynamic movement skills such as rapid fall recovery, walking, turning, kicking and more; and transitions between them in a smooth, stable, and efficient manner—well beyond what is intuitively expected from the robot. The agents also developed a basic strategic understanding of the game, and learned, for instance, to anticipate ball movements and to block opponent shots. The full range of behaviors emerged from a small set of simple rewards. Our agents were trained in simulation and transferred to real robots zero-shot. We found that a combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training in simulation enabled good-quality transfer, despite significant unmodeled effects and variations across robot instances. Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training led the robots to learn safe and effective movements while still performing in a dynamic and agile way. Indeed, even though the agents were optimized for scoring, in experiments they walked 156% faster, took 63% less time to get up, and kicked 24% faster than a scripted baseline, while efficiently combining the skills to achieve the longer term objectives. Powered By DYNAMIXEL.

View Paper


brachIOplexus: Myoelectric Training Software for Clinical and Research Applications

Authors: 
Institution: BLINC Lab at the University of Alberta
Year: N/A · Field: myoelectric; training; robotic arm; powered prosthesis; electromyography; EMG; person with amputation; rehabilitation

Various control strategies are now available for myoelectric devices. The selection of the most appropriate strategy for an individual patient and training to improve their skills are important components to optimize user function with their myoelectric prosthesis. Existing myoelectric training software is often limited by not providing enough features to allow prosthesis users to try the multiple options for prosthetic hands, wrists, and elbows and the various control strategies used to modulate or switch between them. To address this gap, we developed an open-source training software for clinical and research applications called brachI/Oplexus that aims to provide a wider breadth of options and also be easy to use by non-technical users. The software supports several input devices (EMG systems), output devices (robotic arms), and methods for mapping between them (conventional and machine learning controllers). A comparison was performed between brachI/Oplexus and two commercial myoelectric software programs. Results from the testing showed that brachI/Oplexus had similar or slightly improved EMG signal separation and delay when compared to the commercial software. Several research labs and hospitals are already using this software, and by releasing it open source, we hope to lower the barrier of entry and encourage other clinicians and researchers to explore this area. Powered By DYNAMIXEL.

View Paper


Development of the Bento Arm: An Improved Robotic Arm for Myoelectric Training and Research

Authors: 
Institution: BLINC Lab at the University of Alberta
Year: N/A · Field: robotic arm; powered prosthesis; myoelectric; electromyography; EMG; amputee; training; rehabilitation

Abstract
The Myoelectric Training Tool (MTT) was developed to assess and train upper-limb prosthesis users in how to use their electromyography (EMG) signals prior to being fit with their myoelectric prostheses. The original MTT included a desk mounted off-the-shelf robotic arm, electromyography (EMG) acquisition system, EMG controller, and graphical user interface. Previously, the MTT was used in several studies related to investigating clinical training protocols, novel machine learning controllers, and sensory feedback systems. During these studies certain limitations were discovered in the MTT’s off-the-shelf robotic arm. To overcome these issues, an improved robotic arm, the Bento Arm, was designed specifically for myoelectric training and research applications. The Bento Arm includes 5 degrees of freedom similar to those available in commercial prostheses and was designed to be 1:1 scale with anatomical proportions. The MX-series of Dynamixel actuators were selected to allow for continuous payloads of up to 0.3 kg and include integrated position and velocity joint feedback and control. Anthropomorphic arm shells were designed using 3D scanning technology to improve the aesthetics of the arm and allow it to be more easily visualized as an arm or prosthesis. The arm can be desk mounted or interfaced to a transhumeral socket and worn by persons with limb difference. Future work will focus on machining the final parts out of aluminium, creating an array of custom grippers to go along with the arm, designing a wearable controller, and improving the software interfaces. Powered By DYNAMIXEL.
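As a small illustration of the integrated position feedback mentioned above: MX-series DYNAMIXELs report joint position as a 12-bit register value, 0–4095 ticks spanning one revolution (about 0.088° per tick). A minimal conversion helper follows; note that register addresses, valid ranges, and sign conventions vary by model, firmware, and operating mode, so treat this as a sketch.

```python
# MX-series position register: 0-4095 ticks over 360 degrees in joint mode.
TICKS_PER_REV = 4096

def ticks_to_degrees(ticks):
    """Convert a raw MX position reading to degrees."""
    if not 0 <= ticks <= 4095:
        raise ValueError("MX position register holds 0-4095")
    return ticks * 360.0 / TICKS_PER_REV

def degrees_to_ticks(deg):
    """Convert a target angle in degrees to the nearest register value."""
    return int(round(deg % 360.0 * TICKS_PER_REV / 360.0)) % TICKS_PER_REV

center = ticks_to_degrees(2048)   # 180.0 degrees: horn at mid-range
```

A training tool like the Bento Arm software would apply conversions like these when mapping EMG-derived commands onto joint setpoints and when logging joint feedback.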

View Paper


IEEE Transactions on Robotics: Lie Group Formulation and Sensitivity Analysis for Shape Sensing of Variable Curvature Continuum Robots With General String Encoder Routing

Authors: Garrison Johnston, Elan Z. Ahronovich , and Nabil Simaan
Institution: N/A
Year: N/A · Field: Continuum robots, human–robot collaboration, Lie group methods, shape sensing, soft robots

Abstract

This article considers a combination of actuation tendons and measurement strings to achieve accurate shape sensing and direct kinematics of continuum robots. Assuming general string routing, a methodical Lie group formulation for the shape sensing of these robots is presented. The shape kinematics is expressed using arc-length-dependent curvature distributions parameterized by modal functions, and the Magnus expansion for Lie group integration is used to express the shape as a product of exponentials. The tendon and string length kinematic constraints are solved for the modal coefficients, and the configuration-space and body Jacobians are derived. The noise amplification index for the shape reconstruction problem is defined and used for optimizing the string/tendon routing paths, and a planar simulation study shows the minimal number of strings/tendons needed for accurate shape reconstruction. A torsionally stiff continuum segment is used for experimental evaluation, demonstrating a mean (maximal) end-effector absolute position error of less than 2% (5%) of total length. Finally, a simulation study of a torsionally compliant segment demonstrates the approach for general deflections and string routings. We believe that the methods of this article can benefit the design process, sensing, and control of continuum and soft robots. Powered By DYNAMIXEL.


The Dragonfly Spectral Line Mapper: Design and First Light

Authors: Seery Chen & Team
Institution: University of Toronto
Year: 2022 · Field: Astronomy, Instrumentation

ABSTRACT

The Dragonfly Spectral Line Mapper (DSLM) is the latest evolution of the Dragonfly Telephoto Array, which turns it into the world’s most powerful wide-field spectral line imager. The DSLM will be the equivalent of a 1.6m aperture f/0.26 refractor with a built-in Integral Field Spectrometer, covering a five square degree field of view. The new telescope is designed to carry out ultra-narrow bandpass imaging of the low surface brightness universe with exquisite control over systematic errors, including real-time calibration of atmospheric variations in airglow. The key to Dragonfly’s transformation is the “Filter-Tilter”, a mechanical assembly which holds ultra-narrow bandpass interference filters in front of each lens in the array and tilts them to smoothly shift their central wavelength. Here we describe our development process based on rapid prototyping, iterative design, and mass production. This process has resulted in numerous improvements to the design of the DSLM from the initial pathfinder instrument, including changes to narrower bandpass filters and the addition of a suite of calibration filters for continuum light subtraction and sky line monitoring. Improvements have also been made to the electronics and hardware of the array, which improve tilting accuracy, rigidity and light baffling. Here we present laboratory and on-sky measurements from the deployment of the first bank of lenses in May 2022, and a progress report on the completion of the full array in early 2023. Powered By DYNAMIXEL.
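The Filter-Tilter exploits a standard property of interference filters: tilting a filter by angle θ shifts its central wavelength approximately as λ(θ) = λ₀·√(1 − (sin θ / n_eff)²), where n_eff is the filter's effective refractive index. A quick sketch (the effective index and filter values below are illustrative, not DSLM specifications):

```python
import numpy as np

# Tilt-tuned central wavelength of an interference filter (standard
# thin-film approximation; n_eff is the filter's effective index).
def tilted_wavelength(lam0_nm, theta_deg, n_eff=2.0):
    s = np.sin(np.radians(theta_deg)) / n_eff
    return lam0_nm * np.sqrt(1.0 - s**2)

# Tilting always shifts the passband blueward, e.g. an H-alpha filter
# near 660 nm swept over 0-20 degrees of tilt.
shift_nm = tilted_wavelength(660.0, 20.0) - 660.0   # negative (blueshift)
```

Smoothly varying the tilt therefore scans the ultra-narrow passband across a small wavelength range, which is what lets the array track airglow lines and redshifted emission.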


Aerial Grasping and the Velocity Sufficiency Region

Authors: Tony G. Chen, Kenneth A. W. Hoffmann, Jun En Low, Keiko Nagami, David Lentink, and Mark R. Cutkosky
Institution: Stanford University; University of Groningen
Year: 2022 · Field: Aerial, Manipulation

A largely untapped potential for aerial robots is to capture airborne targets in flight. We present an approach in which a simple dynamic model of a quadrotor/target interaction leads to the design of a gripper and associated velocity sufficiency region with a high probability of capture. A model of the interaction dynamics maps the gripper force sufficiency region to an envelope of relative velocities for which capture should be possible without exceeding the capabilities of the quadrotor controller. The approach motivates a gripper design that emphasizes compliance and is passively triggered for a fast response. The resulting gripper is lightweight (23 g) and closes within 12 ms. With this gripper, we demonstrate in-flight experiments that a 550 g drone can capture an 85 g target at various relative velocities between 1 m/s and 2.7 m/s. Powered By DYNAMIXEL.
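A back-of-the-envelope version of the velocity-sufficiency idea can be written as a work-energy bound: if the gripper can apply at most F_max over an engagement stroke d, that caps the relative kinetic energy it can absorb from a target of mass m. This toy model is not the paper's interaction dynamics, and all numbers are illustrative.

```python
import math

# Crude capture-velocity bound from a work-energy argument:
# 0.5 * m * v^2 <= F_max * d  =>  v_max = sqrt(2 * F_max * d / m).
def max_capture_velocity(f_max_N, stroke_m, target_mass_kg):
    return math.sqrt(2.0 * f_max_N * stroke_m / target_mass_kg)

# Illustrative values: a few newtons of gripper force, a few centimeters
# of stroke, and the 85 g target mass quoted in the abstract.
v_max = max_capture_velocity(f_max_N=5.0, stroke_m=0.06, target_mass_kg=0.085)
```

The actual paper maps a gripper force sufficiency region through a quadrotor/target dynamic model, which also accounts for the controller's ability to recover after impact, so the real envelope is a region over relative velocity, not a single scalar bound.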

View Paper


A comparison study on the dynamic control of OpenMANIPULATOR-X by PD with gravity compensation tuned by oscillation damping based on the phase-trajectory-length concept

Authors: Amirhossein Dadbin, Professor Ahmad Kalhor, and Professor Mehdi Tale Masouleh
Institution: University of Tehran
Year: 2022 · Field: Dynamics, Kinematics, OpenMANIPULATOR-X

Abstract

In this paper, the dynamic control of a 4-DOF serial manipulator, the so-called OpenMANIPULATOR-X, is investigated by means of different performance indices, including, among others, oscillation damping. To do so, the kinematics and dynamics equations are obtained from a systematic approach, and both models are verified by simulating the robot under study in Simscape. The applied controller, a PD controller with gravity compensation, is shown to be asymptotically stable. Thereafter, the coefficients of the PD controller are first optimized by means of different performance indices, namely IAE, ISE, ITAE, and ITSE; then a new criterion called oscillation damping, based on the optimization of a cost function defined on the phase-trajectory-length concept, is used to evaluate the performance of the implemented controller. The obtained results reveal that the step response of the oscillation-damping-tuned controller eliminates overshoot, but is slower than those tuned by the other performance indices. Powered By DYNAMIXEL.
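A PD-with-gravity-compensation loop and the four classical tuning indices (IAE, ISE, ITAE, ITSE) can be sketched for a single link. The gains, link parameters, and the explicit Euler integrator below are illustrative, not the paper's 4-DOF model.

```python
import numpy as np

# Single link (point mass m at radius l) under gravity, driven by
# tau = Kp*e - Kd*qdot + m*g*l*cos(q): PD plus gravity compensation.
def simulate(q_des, kp=40.0, kd=8.0, m=1.0, l=0.3, g=9.81, dt=1e-3, T=3.0):
    I = m * l**2
    q, qd = 0.0, 0.0
    err = []
    for _ in range(int(T / dt)):
        e = q_des - q
        tau = kp * e - kd * qd + m * g * l * np.cos(q)  # gravity compensation
        qdd = (tau - m * g * l * np.cos(q)) / I          # link dynamics
        q += qd * dt
        qd += qdd * dt
        err.append(abs(e))
    err = np.array(err)
    t = np.arange(len(err)) * dt
    # Classical integral performance indices used for gain tuning.
    indices = {"IAE": np.sum(err) * dt,
               "ISE": np.sum(err**2) * dt,
               "ITAE": np.sum(t * err) * dt,
               "ITSE": np.sum(t * err**2) * dt}
    return q, indices

q_final, idx = simulate(q_des=np.pi / 4)
```

With the gravity term cancelled, the closed loop is a linear second-order system, so the PD gains directly set damping and speed; tuning then amounts to minimizing one of the indices (or, in the paper, the phase-trajectory-length criterion) over the gains.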

View Paper


An Affordable Robotic Arm Design for Manipulation in Tabletop Clutter

Authors: Ayberk Özgür and H. Levent Akın
Institution: Boğaziçi University
Year: 2013 · Field: Manipulation, Robotic Arm

In this study, tabletop object manipulation in a cluttered environment using a robotic manipulator is considered. Our focus is on avoiding or manipulating other obstructions/objects to achieve the given task. We propose an affordable robotic manipulator design with five degrees of freedom, where the gripper constitutes one additional degree of freedom. The manipulator has approximately the same length as an adult human’s arm. An accurate simulation model for ROS/Gazebo along with a preliminary motion controller is also presented.

This academic paper features our DYNAMIXEL AX-12A, MX-28T, MX-64T, and MX-106T all-in-one smart actuators. Powered By DYNAMIXEL.

View Paper


Design and Implementation of a Pair of Robot Arms with Bilateral Teleoperation

Authors: Alexander Hattori
Institution: Massachusetts Institute of Technology
Year: 2019 · Field: Bilateral Teleoperation

Abstract

Modern robotics has progressed in the manufacturing industry so that many manual labor tasks in assembly lines can be automated by robots with high speed and high positional accuracy. However, these robots typically cannot perform tasks that require perception or disturbance rejection. Humans are still needed in factories due to their innate ability to understand situations and react accordingly. Teleoperated robots can allow human perception to be combined with the dexterity and safety of a robot as long as the user interface and controls are carefully designed to avoid hindering the operator. Force feedback bilateral teleoperation is one method for providing users with an intuitive user interface and feedback. This thesis documents the design, construction, and implementation of a pair of bilateral teleoperated robotic forearms, each consisting of a 2-degree-of-freedom wrist and a gripper. The forearm uses commercial off-the-shelf actuators in order to keep cost and additional development time low, while also testing the feasibility of using non-custom actuators. Development of the forearms included design and manufacturing of the mechanical assemblies, implementation of a high-speed communication protocol, and tuning of control parameters. Powered By DYNAMIXEL.

View Paper