The Harvard RoboBee is a controlled flapping-wing vehicle which can generate lift force and body torques based on different flapping schemes. One of the challenges in the controller design is that the center of pressure (CoP) of aerodynamic drag is not collocated with the center of mass of the vehicle, which creates additional nonlinear coupling between translational and angular velocities. In this paper, an almost globally asymptotically stable (AGAS) tracking controller is presented by exploiting passive aerodynamic effects to stabilize the attitude dynamics. First, global attitude stability to a vertical orientation in the world frame is shown for an unforced system, which illustrates that the aerodynamic damping on the CoP passively stabilizes the system to align the body vertically in the world frame. Next, a new coordinate system is proposed using a near-identity diffeomorphism that admits a partial feedback linearization with almost globally stable zero dynamics. The behavior of the zero dynamics resembles the dynamics of a 3D pendulum with an aerodynamic damper. Finally, an exponentially stabilizing output tracking controller is proposed with an ultimate bound on the full state dynamics. A variation of LaSalle's invariance principle that does not require a compact forward invariant set is used in the proof of AGAS. Simulation results of the RoboBee tracking a Lissajous curve flight trajectory are provided.
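To make the pendulum analogy concrete, a minimal model of a 3D pendulum with linear aerodynamic damping (an illustrative stand-in for the zero dynamics described above, not the paper's exact equations) is

```latex
J\dot{\omega} = J\omega \times \omega + mg\,\rho \times \Gamma - c\,\omega,
\qquad
\dot{\Gamma} = \Gamma \times \omega,
```

where $\Gamma = R^{\mathsf{T}} e_3$ is the gravity direction resolved in the body frame, $\rho$ locates the center of mass, and $c > 0$ is a damping constant. The total energy $E = \tfrac{1}{2}\omega^{\mathsf{T}} J \omega - mg\,\rho^{\mathsf{T}}\Gamma$ then satisfies $\dot{E} = -c\,\lVert\omega\rVert^2$, which illustrates the mechanism behind damping-based passive attitude stabilization.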
Recent human-in-the-loop (HIL) optimization studies using wearable devices have shown an improved average metabolic reduction by optimizing a small number of control parameters during short-duration walking experiments. However, the slow metabolic dynamics, high measurement noise, and experimental time constraints create challenges for increasing the number of control parameters to be optimized. Prior work applying gradient descent and Bayesian optimization to this problem has decoupled metabolic estimation and control parameter selection using fixed estimation intervals, which imposes a hard limit on the number of parameter evaluations possible in a given time budget. In this work, we take a different approach that couples estimation and parameter selection, allowing the algorithm to spend less time on refining the metabolic estimates for parameters that are unlikely to improve performance over the best observed values. Our approach uses a Kalman filter-based metabolic estimator to formulate an optimal stopping problem during the data acquisition step of standard Bayesian optimization. Performance was analyzed in numerical simulations and in pilot human subject testing with two subjects that involved optimizing six control parameters of a single-joint exosuit and four parameters of a multi-joint exosuit.
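The coupling of estimation and stopping can be sketched in one scalar dimension: a Kalman filter tracks the steady-state metabolic cost through a first-order transient observation model, and data acquisition is abandoned early once the setting is unlikely to beat the incumbent. This is a minimal illustration under assumed dynamics; the function name, time constants, priors, and thresholds are all illustrative, not the study's protocol.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def estimate_with_early_stopping(measure, tau, r_var, best_so_far,
                                 p_min=0.05, dt=5.0, t_max=120.0,
                                 x0=5.0, p0=4.0):
    """Scalar Kalman filter on the steady-state metabolic cost x.
    Each measurement z_k = (1 - exp(-t_k/tau)) * x + noise, reflecting
    first-order metabolic transient dynamics with time constant tau.
    Acquisition stops early once P(x < best_so_far) drops below p_min."""
    x_hat, p_var = x0, p0
    t = 0.0
    while t < t_max:
        t += dt
        h = 1.0 - math.exp(-t / tau)   # transient observation gain
        z = measure(t)
        s = h * h * p_var + r_var      # innovation variance
        k = p_var * h / s              # Kalman gain
        x_hat += k * (z - h * x_hat)
        p_var *= (1.0 - k * h)
        # optimal-stopping heuristic: abandon this parameter setting
        # if it is unlikely to improve on the best observed cost
        if norm_cdf((best_so_far - x_hat) / math.sqrt(p_var)) < p_min:
            break
    return x_hat, p_var, t

# usage: a poor parameter setting (true cost 6.0 vs. incumbent 4.0)
# can be abandoned before the full measurement budget is spent
random.seed(0)
true_cost, tau = 6.0, 42.0
measure = lambda t: (1.0 - math.exp(-t / tau)) * true_cost + random.gauss(0.0, 0.2)
x_hat, p_var, t_stop = estimate_with_early_stopping(measure, tau, 0.04, best_so_far=4.0)
```

The stopping test is what couples estimation to parameter selection: the filter's posterior variance, not a fixed interval, decides when enough data has been collected.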
Contact constraints arise naturally in many robot planning problems. In recent years, a variety of contact-implicit trajectory optimization algorithms have been developed that avoid the pitfalls of mode pre-specification by simultaneously optimizing state, input, and contact force trajectories. However, their reliance on first-order integrators leads to a linear tradeoff between optimization problem size and plan accuracy. To address this limitation, we propose a new family of trajectory optimization algorithms that leverage ideas from discrete variational mechanics to derive higher-order generalizations of the classic time-stepping method of Stewart and Trinkle. By using these dynamics formulations as constraints in direct trajectory optimization algorithms, it is possible to perform contact-implicit trajectory optimization with significantly higher accuracy. For concreteness, we derive a second-order method and evaluate it using several simulated rigid body systems, including an underactuated biped and a quadruped. In addition, we use this second-order method to plan locomotion trajectories for a complex quadrupedal microrobot. The planned trajectories are evaluated on the physical platform and result in a number of performance improvements.
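For intuition about the baseline being generalized, the classic first-order Stewart-Trinkle time step admits a closed-form solution for a single degree of freedom. This sketch is the standard first-order scheme, not the paper's higher-order variational method; the function name and parameters are illustrative, and a general articulated system requires an LCP solver.

```python
def stewart_trinkle_step(q, v, h, m=1.0, g=9.81):
    """One first-order Stewart-Trinkle time step for a 1-DOF point mass
    above rigid ground at q = 0. The contact impulse lam satisfies the
    complementarity condition 0 <= lam  _|_  q_next >= 0, solved here in
    closed form for the scalar case."""
    v_free = v - g * h              # velocity update ignoring contact
    q_free = q + h * v_free         # implicit (end-of-step) position
    if q_free >= 0.0:
        return q_free, v_free, 0.0  # separated: zero contact impulse
    v_next = -q / h                 # contact active: q_next pinned to 0
    lam = m * (v_next - v + g * h)  # impulse from the discrete momentum balance
    return 0.0, v_next, lam

# usage: drop from 1 m; the inelastic formulation settles on the ground,
# after which the impulse balances gravity each step (lam = m*g*h)
q, v = 1.0, 0.0
for _ in range(200):
    q, v = stewart_trinkle_step(q, v, 0.01)[:2]
```

Imposing this step as an equality constraint at each knot point, with the complementarity conditions handled by the NLP solver, is the basic contact-implicit transcription the abstract refers to.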
Parallelism can be used to significantly increase the throughput of computationally expensive algorithms. With the widespread adoption of parallel computing platforms such as GPUs, it is natural to consider whether these architectures can benefit robotics researchers interested in solving trajectory optimization problems online. Differential Dynamic Programming (DDP) algorithms have been shown to achieve some of the best timing performance in robotics tasks by making use of optimized dynamics methods and CPU multi-threading. This paper aims to analyze the benefits and tradeoffs of higher degrees of parallelization using a multiple-shooting variant of DDP implemented on a GPU. We describe our implementation strategy and present results demonstrating its performance compared to an equivalent multi-threaded CPU implementation using several benchmark control tasks. Our results suggest that GPU-based solvers can offer reduced per-iteration computation time and faster convergence in some cases, but in general tradeoffs exist between convergence behavior and degree of parallelism.
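The multiple-shooting structure that exposes this parallelism can be sketched as follows: each segment is rolled out from its own independent start state (these rollouts are the embarrassingly parallel work a GPU batches across threads), and the solver drives the inter-segment defects to zero. The names and the scalar dynamics here are illustrative assumptions, not the paper's implementation.

```python
def rollout_segment(x0, us, f):
    """Integrate one shooting segment forward from its own start state."""
    xs = [x0]
    for u in us:
        xs.append(f(xs[-1], u))
    return xs

def shooting_defects(starts, u_segments, f):
    """Multiple-shooting defects d_i = x_end_i - start_{i+1}.
    The per-segment rollouts are independent of one another, so they can
    be dispatched in parallel; the optimizer then closes the gaps."""
    ends = [rollout_segment(s, us, f)[-1]
            for s, us in zip(starts, u_segments)]
    return [e - s_next for e, s_next in zip(ends[:-1], starts[1:])]

# usage: a scalar integrator x+ = x + 0.1*u with consistent segment
# start states yields (numerically) zero defects
f = lambda x, u: x + 0.1 * u
starts = [0.0, 0.5, 1.0]
u_segments = [[1.0] * 5, [1.0] * 5, [1.0] * 5]
d = shooting_defects(starts, u_segments, f)
```

Shorter segments mean more parallel work per iteration but more defect constraints to close, which is one face of the convergence-versus-parallelism tradeoff the abstract describes.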
Contact interactions are central to robot manipulation and locomotion behaviors. State estimation techniques that explicitly capture the dynamics of contact offer the potential to reduce estimation errors from unplanned contact events and improve closed-loop control performance. This is particularly true in highly dynamic situations where common simplifications like no-slip or quasi-static sliding are violated. Incorporating contact constraints requires care to address the numerical challenges associated with discontinuous dynamics, which make straightforward application of derivative-based techniques such as the Extended Kalman Filter impossible. In this paper, we derive an approximate maximum a posteriori estimator that can handle rigid body contact by explicitly imposing contact constraints in the observation update. We compare the performance of this estimator to an existing state-of-the-art Unscented Kalman Filter designed for estimation through contact and demonstrate the scalability of the approach by estimating the state of a 20-DOF bipedal robot in real time.
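A scalar sketch conveys the idea of a constraint-aware observation update: the MAP update is the usual quadratic trade-off between prior and measurement, but minimized subject to a unilateral (non-penetration) bound. In one dimension the constrained optimum is simply the clipped unconstrained optimum; the function name and numbers are illustrative, and the paper's estimator solves the analogous rigid-body problem with full contact constraints.

```python
def constrained_map_update(x_hat, p_var, z, r_var, phi_min=0.0):
    """Scalar MAP observation update with a unilateral constraint
    x >= phi_min (e.g. a foot height that cannot penetrate the ground).
    Solves  min_x (x - x_hat)^2/p_var + (z - x)^2/r_var  s.t. x >= phi_min.
    For a scalar convex quadratic, the constrained minimizer is the
    unconstrained (Kalman) update projected onto the feasible set."""
    x_unc = (r_var * x_hat + p_var * z) / (p_var + r_var)  # standard KF update
    x_map = max(x_unc, phi_min)     # project onto the contact constraint
    p_new = p_var * r_var / (p_var + r_var)
    return x_map, p_new

# usage: a noisy measurement below the ground plane would drive a plain
# EKF update into penetration; the constrained update stays feasible
x_map, _ = constrained_map_update(x_hat=0.02, p_var=0.04, z=-0.05, r_var=0.01)
```

In the multi-dimensional rigid-body case the projection is no longer a simple clip, so the observation update becomes a small constrained quadratic program solved at each step.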
Planning locomotion trajectories for legged microrobots is challenging because of their complex morphology, high-frequency passive dynamics, and discontinuous contact interactions with their environment. Consequently, such research is often driven by time-consuming experimental methods. As an alternative, we present a framework for systematically modeling, planning, and controlling legged microrobots. We develop a three-dimensional dynamic model of a 1.5 g quadrupedal microrobot with complexity (e.g., number of degrees of freedom) similar to larger-scale legged robots. We then adapt a recently developed variational contact-implicit trajectory optimization method to generate feasible whole-body locomotion plans for this microrobot, and demonstrate that these plans can be tracked with simple joint-space controllers. We plan and execute periodic gaits at multiple stride frequencies and on various surfaces. These gaits achieve high per-cycle velocities, including a maximum of 10.87 mm/cycle, which is 15% faster than previously measured for this microrobot. Furthermore, we plan and execute a vertical jump of 9.96 mm, which is 78% of the microrobot’s center-of-mass height. To the best of our knowledge, this is the first end-to-end demonstration of planning and tracking whole-body dynamic locomotion on a millimeter-scale legged microrobot.
Wearable robotic devices have been shown to substantially reduce the energy expenditure of human walking. However, response variance between participants for fixed control strategies can be high, leading to the hypothesis that individualized controllers could further improve walking economy. Recent studies on human-in-the-loop (HIL) control optimization have elucidated several practical challenges, such as long experimental protocols and low signal-to-noise ratios. Here, we used Bayesian optimization—an algorithm well suited to optimizing noisy performance signals with very limited data—to identify the peak and offset timing of hip extension assistance that minimizes the energy expenditure of walking with a textile-based wearable device. Optimal peak and offset timing were found over an average of 21.4 ± 1.0 min and reduced metabolic cost by 17.4 ± 3.2% compared with walking without the device (mean ± SEM), which represents an improvement of more than 60% in metabolic reduction compared with state-of-the-art devices that only assist hip extension. In addition, our results provide evidence for participant-specific metabolic distributions with respect to peak and offset timing and metabolic landscapes, lending support to the hypothesis that individualized control strategies can offer substantial benefits over fixed control strategies. These results also suggest that this method could have practical impact on improving the performance of wearable robotic devices.
Many critical robotics applications require robustness to disturbances arising from unplanned forces, state uncertainty, and model errors. Motion planning algorithms that explicitly reason about robustness require a coupling of trajectory optimization and feedback design, where the system's closed-loop response to disturbances is optimized. Due to the often-heavy computational demands of solving such problems, the practical application of robust trajectory optimization in robotics has so far been limited. Motivated by recent work on sums-of-squares verification methods for nonlinear systems, we derive a scalable robust trajectory optimization algorithm that optimizes approximate invariant funnels along the trajectory while planning. For the case of ellipsoidal disturbance sets and LQR feedback controllers, the state and input deviations along a nominal trajectory can be computed locally in closed form, permitting fast evaluation of robust cost and constraint functions and their derivatives. The resulting algorithm is a scalable extension of classical direct transcription that demonstrably improves tracking performance over non-robust formulations while incurring only a modest increase in computational cost. We evaluate the algorithm in several simulated robot control tasks.
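The closed-form propagation that makes the robust cost and constraint evaluations fast can be illustrated in one dimension: disturbance-induced deviations evolve through the closed-loop dynamics, and a worst-case bound propagates by a simple recursion. The scalar form below is an illustrative stand-in for the paper's ellipsoidal/LQR machinery; `a_cl_list` and `w_max` are hypothetical names.

```python
def deviation_bounds(a_cl_list, w_max, rho0=0.0):
    """Propagate a worst-case state-deviation bound along a trajectory
    for a scalar closed-loop system delta_{k+1} = a_cl_k * delta_k + w_k
    with bounded disturbances |w_k| <= w_max. The multi-dimensional,
    ellipsoidal analogue used for robust cost/constraint evaluation has
    the same recursive closed form (no extra optimization per step)."""
    bounds = [rho0]
    for a_cl in a_cl_list:
        bounds.append(abs(a_cl) * bounds[-1] + w_max)
    return bounds

# usage: a stabilizing closed-loop gain (|a_cl| < 1) keeps the "funnel"
# bounded, so constraints can be tightened by these margins while planning
rho = deviation_bounds([0.5] * 10, w_max=0.1)
```

Because each bound is an explicit function of the nominal trajectory and feedback gains, its derivatives are also cheap, which is what allows robustness terms to sit inside a standard direct transcription at modest extra cost.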
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on real-time physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (±2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01).
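One concrete ingredient of such a method is the acquisition function that selects the next parameter value to test. A common choice is expected improvement, which has a closed form given a Gaussian posterior; this sketch assumes EI for minimization, which the abstract does not specify, so treat it as illustrative rather than the study's exact algorithm.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """Expected improvement (for minimization) at a candidate point with
    Gaussian posterior N(mu, sigma^2), given the best (lowest) metabolic
    cost observed so far. Bayesian optimization evaluates next the
    parameter value that maximizes this quantity."""
    if sigma <= 0.0:
        return max(best - mu - xi, 0.0)
    z = (best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu - xi) * cdf + sigma * pdf

# usage: an uncertain candidate whose mean beats the incumbent scores
# much higher than a confident candidate that is worse
ei_promising = expected_improvement(mu=3.0, sigma=0.5, best=3.2)
ei_poor = expected_improvement(mu=3.5, sigma=0.05, best=3.2)
```

The balance between the two terms (exploiting a low mean versus exploring high uncertainty) is what makes these methods sample-efficient under the noisy, expensive metabolic measurements described above.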