# Publications

## 2017
Z. R. Manchester, J. I. Lipton, R. J. Wood, and S. Kuindersma, “A Variable Forward-Sweep Wing Design for Improved Perching in Micro Aerial Vehicles,” in AIAA SciTech Forum, 2017.

A micro aerial vehicle with a variable forward-sweep wing is proposed with the goal of enhancing performance and controllability during high-angle-of-attack perching maneuvers. Data is presented from a series of wind tunnel experiments to quantify the aerodynamic effects of forward sweep over a range of angles of attack from -25 degrees to +75 degrees. A nonlinear dynamics model is constructed using the wind tunnel data to gain further insight into aircraft flight dynamics and controllability. Simulated perching trajectories optimized with a direct collocation method indicate that the forward-swept wing configuration can achieve qualitatively different lower-cost perching maneuvers than the straight wing configuration.

## 2016
Z. Manchester and S. Kuindersma, “Derivative-Free Trajectory Optimization with Unscented Dynamic Programming,” in Proceedings of the 55th Conference on Decision and Control (CDC), 2016.

Trajectory optimization algorithms are a core technology behind many modern nonlinear control applications. However, with increasing system complexity, the computation of dynamics derivatives during optimization creates a computational bottleneck, particularly in second-order methods. In this paper, we present a modification of the classical Differential Dynamic Programming (DDP) algorithm that eliminates the computation of dynamics derivatives while maintaining similar convergence properties. Rather than relying on naive finite difference calculations, we propose a deterministic sampling scheme inspired by the Unscented Kalman Filter that propagates a quadratic approximation of the cost-to-go function through the nonlinear dynamics at each time step. Our algorithm takes larger steps than Iterative LQR---a DDP variant that approximates the cost-to-go Hessian using only first derivatives---while maintaining the same computational cost. We present results demonstrating its numerical performance in simulated balancing and aerobatic flight experiments.
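
As a toy illustration of the idea in this abstract (not the paper's algorithm), the snippet below propagates a mean and variance through a nonlinear function using a deterministic sigma-point scheme in the spirit of the Unscented Kalman Filter; the function is evaluated only at sample points, with no derivatives. The function names, the scalar state, and the `kappa` value are illustrative assumptions.

```python
import math

def unscented_propagate(f, mean, var, kappa=1.0):
    """Propagate (mean, var) of a scalar state through f using 3 sigma points."""
    n = 1  # scalar state for simplicity
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    # Only function evaluations -- no Jacobians or Hessians of f are needed.
    ys = [f(x) for x in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# Propagate a small uncertainty through sin(x); the result closely tracks
# what a first-order (derivative-based) linearization would predict.
m, v = unscented_propagate(math.sin, 0.5, 0.01)
```

The same derivative-free principle underlies the paper's scheme, which propagates a quadratic cost-to-go approximation through the dynamics at each time step.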

P. Marion, et al., “Director: A User Interface Designed for Robot Operation with Shared Autonomy,” Journal of Field Robotics, 2016.

Operating a high degree of freedom mobile manipulator, such as a humanoid, in a field scenario requires constant situational awareness, capable perception modules, and effective mechanisms for interactive motion planning and control. A well-designed operator interface presents the operator with enough context to quickly carry out a mission and the flexibility to handle unforeseen operating scenarios robustly. By contrast, an unintuitive user interface can increase the risk of catastrophic operator error by overwhelming the user with unnecessary information. With these principles in mind, we present the philosophy and design decisions behind Director---the open-source user interface developed by Team MIT to pilot the Atlas robot in the DARPA Robotics Challenge (DRC). At the heart of Director is an integrated task execution system that specifies sequences of actions needed to achieve a substantive task, such as drilling a wall or climbing a staircase. These task sequences, developed a priori, make online queries to automated perception and planning algorithms with outputs that can be reviewed by the operator and executed by our whole-body controller. Our use of Director at the DRC resulted in efficient high-level task operation while being fully competitive with approaches focusing on teleoperation by highly-trained operators. We discuss the primary interface elements that comprise the Director and provide analysis of its successful use at the DRC.

P.-B. Wieber, R. Tedrake, and S. Kuindersma, “Modeling and Control of Legged Systems,” in Springer Handbook of Robotics, 2nd Ed., B. Siciliano and O. Khatib, Eds. Springer, 2016.

The promise of legged robots over standard wheeled robots is to provide improved mobility over rough terrain. This promise builds on the decoupling between the environment and the main body of the robot that the presence of articulated legs allows, with two consequences. First, the motion of the main body of the robot can be made largely independent from the roughness of the terrain, within the kinematic limits of the legs: legs provide an active suspension system. Indeed, one of the most advanced hexapod robots of the 1980s was aptly called the Adaptive Suspension Vehicle. Second, this decoupling allows legs to temporarily leave their contact with the ground: discontinuous terrain with only isolated footholds can be traversed, allowing the robot to reach places that would otherwise be completely inaccessible. Note that having feet firmly planted on the ground is not mandatory here: skating is an equally interesting option, although rarely approached so far in robotics.

Unfortunately, this promise comes at the cost of a hindering increase in complexity. It is only with the unveiling of the Honda P2 humanoid robot in 1996, and later of the Boston Dynamics BigDog quadruped robot in 2005, that legged robots finally began to deliver real-life capabilities that are just beginning to match the long-sought animal-like mobility over rough terrain. In fact, work in legged robotics has even contributed to the understanding of human and animal locomotion, as evidenced by the many fruitful collaborations between robotics and biomechanics researchers over legged locomotion.

M. Posa, S. Kuindersma, and R. Tedrake, “Optimization and stabilization of trajectories for constrained dynamical systems,” in Proceedings of the International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016.

Contact constraints, such as those between a foot and the ground or a hand and an object, are inherent in many robotic tasks. These constraints define a manifold of feasible states; while well understood mathematically, they pose numerical challenges to many algorithms for planning and controlling whole-body dynamic motions. In this paper, we present an approach to the synthesis and stabilization of complex trajectories for both fully-actuated and underactuated robots subject to contact constraints. We introduce a trajectory optimization algorithm (DIRCON) that extends the direct collocation method, naturally incorporating manifold constraints to produce a nominal trajectory with third-order integration accuracy---a critical feature for achieving reliable tracking control. We adapt the classical time-varying linear quadratic regulator to produce a local cost-to-go in the manifold tangent plane. Finally, we descend the cost-to-go using a quadratic program that incorporates unilateral friction and torque constraints. This approach is demonstrated on three complex walking and climbing locomotion examples in simulation.

S. Kuindersma, et al., “Optimization-based locomotion planning, estimation, and control design for Atlas,” Autonomous Robots, vol. 40, no. 3, pp. 429–455, 2016.

This paper describes a collection of optimization algorithms for achieving dynamic planning, control, and state estimation for a bipedal robot designed to operate reliably in complex environments. To make challenging locomotion tasks tractable, we describe several novel applications of convex, mixed-integer, and sparse nonlinear optimization to problems ranging from footstep placement to whole-body planning and control. We also present a state estimator formulation that, when combined with our walking controller, permits highly precise execution of extended walking plans over non-flat terrain. We describe our complete system integration and experiments carried out on Atlas, a full-size hydraulic humanoid robot built by Boston Dynamics, Inc.

## 2015
M. Fallon, et al., “An Architecture for Online Affordance-based Perception and Whole-body Planning,” Journal of Field Robotics, vol. 32, no. 2, pp. 229–254, 2015.

The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.

R. Tedrake, S. Kuindersma, R. Deits, and K. Miura, “A closed-form solution for real-time ZMP gait generation and feedback stabilization,” in IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, 2015.

Here we present a closed-form solution to the continuous time-varying linear quadratic regulator (LQR) problem for the zero-moment point (ZMP) tracking controller. This generalizes previous analytical solutions for gait generation by allowing "soft" tracking (with a quadratic cost) of the desired ZMP, and by providing the feedback gains for the resulting time-varying optimal controller. This enables extremely fast computation, with the number of operations linear in the number of spline segments representing the desired ZMP. Results are presented using the Atlas humanoid robot where dynamic walking is achieved by recomputing the optimal controller online.
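
As background for this abstract, a hedged sketch (not the paper's closed-form solution) of the linear inverted pendulum relation that ZMP tracking controllers build on: x_zmp = x_com - (z0/g) * x_com_ddot for a COM held at constant height z0. The numerical values below are illustrative assumptions.

```python
G = 9.81   # gravity, m/s^2
Z0 = 0.9   # assumed constant COM height, m

def zmp(x_com, x_com_ddot):
    """ZMP of a linear inverted pendulum with the COM at fixed height Z0."""
    return x_com - (Z0 / G) * x_com_ddot

def com_accel(x_com, x_zmp):
    """Inverse relation: COM acceleration that realizes a desired ZMP."""
    return (G / Z0) * (x_com - x_zmp)

# Forward-simulate the COM while pinning the ZMP at the origin: the COM
# diverges exponentially, which is why a stabilizing time-varying feedback
# (such as the LQR gains this paper derives) is needed on top of the plan.
x, xd, dt = 0.05, 0.0, 0.001
for _ in range(1000):  # one second of simulation
    xdd = com_accel(x, 0.0)
    xd += xdd * dt
    x += xd * dt
```

The open-loop divergence of this simple model is the reason the paper's feedback gains, not just the gait plan, matter for hardware walking.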

## 2014
S. Kuindersma, F. Permenter, and R. Tedrake, “An Efficiently Solvable Quadratic Program for Stabilizing Dynamic Locomotion,” in Proceedings of the International Conference on Robotics and Automation (ICRA), Hong Kong, China, 2014, pp. 2589–2594.

We describe a whole-body dynamic walking controller implemented as a convex quadratic program. The controller solves an optimal control problem using an approximate value function derived from a simple walking model while respecting the dynamic, input, and contact constraints of the full robot dynamics. By exploiting sparsity and temporal structure in the optimization with a custom active-set algorithm, we surpass the performance of the best available off-the-shelf solvers and achieve 1kHz control rates for a 34-DOF humanoid. We describe applications to balancing and walking tasks using the simulated Atlas robot in the DARPA Virtual Robotics Challenge.
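
To make the problem class in this abstract concrete, here is a toy illustration (not the paper's custom active-set solver): a whole-body QP controller solves, at each control step, a convex problem of the form min_u 0.5 u'Qu + c'u subject to constraints such as Au = b. For the equality-constrained case the optimum satisfies the linear KKT system [[Q, A'], [A, 0]] [u; lam] = [-c; b]. The 2-variable problem and helper below are illustrative assumptions.

```python
def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    M = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Tiny QP: min 0.5*(u1^2 + u2^2)  s.t.  u1 + u2 = 1.
# KKT system stacks Q with the constraint row A = [1, 1]:
kkt = [[1.0, 0.0, 1.0],
       [0.0, 1.0, 1.0],
       [1.0, 1.0, 0.0]]
rhs = [0.0, 0.0, 1.0]  # [-c; b] with c = 0
u1, u2, lam = solve_linear(kkt, rhs)  # symmetric optimum: u1 = u2 = 0.5
```

Inequality constraints (friction cones, torque limits) are what turn this single linear solve into the active-set iteration the paper accelerates with warm starts.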

R. Tedrake, et al., “A summary of team MIT's approach to the virtual robotics challenge,” in Proceedings of the International Conference on Robotics and Automation (ICRA), 2014, pp. 2087–2087.

The paper describes the system developed by researchers from MIT for the Defense Advanced Research Projects Agency's (DARPA) Virtual Robotics Challenge (VRC), held in June 2013. The VRC was the first competition in the DARPA Robotics Challenge (DRC), a program that aims to "develop ground robotic capabilities to execute complex tasks in dangerous, degraded, human-engineered environments." The VRC required teams to guide a model of Boston Dynamics' humanoid robot, Atlas, through driving, walking, and manipulation tasks in simulation. Team MIT's user interface, the Viewer, provided the operator with a unified representation of all available information. A 3D rendering of the robot depicted its most recently estimated body state with respect to the surrounding environment, represented by point clouds and texture-mapped meshes as sensed by on-board LIDAR and fused over time.

## 2013
G. Konidaris, S. Kuindersma, S. Niekum, R. Grupen, and A. Barto, “Robot Learning: Some Recent Examples,” in Proceedings of the Sixteenth Yale Workshop on Adaptive and Learning Systems, 2013, pp. 71-76.

This paper provides a brief overview of three recent contributions to robot learning developed by researchers at the University of Massachusetts Amherst. The first is the use of policy search algorithms that exploit new techniques in nonparametric heteroscedastic regression to directly model the policy-dependent distribution of cost. Experiments demonstrate dynamic stabilization of a mobile manipulator through learning flexible, risk-sensitive policies in very few trials. The second contribution is a novel method for robot learning from unstructured demonstrations that permits intelligent sequencing of primitives to create novel, adaptive behavior. This is demonstrated on a furniture assembly task using the PR2 mobile manipulator. The third contribution is a robot system that autonomously acquires skills through interaction with its environment.

S. Kuindersma, R. Grupen, and A. Barto, “Variable Risk Control via Stochastic Optimization,” International Journal of Robotics Research, vol. 32, no. 7, pp. 806–825, 2013.

We present new global and local policy search algorithms suitable for problems with policy-dependent cost variance (or risk), a property present in many robot control tasks. These algorithms exploit new techniques in nonparametric heteroscedastic regression to directly model the policy-dependent distribution of cost. For local search, the learned cost model can be used as a critic for performing risk-sensitive gradient descent. Alternatively, decision-theoretic criteria can be applied to globally select policies to balance exploration and exploitation in a principled way, or to perform greedy minimization with respect to various risk-sensitive criteria. This separation of learning and policy selection permits variable risk control, where risk sensitivity can be flexibly adjusted and appropriate policies can be selected at runtime without relearning. We describe experiments in dynamic stabilization and manipulation with a mobile manipulator that demonstrate learning of flexible, risk-sensitive policies in very few trials.

## 2012
G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto, “Robot learning from demonstration by constructing skill trees,” The International Journal of Robotics Research, vol. 31, no. 3, pp. 360–375, 2012.

We describe CST, an online algorithm for constructing skill trees from demonstration trajectories. CST segments a demonstration trajectory into a chain of component skills, where each skill has a goal and is assigned a suitable abstraction from an abstraction library. These properties permit skills to be improved efficiently using a policy learning algorithm. Chains from multiple demonstration trajectories are merged into a skill tree. We show that CST can be used to acquire skills from human demonstration in a dynamic continuous domain, and from both expert demonstration and learned control sequences on the uBot-5 mobile manipulator.

S. Kuindersma, R. Grupen, and A. Barto, “Variable Risk Dynamic Mobile Manipulation,” in RSS 2012 Workshop on Mobile Manipulation, Sydney, Australia, 2012.

The ability to operate effectively in a variety of contexts will be a critical attribute of deployed mobile manipulators. In general, a variety of properties, such as battery charge, workspace constraints, and the presence of dangerous obstacles, will determine the suitability of particular control policies. Some context changes will cause shifts in risk sensitivity, or tendency to seek or avoid policies with high performance variation. We describe a policy search algorithm designed to address the problem of variable risk control. We generalize the simple stochastic gradient descent update to the risk-sensitive case, and show that, under certain conditions, it leads to an unbiased estimate of the gradient of the risk-sensitive objective. We show that the local critic structure used in the update can be exploited to interweave offline and online search to select local greedy policies or quickly change risk sensitivity. We evaluate the algorithm in experiments with a dynamically stable mobile manipulator lifting a heavy liquid-filled bottle while balancing.

S. Kuindersma, R. Grupen, and A. Barto, “Variational Bayesian Optimization for Runtime Risk-Sensitive Control,” in Robotics: Science and Systems VIII (RSS), Sydney, Australia, 2012, pp. 201–206.

We present a new Bayesian policy search algorithm suitable for problems with policy-dependent cost variance, a property present in many robot control tasks. We extend recent work on variational heteroscedastic Gaussian processes to the optimization case to achieve efficient minimization of very noisy cost signals. In contrast to most policy search algorithms, our method explicitly models the cost variance in regions of low expected cost and permits runtime adjustment of risk sensitivity without relearning. Our experiments with artificial systems and a real mobile manipulator demonstrate that flexible risk-sensitive policies can be learned in very few trials.

## 2011
G. D. Konidaris, S. R. Kuindersma, R. A. Grupen, and A. G. Barto, “Acquiring Transferrable Mobile Manipulation Skills,” in RSS 2011 Workshop on Mobile Manipulation: Learning to Manipulate, Los Angeles, CA, 2011.

This abstract summarizes recent research on the autonomous acquisition of transferrable manipulation skills. We describe a robot system that learns to sequence a set of innate controllers to solve a task, and then extracts transferrable manipulation skills from the resulting solution. Using the extracted skills, the robot is able to significantly reduce the time required to discover the solution to a second task.

G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto, “Autonomous Skill Acquisition on a Mobile Manipulator,” in Proceedings of the Twenty-Fifth Conference on Artificial Intelligence (AAAI-11), San Francisco, CA, 2011, pp. 1468–1473.

We describe a robot system that autonomously acquires skills through interaction with its environment. The robot learns to sequence the execution of a set of innate controllers to solve a task, extracts and retains components of that solution as portable skills, and then transfers those skills to reduce the time required to learn to solve a second task.

G. D. Konidaris, S. R. Kuindersma, R. A. Grupen, and A. G. Barto, “CST: Constructing Skill Trees by Demonstration,” in Proceedings of the ICML Workshop on New Developments in Imitation Learning, Bellevue, WA, 2011.

We describe recent work on CST, an online algorithm for constructing skill trees from demonstration trajectories. CST segments a demonstration trajectory into a chain of component skills, where each skill has a goal and is assigned a suitable abstraction from an abstraction library. These properties permit skills to be improved efficiently using a policy learning algorithm. Chains from multiple demonstration trajectories are merged into a skill tree. We describe applications of CST to acquiring skills from human demonstration in a dynamic continuous domain and from both expert demonstration and learned control sequences on a mobile manipulator.

S. Kuindersma, R. Grupen, and A. Barto, “Learning Dynamic Arm Motions for Postural Recovery,” in Proceedings of the 11th IEEE-RAS International Conference on Humanoid Robots, Bled, Slovenia, 2011, pp. 7–12.

The biomechanics community has recently made progress toward understanding the role of rapid arm movements in human stability recovery. However, comparatively little work has been done exploring this type of control in humanoid robots. We provide a summary of recent insights into the functional contributions of arm recovery motions in humans and experimentally demonstrate advantages of this behavior on a dynamically stable mobile manipulator. Using Bayesian optimization, the robot efficiently discovers policies that reduce total energy expenditure and recovery footprint, and increase its ability to stabilize after large impacts.

## 2010
S. Kuindersma, “Control Model Learning for Whole-Body Mobile Manipulation,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2010.