
Spring 2010 seminars Christopher Brooks, 11 Aug 2010 Last updated: 16 Dec 2011
 Bipedal Robotic Walking: Motivating the Study of Hybrid Phenomena

Professor Aaron D. Ames, Texas A&M University, May 4, 2010.
Time and location: 4:00-5:00pm (540 Cory)
Bipedal walking provides a quintessential example of complex dynamically stable behavior. Hybrid systems, that is, systems with both continuous and discrete behavior, provide a modeling paradigm in which to capture the behavior of highly complex systems, resulting in new phenomena not found in purely continuous or purely discrete systems. Bipedal walking robots are naturally modeled as hybrid systems, and it is the confluence of these two areas, bipedal walking and hybrid systems, that affords a unique opportunity to further the understanding of each. The applications of these ideas are far reaching, resulting in new research directions in hybrid systems and in a new understanding of the fundamental mechanisms underlying walking.
In this talk, I will use hybrid systems to model complex 3D bipedal robots, obtain walking in these robots through a combination of novel control laws, and use bipedal robots to study a phenomenon unique to hybrid systems: Zeno behavior. I will begin by discussing how hybrid systems naturally model a wide range of mechanical systems undergoing impacts, including various types of robotic walkers. I will show how Zeno behavior naturally occurs in systems of this form, give conditions for its existence, and use these conditions to show that bipedal walking exists even in the presence of Zeno behavior. I will then consider a complex 3D bipedal robot with a hybrid model motivated by human walking, and show how the hybrid nature of this system allows for the design of control laws that result in stable walking. These are achieved by using geometric reduction to decouple the sagittal and coronal dynamics of the walker, much as humans do, to reduce the problem to a lower dimensional setting. Simulation results will be presented and the robotic walking will be compared to real human walking data. Finally, possible applications of these ideas to the design of prosthetic devices will be discussed.
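To make the Zeno idea concrete, here is the standard textbook example of a hybrid system with Zeno behavior, the bouncing ball (an illustrative sketch only, not the speaker's robot models). Continuous dynamics are ballistic flight; the discrete event is an impact that scales velocity by a restitution coefficient e < 1, so impact times form a geometric series accumulating at a finite "Zeno time".

```python
def bounce_times(v0, e=0.5, g=9.8, n_events=30):
    """Return the first n_events impact times for a ball launched
    upward at speed v0 from the ground."""
    times = []
    t, v = 0.0, v0
    for _ in range(n_events):
        t += 2 * v / g       # flight time between consecutive impacts
        times.append(t)
        v *= e               # impact (reset) map: velocity loses energy
    return times

def zeno_time(v0, e=0.5, g=9.8):
    """Closed-form accumulation point of the impact times:
    the sum of the geometric series (2*v0/g) * e^k."""
    return (2 * v0 / g) / (1 - e)

ts = bounce_times(10.0)
print(ts[-1], zeno_time(10.0))  # the impact times converge to the Zeno time
```

Infinitely many discrete transitions occur before the Zeno time, which is exactly the kind of phenomenon a simulator or stability proof for hybrid walkers has to handle.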
 Toward a Theory of Secure Networked Control Systems

Saurabh Amin, UC Berkeley, April 27, 2010.
Time and location: 4:00-5:00pm (540 Cory)
The conceptual shift of sensor rich systems from passive
information gathering viewpoint to a networked control viewpoint has
resulted in new challenges in ensuring their secure and dependable
operation. These challenges will require the development of resilient
control techniques as well as introduction of novel security and trust
principles. In this talk, we will discuss the main system-theoretic
principles for securing networked control systems and report progress in
(1) Threat assessment, (2) Attack detection and recovery, and (3)
Resilience under attacks. We discuss models of denial-of-service (DoS) and
deception attacks on control systems, and present control synthesis
methods that are robust against these attacks. We present results from
security analysis of two example control systems: the Gignac water
management system located in Southern France, and the Tennessee-Eastman
process control system. For the Gignac water management system, we use the
knowledge of approximate system dynamics and associated regulatory
controller design to analyze the impact of deception attacks on the
performance of the closed-loop system. We test the proposed attack scheme
in simulation and show that the attack is indeed realizable in practice by
implementing it on the physical canal system. We also discuss the
theoretical limits of stabilizability and detectability of deception
attacks for the shallow water model of water flow dynamics in canals. For
the Tennessee-Eastman process control system, we discuss the use of
statistical anomaly detection methods in detecting deception attacks and
also present an analysis of stealthy attacks. Such attacks can be carried
out by an adaptive adversary who tries to evade the anomaly detection
scheme. We will end the talk by presenting similar results for state
estimation schemes in electric power systems.
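The residual-based detection idea underlying statistical anomaly detectors can be sketched in a few lines (an illustrative toy, not the speaker's exact scheme; the model, gain, and threshold values below are made up). The defender predicts the next sensor reading from a scalar plant model and raises an alarm when the prediction residual is large; a "stealthy" adversary is one who keeps the injected bias below the alarm threshold.

```python
def detect_attacks(measurements, a=0.9, threshold=0.5):
    """Flag time steps where the measurement deviates from the model
    prediction x[k] = a * x[k-1] by more than the threshold."""
    alarms = []
    for k in range(1, len(measurements)):
        residual = abs(measurements[k] - a * measurements[k - 1])
        alarms.append(residual > threshold)
    return alarms

clean = [1.0, 0.9, 0.81, 0.729]     # follows x[k] = 0.9 * x[k-1] exactly
attacked = [1.0, 0.9, 2.0, 1.8]     # deception bias injected at step 2
print(detect_attacks(clean))        # [False, False, False]
print(detect_attacks(attacked))     # alarm fires at the injected step
```

Real detectors replace the threshold test with chi-squared or CUSUM statistics over a full state estimator, but the stealthiness trade-off is the same: attacks below the detector's sensitivity pass unnoticed.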
 Saving Energy by Reducing Traffic Flow Instabilities

Berthold K. Horn, MIT, April 26, 2010.
Time and location: 4:00-5:00pm (540 Cory)
Per-lane throughput of highways is limited by the appearance of
traffic flow instabilities at high densities. Instabilities greatly
increase the average travel time and increase fuel consumption, not to
mention fray nerves and increase wear and tear on equipment. Many
different discrete and continuous models are able to "explain" the
instabilities, but few have been successful in suggesting methods for
reducing the instabilities. What is needed is a way of reducing the
amplitude of perturbations of all frequencies without violating
existing physical constraints. A counterintuitive control scheme can
do just that, and there is a convincing physical analog to this.
Machine vision may be the cheapest way to provide the
required distance and velocity information, since high quality camera
chips are now less than $1 apiece, while competing radar and sonar
equipment costs orders of magnitude more.
 Planning and Learning in Information Space

Nicholas Roy, MIT, April 20, 2010.
Time and location: 4:00-5:00pm (540 Cory)
Decision making with imperfect knowledge is an essential capability
for unmanned vehicles operating in populated, dynamic domains. For
example, a UAV flying autonomously indoors will not be able to rely on
GPS for position estimation, but must instead use onboard sensors to track
its position and map the obstacles in its environment. The planned
trajectories for such a vehicle must therefore incorporate sensor
limitations to avoid collisions and to ensure accurate state
estimation for stable flight; that is, the planner must be able
to predict and avoid uncertainty in the state, in the dynamics and in
the model of the world. Incorporating uncertainty requires planning in
information space, which leads to substantial computational cost but
allows our unmanned vehicles to plan deliberate sensing actions that
can not only improve the state estimate, but even improve the
vehicle's model of the world and how people interact with the vehicle.
I will discuss recent results from my group in planning in information
space; our algorithms allow robots to generate plans that are robust
to state and model uncertainty, while planning to learn more about the
world. I will describe the navigation system for a quadrotor
helicopter flying autonomously without GPS using laser rangefinding,
and will show how these results extend to autonomous mapping, general
tasks with imperfect information, and human-robot interaction.
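The state-estimation core that information-space planning reasons about can be sketched as a discrete Bayes filter (an illustrative toy, not the speaker's navigation system; the belief and likelihood values are made up). Planning in information space means choosing actions by their predicted effect on this belief, for example preferring motions whose sensor readings will reduce its entropy.

```python
import math

def bayes_update(belief, likelihood):
    """Condition a discrete belief over states on a sensor likelihood."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    z = sum(posterior)                      # normalizing constant
    return [p / z for p in posterior]

def entropy(belief):
    """Shannon entropy in bits: the planner's uncertainty measure."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

belief = [0.25, 0.25, 0.25, 0.25]           # uniform over 4 map cells
good_obs = [0.9, 0.05, 0.03, 0.02]          # an informative sensor reading
posterior = bayes_update(belief, good_obs)
print(posterior)
print(entropy(posterior) < entropy(belief))  # True: the reading was worth taking
```

An information-space planner scores candidate trajectories by expected posterior entropy rather than by distance alone, which is what lets it plan deliberate sensing actions.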
 SILVER: Synthesis Using Integrated Learning and Verification

Susmit Jha, University of California Berkeley, April 08, 2010.
Time and location: 4:30-5:30pm (540 Cory)
We present a novel approach to automatic synthesis based on a combination of algorithmic learning from examples and standard verification techniques. Unlike other compiler-like synthesis approaches that require a complete specification, our technique is applicable to domains where the specification is only available partially or as an interactive oracle. We have applied our synthesis approach to automated program understanding, synthesis of bit-manipulation programs, and design of discrete controllers for cyber-physical systems. In this talk, we focus on automatic synthesis of discrete controllers. We show how our technique can be used to synthesize controllers that satisfy safety requirements. Our technique can also be used by designers as an interactive aid to synthesize controllers with optimal performance.
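The learner/verifier interaction described above can be sketched as a generic counterexample-guided (CEGIS-style) loop (an illustrative rendering, not SILVER's actual algorithm; the threshold-controller setting is invented for the example). The learner proposes a candidate from the examples seen so far; the verifier checks it against the full specification and, on failure, returns a counterexample that is added to the example set.

```python
def synthesize_threshold(spec, states):
    """Find t such that the controller (x >= t) matches spec on all states.
    spec maps a state to the desired boolean output."""
    examples = []
    while True:
        # Learner: smallest threshold consistent with the negative examples.
        off = [x for x, want in examples if not want]
        t = (max(off) + 1) if off else min(states)
        candidate = lambda x: x >= t
        # Verifier: exhaustively search the state space for a counterexample.
        cex = next((x for x in states if candidate(x) != spec(x)), None)
        if cex is None:
            return t                       # verified: synthesis succeeded
        examples.append((cex, spec(cex)))  # learn from the counterexample

print(synthesize_threshold(lambda x: x >= 42, range(0, 100)))  # → 42
```

In a real setting the verifier is a model checker or SMT solver rather than an exhaustive search, but the loop structure, propose, verify, refute, refine, is the same.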
 Advances in Embedded Software Synthesis from Dataflow Models

Soheil Ghiasi, University of California Davis, April 06, 2010.
Time and location: 4-5pm (540 Cory)
Multiprocessor systems are going to find a larger share in the embedded application space, due to a number of economic and technical imperatives. Productive development of applications for multiprocessors, however, remains challenging. In this context, I present our results on synthesis of streaming applications from dataflow models targeting such parallel platforms. I present a provably effective algorithm for task assignment, a key step in the compilation flow, to two parallel processors, when limited memory resources and application throughput form the outstanding constraint and objective, respectively. The technique is versatile in that it is formally extensible to a number of different formulations. In addition, I present our results on code size optimization, via enhanced buffer allocation during software synthesis from models, to deal with processors' limited memory resources.
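The flavor of the assignment problem can be shown on a toy instance (an illustrative brute-force sketch, not the speaker's provably effective algorithm; the workloads and capacities are made up). For a chain of streaming tasks split across two processors at a single cut point, throughput is limited by the more heavily loaded processor, so we minimize that bottleneck subject to each processor's memory capacity.

```python
def best_cut(work, mem, cap1, cap2):
    """work[i], mem[i]: compute load and memory footprint of task i.
    Returns (cut, bottleneck) where tasks [0, cut) go to processor 1,
    or None if no cut satisfies both memory capacities."""
    best = None
    for cut in range(len(work) + 1):
        m1, m2 = sum(mem[:cut]), sum(mem[cut:])
        if m1 > cap1 or m2 > cap2:
            continue                       # violates a memory constraint
        load = max(sum(work[:cut]), sum(work[cut:]))
        if best is None or load < best[1]:
            best = (cut, load)             # smaller bottleneck = higher throughput
    return best

work = [3, 1, 4, 1, 5]
mem = [2, 2, 2, 2, 2]
print(best_cut(work, mem, cap1=6, cap2=6))  # → (3, 8)
```

With general task graphs and buffer memory the problem is much harder, which is where a formally analyzed algorithm earns its keep.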
 A Model-Based End-to-End Tool Chain for the Probabilistic Analysis of Complex Systems

Alessandro Pinto, United Technologies Research Center, March 30, 2010.
Time and location: 4-5pm (540 Cory)
We present a model-based environment for the probabilistic analysis of
systems operating under uncertain conditions. This uncertainty may
arise from either the environment in which a system operates or the
platforms on which it executes. Our integrated tool, called StoNES
(Stochastic analysis of Networked Embedded Systems), automates the
model transformation and probabilistic analysis of systems. The input
specification is a combination of models: the functional specification
is captured in Simulink/Stateflow, while the architecture is modeled
using AADL. StoNES translates this description into a parametric
probabilistic model. The probabilistic model can be analyzed using
standard techniques for continuous and discrete time Markov Chains. We
apply our translation and analysis methodology to explore the
tradeoff between sensor accuracy and computational speed for the
vision algorithm of an autonomous helicopter system.
 An Algebra of Pareto Points

Marc Geilen, Eindhoven University of Technology, March 16, 2010.
Time and location: 4-5pm (540 Cory)
Multicriteria optimization problems occur naturally in a wide range of engineering practices. Pareto analysis has proven to be a powerful tool to characterize potentially interesting realizations of engineering problems, and it is used frequently for design-space exploration. Depending on
the ultimate optimization goals, one of the Pareto-optimal alternatives is the optimal realization. It often happens in a design process that partial design decisions have to be taken, leaving other aspects of the optimization problem to be decided at a later stage, and that Pareto-optimal solutions for a system have to be composed (dynamically) from Pareto-optimal configurations of components. To support this process, we have introduced a novel algebraic mathematical framework for reasoning about multi-objective trade-offs. This talk introduces the algebraic framework and illustrates it with some applications in the domains of wireless multimedia, wireless sensor networks, and run-time management of embedded systems.
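Two of the operators such an algebra needs, Pareto minimization and composition, can be sketched over finite sets of cost vectors (an illustrative rendering in the spirit of the talk, not its exact formulation; the component names and cost values are made up). Composition here sums the costs of independent components, and re-minimizing after composition is where the algebraic view pays off.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and differs."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def min_pareto(configs):
    """Keep only the Pareto-optimal (non-dominated) configurations."""
    return {c for c in configs
            if not any(dominates(d, c) for d in configs)}

def compose(set1, set2):
    """Combine two components by adding their (independent) cost vectors."""
    return {tuple(x + y for x, y in zip(a, b)) for a in set1 for b in set2}

cpu = {(1, 5), (2, 3), (4, 1)}           # (energy, latency) alternatives
radio = {(1, 4), (3, 2)}
print(min_pareto(compose(cpu, radio)))   # Pareto points of the composed system
```

Because minimization distributes over composition in useful ways, Pareto sets of components can be pruned early and composed later, which is exactly what run-time reconfiguration needs.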
 Robust Uncertainty Management Methods Applied to UAV Search and Tracking Problems

Andrzej Banaszuk, United Technologies Research Center, March 09, 2010.
Time and location: 4-5pm (540 Cory)
The objective of the DARPA DSO Robust Uncertainty Management (RUM) project was to develop methodology and tools for quantifying uncertainty in ways that are orders of magnitude faster than Monte Carlo, with near-linear scaling in the system size, and to demonstrate them in molecular dynamics and UAV search challenge problems. This research was conducted by a team including UTRC, UCSB, Caltech, Stanford, Yale, Georgia Tech, Princeton, Aimdyn Inc., and PlainSight Inc.
Uncertainty Quantification. Several methods, including Polynomial Chaos and new Stochastic Response Surface and Dynamic Sampling methods, allowed the team to calculate the mean and variance of the phase transition temperature in molecular dynamics calculations with 10K atoms 2000 times faster than using Monte Carlo sampling. We also proposed an iterative UQ approach that exploits weak subsystem interactions to overcome the curse of dimensionality and radically accelerate uncertainty quantification in large systems.
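The idea behind Polynomial Chaos can be shown on a toy problem (illustrative only; the talk's methods handle vastly larger systems, and the quadratic below is invented for the example). For y = f(x) with x standard Gaussian, expanding f in Hermite polynomials He_k gives the mean and variance directly from the coefficients, with no sampling: mean = c0 and var = sum of c_k^2 * k!. Here f(x) = x^2 + 2x + 3 = He_2(x) + 2*He_1(x) + 4, since He_1(x) = x and He_2(x) = x^2 - 1.

```python
import math
import random

def pc_mean_var(coeffs):
    """coeffs[k] multiplies He_k; returns (mean, variance) exactly."""
    mean = coeffs[0]
    var = sum(c * c * math.factorial(k)
              for k, c in enumerate(coeffs) if k > 0)
    return mean, var

def mc_mean_var(f, n=100_000, seed=0):
    """Plain Monte Carlo baseline for comparison."""
    rng = random.Random(seed)
    ys = [f(rng.gauss(0, 1)) for _ in range(n)]
    m = sum(ys) / n
    return m, sum((y - m) ** 2 for y in ys) / n

print(pc_mean_var([4, 2, 1]))                    # exact: (4, 6)
print(mc_mean_var(lambda x: x * x + 2 * x + 3))  # MC estimate, 100k samples
```

The spectral expansion delivers exact moments from three coefficients where Monte Carlo needs tens of thousands of samples, which is the source of the speedups quoted above.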
UAV Search and Tracking Algorithms. The new search strategies achieved about a 2x reduction in median search time compared with a smart lawnmower strategy in a challenge problem involving 50 simulated UAVs searching for a stationary target in complex terrain using noisy sensors with an uncertain footprint. One method uses a trajectory planner based on a library of precomputed elementary UAV trajectory segments that individually satisfy the vehicle dynamics and can be interlocked to produce large-scale roadmaps. We will also discuss recent advances in the problem of tracking multiple mobile targets in an adversarial urban environment.
 Embedded Convex Optimization

Jacob Mattingley, Stanford University, March 02, 2010.
Time and location: 4-5pm (540 Cory)
Convex optimization is widely used in signal processing, automatic control,
networking, communications, machine learning and finance, among other fields.
Even though convex optimization problems are particularly amenable to rapid,
reliable solution, they are mostly only used in offline applications, rather
than in embedded systems, or in systems with real-time constraints. That is
starting to change, thanks to improved computer performance and better
algorithms. One recent advance that makes convex optimization easier to use is
automatic code generation. Here, a computer generates a fast, problem-specific
solver for later online use, from a high-level description. In this talk I will
explain these ideas and give some examples, focusing in particular on automatic
code generation, and how it can be used to develop solvers fast enough for use
in real-time and embedded applications.
Joint work with Stephen Boyd and Yang Wang.
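A fixed-iteration solver loop gives the flavor of what generated embedded solvers look like (an illustrative sketch; real code generators emit far more efficient custom interior-point code, and the problem below is invented). This projected-gradient loop solves a box-constrained least-squares problem, minimize ||Ax - b||^2 subject to 0 <= x <= 1, with a hard-coded iteration count so its worst-case runtime is known in advance, which is what real-time use demands.

```python
def solve_box_ls(A, b, iters=500, step=0.01):
    """Projected gradient for min ||Ax - b||^2 s.t. 0 <= x <= 1."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):               # fixed count: bounded, predictable runtime
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(len(A))]     # residual A x - b
        g = [2 * sum(A[i][j] * r[i] for i in range(len(A)))
             for j in range(n)]          # gradient 2 A^T r
        x = [min(1.0, max(0.0, x[j] - step * g[j]))
             for j in range(n)]          # gradient step, then project onto the box
    return x

A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.3, 2.0]                           # second target lies outside the box
print(solve_box_ls(A, b))                # converges to approximately [0.3, 1.0]
```

Note what is absent: no memory allocation, no data-dependent branching, no generic modeling layer. A code generator specializes all of that away at design time.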
 Software Failure Avoidance Using Discrete Control Theory

Terence Kelly & Yin Wang, HP Labs, February 23, 2010.
Time and location: 4-5pm (540 Cory)
Software reliability is an increasingly pressing concern as the multicore revolution forces parallel programming upon the average programmer. Many existing approaches to software failure are ad hoc, based on best-practice heuristics. Often these approaches impose onerous burdens on developers, entail high runtime performance overheads, or offer no help for unmodified legacy code. We demonstrate that Discrete Control Theory can address software failure avoidance problems.
This talk employs the above methodology in two different application domains: failure avoidance in information technology automation workflows and deadlock avoidance in multithreaded C programs. In the first application, we model workflows using finite-state automata and synthesize controllers for safety and nonblocking specifications, expressed as regular languages, using an automata-based Discrete Control technique called Supervisory Control. The second application addresses the problem of deadlock avoidance in multithreaded C programs that use lock primitives. We exploit compiler technology to model programs as Petri nets and establish a correspondence between deadlock avoidance in the program and the absence of reachable empty siphons in its Petri net model. The technique of Supervision Based on Place Invariants is then used to synthesize the desired control logic, which is implemented using source-to-source translation.
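The supervisory idea, a runtime monitor that postpones lock requests which could lead to deadlock, can be sketched with a much cruder rule than the Petri-net synthesis described above (illustrative only; the talk's approach derives its control logic from the program's siphon structure, not from a fixed lock order). Here a supervisor enforces a global lock order, which makes the classic two-thread, two-lock circular wait impossible.

```python
class OrderedLockSupervisor:
    """Grant a lock request only if it respects a global lock order,
    so a cycle in the wait-for graph can never form."""

    def __init__(self, lock_order):
        self.rank = {name: i for i, name in enumerate(lock_order)}
        self.held = {}                      # thread -> set of held locks

    def may_acquire(self, thread, lock):
        """Allowed only if lock ranks above every lock the thread holds."""
        held = self.held.setdefault(thread, set())
        return all(self.rank[lock] > self.rank[h] for h in held)

    def acquire(self, thread, lock):
        if not self.may_acquire(thread, lock):
            return False                    # supervisor postpones the request
        self.held[thread].add(lock)
        return True

sup = OrderedLockSupervisor(["A", "B"])
print(sup.acquire("t1", "A"))   # True
print(sup.acquire("t2", "B"))   # True
print(sup.acquire("t2", "A"))   # False: out-of-order, would risk circular wait
```

A fixed order is conservative: it blocks some schedules that could never deadlock. Synthesizing control from the program's own Petri net model, as in the talk, permits maximal concurrency instead.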
 Simulation and Hybrid Control of Robotic Marionettes

Professor Todd Murphey, Northwestern University, February 16, 2010.
Time and location: 4-5pm (540 Cory)
Marionettes are physically complex mechanisms characterized by
significant nonlinearity, mechanical degeneracy, and many degrees of
freedom. Nevertheless, puppeteers successfully control marionettes to
imitate human motion and collaborate to create plays where marionettes
physically interact. Choreography for marionette plays can be
interpreted as a type of motion description language that helps
manage the complexity of the dynamic system while still allowing one
to successfully control it. Our approach to controlling marionettes
is to mimic puppeteers by translating choreography into motor commands
using a combination of simulation and hybrid optimal control
techniques tailored for high dimensional nonlinear systems. I will
additionally discuss some of the challenges in building a robotic
marionette platform and discuss our collaboration with Disney to
create reliable, automated marionettes. I will end the talk with a
discussion of how we have applied techniques developed for the
marionettes to a variety of applications, including hybrid observers
for skid-steering and biomechanical simulation and control analysis of
the human hand.
 A Natural-Language-Based Approach to System Modeling—From the Use of Numbers to the Use of Words

Professor Lotfi A. Zadeh, UC Berkeley, February 09, 2010.
Time and location: 4-5pm (540 Cory)
Science deals not with reality but with models of reality. In large measure, scientific progress is driven by a quest for better models of reality. In science and engineering it is traditional to describe a model, M(S), of a system, S, in a mathematical language in which the objects of computation are numbers, functions, relations, equations, etc. The principal modeling languages are the language of differential equations, the language of difference equations, the language of probability theory, the language of finite-state machines and combinations of such languages. The predominant modeling language is the language of differential equations. Natural languages are rich but they are not employed as modeling languages because they lack precision.
The methodology of Computing with Words (CW or CWW) opens the door to an effective new approach to system modeling—an approach which is based on the use of natural languages. A key concept which makes this possible is the concept of precisiation. More specifically, in CW a natural language proposition, p, is precisiated through conversion of p into a computation-ready proposition, p*. p* may be viewed as a model of p. In general, p* has the form of a generalized assignment statement, or, equivalently, the form of a generalized constraint. Unlike conventional constraints, generalized constraints have elasticity. The principal constraints are possibilistic, probabilistic and veristic. A precisiated natural language, PNL, serves four principal purposes: modeling, problem-solving, communication with a machine and definition of imprecise concepts.
The point of departure in CW is a question, q, of the form: What is the value of a variable, X? q is associated with a question-relevant information set, I, an association expressed as X is I, meaning that the answer to q, Ans(q/I), is to be deduced (computed) from I. Typically, I consists of a collection of natural language propositions, p1, ..., pn, which individually or collectively are carriers of information about the value of X.
There are two principal rationales for the use of a natural language as a modeling language, Rationale A and Rationale B. Rationale A. Words are less precise than numbers. Use precisiated words when numbers are not known. Rationale B. Precision carries a cost. If there is a tolerance for imprecision, exploit it by using precisiated words in place of numbers. Through the use of words, a system described by a collection of differential equations is summarized as a collection of linguistic if-then rules in which the antecedent and consequent are generalized constraints. Rationale B has a position of centrality in most applications of fuzzy logic, especially in the realm of control, systems and consumer products. In combination, Rationales A and B are of direct relevance to the conception and design of complex systems in which uncertainty and imprecision play prominent roles.
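A minimal rendering of precisiation as a possibilistic constraint can be sketched in code (a toy illustration of the idea, not Zadeh's full machinery; the fuzzy set "warm" and its breakpoints are made up). The proposition p = "the temperature is warm" is precisiated into p*, a membership function over temperatures; the answer to "what is X?" is then the fuzzy set itself.

```python
def trapezoid(a, b, c, d):
    """Membership function of a trapezoidal fuzzy set: 0 outside [a, d],
    1 on [b, c], linear on the shoulders."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)
    return mu

warm = trapezoid(15, 20, 25, 30)    # p*: precisiation of "temperature is warm"
core = [x for x in range(10, 36) if warm(x) == 1.0]
print(core)          # temperatures fully compatible with "warm": 20..25
print(warm(17.5))    # partial membership on the shoulder: 0.5
```

The elasticity Zadeh describes is visible in the shoulders: unlike a crisp interval, the constraint admits values to a degree, which is what makes linguistic if-then rules composable.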
 On Relational Interfaces

Stavros Tripakis, UC Berkeley, February 02, 2010.
Time and location: 4-5pm (540 Cory)
Compositional methods, which allow smaller components to be assembled
into larger systems both efficiently and correctly, are not simply
a desirable feature in system design: they are a must for designing
large and complex systems. It is unsurprising then that a very large
body of research has tackled compositionality in the past. Interface
theories represent one such body of research. Interfaces capture
essential properties of a component. Interface theories offer theorems
that guarantee substitutability, namely, the ability to replace a
component with another one without compromising the properties of
the overall system. In this talk I will present a theory of relational
interfaces, which extends current theories by capturing
relations between the inputs and outputs of a component.
The theory supports synchronous interfaces, both stateless and stateful.
It includes explicit notions of environments and pluggability, and
offers fundamental properties such as preservation of refinement by
composition, and characterization of pluggability by refinement.
These properties are achieved by making reasonable restrictions on
feedback loops in interface compositions.
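The core objects can be sketched over finite domains (a toy finite rendering of the theory, not its actual formalization, and the example relations are made up). An interface is a relation between inputs and outputs; serial composition connects one interface's outputs to another's inputs, and refinement requires accepting at least the same inputs while producing no new outputs on them, which is what makes substitution safe.

```python
def compose(r1, r2):
    """Serial composition: feed r1's outputs into r2's inputs."""
    return {(i, o2) for (i, o1) in r1 for (i2, o2) in r2 if o1 == i2}

def refines(concrete, abstract):
    """concrete refines abstract: it accepts every input abstract accepts,
    and on those inputs its outputs are a subset of abstract's."""
    abs_in = {i for i, _ in abstract}
    con_in = {i for i, _ in concrete}
    return abs_in <= con_in and all(
        (i, o) in abstract for (i, o) in concrete if i in abs_in)

double = {(0, 0), (1, 2), (2, 4)}        # output = 2 * input
inc = {(0, 1), (2, 3), (4, 5)}           # output = input + 1
print(compose(double, inc))              # {(0, 1), (1, 3), (2, 5)}

det = {(0, 0), (1, 2), (2, 4), (3, 6)}   # same behavior, accepts more inputs
print(refines(det, double))              # True: safe to substitute
```

Preservation of refinement by composition, one of the theorems mentioned above, says that replacing `double` by `det` inside any composition cannot break the properties of the whole.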
 Modeling, Planning, and Control for RobotAssisted Medical Interventions

Allison Okamura, Johns Hopkins University, January 28, 2010.
Time and location: 3-4pm (540 Cory)
Many medical interventions today are qualitatively and quantitatively limited by human physical and cognitive capabilities. Along with advances in imaging and visualization, a novel partnership between clinicians and robotic devices can overcome major technical challenges in fields such as surgery, interventional radiology, and rehabilitation. This talk will discuss several robot-assisted intervention techniques that will extend humans' ability to carry out interventions more accurately and less invasively. I will focus primarily on the development of minimally invasive systems that deliver therapy by steering needles through deformable tissue and around internal obstacles to precisely reach specified targets. This research builds on emerging methods in robotics, imaging, and mechanics in order to optimize needle design, perform needle motion planning, and enable image-guided intraoperative needle control. In addition, I will review recent results in haptic feedback for robot-assisted teleoperated surgery and robot-assisted compensation for cerebellar ataxia. All of these systems incorporate one or more key elements of robotic interventions: (1) quantitative descriptions of patient state, (2) the use of models to plan interventions, (3) the design of devices and control systems that connect information to physical action, and (4) the incorporation of human input in a natural way.
 Residential Load Management using Autonomous Auctions

William Burke, UC Berkeley, January 19, 2010.
Time and location: 4-5pm (540 Cory)
The goal of this research is to develop robust residential load management techniques using advanced controls and intelligence. Residential load management aims to manipulate the electricity demand of a group of consumers to match some desired load shape. Effective load management needs two types of control: systemic and local. Local control determines how each individual power consumer reacts to the signals, and I present a price-responsive intelligent thermostat for use in load management. The thermostat uses low-frequency PWM for HVAC system actuation, which drastically simplifies control over power consumption. Systemic control manipulates the aggregate power by signaling the consumers, and my approach uses game theory to auction electricity to the consumers. The Soft Budget Constrained Mechanism is fast, communication-efficient, and truthful.
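The PWM idea can be sketched in a few lines (an illustrative toy; the period, slot length, and price-response curve below are invented, not taken from the talk). Instead of bang-bang control around a deadband, the compressor runs for a fixed fraction of each long period, so average power is set directly by the duty cycle, and a price signal can scale that duty cycle down.

```python
def pwm_schedule(duty, period_min=60, slot_min=5):
    """Return on/off slots for one PWM period: the compressor runs for
    the first duty fraction of the period, then stays off."""
    slots = period_min // slot_min
    on_slots = round(duty * slots)
    return [i < on_slots for i in range(slots)]

def price_responsive_duty(base_duty, price, ref_price=0.10, elasticity=0.5):
    """Shrink the duty cycle as the price rises above the reference
    (a hypothetical response curve, clamped to [0, 1])."""
    duty = base_duty * (ref_price / price) ** elasticity
    return max(0.0, min(1.0, duty))

print(pwm_schedule(0.5))                  # 6 on-slots followed by 6 off-slots
print(price_responsive_duty(0.6, 0.40))   # higher price -> lower duty cycle
```

Because aggregate power is simply the sum of duty cycles across homes, the auction layer can reason about load shape without modeling each home's thermal dynamics.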
 From High-Level Component-Based Models to Distributed Implementations

Borzoo Bonakdarpour, Verimag Laboratory, January 12, 2010.
Time and location: 4-5pm (540 Cory)
Design and implementation of distributed algorithms often involve many subtleties due to their complex structure, nondeterminism, and low atomicity, as well as the occurrence of unanticipated physical events such as faults. Thus, constructing correct distributed systems has always been a challenge and often subject to serious errors. We propose a method for generating distributed implementations from high-level component-based models that only employ simple synchronization primitives. The method is a sequence of three transformations preserving observational equivalence: (1) a transformation from a global-state to a partial-state model, (2) a transformation which replaces multi-party strong synchronization primitives in atomic components by point-to-point send/receive primitives based on asynchronous message passing, and (3) a final transformation to a concrete distributed implementation based on the platform and architecture. These transformations may use additional mechanisms ensuring mutual exclusion between conflicting interactions. We propose different distributed implementations, from fully centralized to fully decentralized ones. We study the properties of the different transformations, in particular performance criteria such as degree of parallelism and overhead for coordination. In our case studies, we observe that different types of distributed and parallel algorithms may behave significantly differently in terms of performance under different types of transformations.
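The essence of transformation (2), replacing a strong synchronization by send/receive messages, can be sketched with a centralized coordinator (a toy illustration, not the paper's transformation; the "offer"/"commit" protocol below is a simplification invented for the example). Each component sends an offer for a binary interaction; the coordinator pairs offers and sends back commit notifications, preserving the rendezvous semantics over asynchronous messages.

```python
from collections import deque

class Coordinator:
    """Implements a binary rendezvous with point-to-point messages:
    components send offers; paired offers produce commit notifications."""

    def __init__(self):
        self.pending = deque()    # components waiting for a partner
        self.outbox = []          # (component, message) notifications to send

    def receive_offer(self, component):
        self.pending.append(component)
        while len(self.pending) >= 2:          # two offers complete a rendezvous
            a, b = self.pending.popleft(), self.pending.popleft()
            self.outbox += [(a, "commit"), (b, "commit")]

coord = Coordinator()
coord.receive_offer("producer")
print(coord.outbox)        # []: the producer blocks until a partner offers
coord.receive_offer("consumer")
print(coord.outbox)        # both receive commit: the interaction fired
```

This fully centralized scheme is the simplest point on the spectrum mentioned above; decentralized variants distribute the coordinator, trading coordination overhead for parallelism, and conflicting interactions then need the mutual-exclusion mechanisms the abstract refers to.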
