
Presentations from Previous Semesters

Fall 2009 Seminars

Extending Ptolemy to Support Software Model Variant and Configuration Management
Charles Shelton, Robert Bosch Research and Technology Center, December 15, 2009.
Time and location: 4-5pm (540 Cory)

Robert Bosch is a worldwide company that supplies components to every major automobile manufacturer. Bosch provides engine control systems, braking and steering systems, transmission control systems, and many other embedded software control systems for many different car models from many different manufacturers. To manage this complexity, Bosch uses product line architectures for most of its software systems. These software systems are first designed using a modeling tool called ASCET which uses an actor-oriented modeling paradigm similar to Ptolemy. Because of the multitude of combinations of models and manufacturers, variant and configuration management for all of these software models is a significant issue. Within the Bosch Research and Technology Center (RTC) we are using Ptolemy to explore potential solutions for managing this complexity. We will present an overview of past, present, and future challenges, including the work we have done in collaboration with the UC Berkeley Ptolemy research group. In addition, we are currently working with a student team in the Carnegie Mellon Master of Software Engineering (MSE) program to develop extensions to Ptolemy that provide support for storing and retrieving models from a relational database. The goal of this work is to build a basic infrastructure to support product line configuration and management in Ptolemy.

Active Traffic Management using Aurora Road Network Modeler
Alex A. Kurzhanskiy, University of California at Berkeley, December 08, 2009.
Time and location: 4-5pm (540 Cory)

Active Traffic Management (ATM) is the ability to dynamically manage recurrent and nonrecurrent congestion based on prevailing traffic conditions. Focusing on trip reliability, it maximizes the effectiveness and efficiency of freeway corridors. ATM relies on fast and trustworthy traffic simulation that can help assess a large number of control strategies for a road network, given various scenarios, in a matter of minutes. Effective traffic density estimation is crucial for the successful deployment of feedback congestion control algorithms. Aurora Road Network Modeler (RNM) is an open-source macro-simulation tool set for operational planning and management of freeway corridors, developed at Berkeley within the TOPL project. Aurora RNM employs the Cell Transmission Model (CTM) for road networks, extended to support multiple vehicle classes and signalized arterials. It allows dynamic filtering of measurement data coming from traffic sensors for the estimation of traffic density. In this capacity, it can be used for detection of faulty sensors. The virtual sensor infrastructure of Aurora RNM serves as an interface to real-world measurement devices, as well as a simulation of such devices. In the seminar we shall also talk about the current situation in traffic management in California, and discuss what can and needs to be done for its improvement. Aurora RNM homepage, TOPL project homepage
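
For readers who have not seen it, the basic single-class Cell Transmission Model that Aurora RNM builds on (its multi-class, signalized extension will differ in detail) updates each road cell i as follows, where n_i is the vehicle count in cell i, y_i the flow entering cell i during one time step, Q_i the maximum flow, N_i the holding capacity, v the free-flow speed, and w the congestion-wave speed:

    y_i(t)   = \min\{\, n_{i-1}(t),\ Q_i(t),\ (w/v)\,[N_i(t) - n_i(t)] \,\}
    n_i(t+1) = n_i(t) + y_i(t) - y_{i+1}(t)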

Distributed Coverage Control with On-Line Learning
Mac Schwager, Computer Science and Artificial Intelligence Lab, MIT, December 1, 2009.
Time and location: 4-5pm (540 Cory)

One of the fundamental problems of multi-robot control is how to deploy a group of robots optimally over an environment using only local sensory and communication information. This problem is called coverage control, and it is integral to distributed sensing, surveillance, data collection, and servicing tasks with multiple robots. In this talk I will present a unified design and analysis strategy for coverage control. Controllers are derived from the gradient of a general cost function, which can be specialized to represent a variety of coverage scenarios including probabilistic scenarios, geometric Voronoi-based approaches, and artificial potential field-based approaches. I will also discuss coverage controllers that incorporate on-line learning using stable parameter adaptation with rigorous stability and performance guarantees. I will show that learning performance can be enhanced in a multi-robot context by using a consensus algorithm to propagate information throughout the robot network. Results from experiments with groups of ground and aerial robots will be presented.
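
One common specialization of the general coverage cost mentioned above is the Voronoi-based locational cost (the notation here is the standard one from the coverage-control literature, not necessarily the talk's):

    H(p_1, ..., p_n) = \sum_{i=1}^{n} \int_{V_i} \| q - p_i \|^2 \, \phi(q) \, dq

where V_i is the Voronoi cell of robot position p_i and \phi is a sensory importance density over the environment. The gradient controller moves each robot toward the centroid C_{V_i} of its own cell,

    \dot{p}_i = -k \, \partial H / \partial p_i = 2 k \, M_{V_i} ( C_{V_i} - p_i ),

with M_{V_i} the mass of V_i under \phi; the on-line learning mentioned above can be viewed as estimating \phi while the robots move.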

Mobile Floating Sensor Network Placement using the Saint-Venant 1D Equation
Andrew Tinka, University of California Berkeley, November 24, 2009.
Time and location: 4-5pm (540 Cory)

Sensor packages that move within the system they measure pose new challenges for researchers. Sensors which float freely in a fluid environment, known in the hydrodynamics community as "Lagrangian drifters", are a growing tool for emergency response, environmental monitoring and hydrodynamic modeling. The placement problem requires finding a release policy that leads to a favorable spatial configuration of drifters within the domain of interest. The Saint-Venant equations, or Shallow Water equations, are a partial differential equation model of fluid flow that are well suited to rivers, estuaries, and other domains where drifters may be deployed. Approaching the Saint-Venant equations with different assumptions leads to varying degrees of computational complexity and model simplifications. Linearizations about uniform or non-uniform flows, which are of practical interest in the channel control community, are presented. Numerical simulation results will be presented, as well as preliminary results from a study conducted at the Hydraulic Research Unit in Stillwater, Oklahoma as part of the Department of Homeland Security's Rapid Repair of Levee Breaches program. The Berkeley floating sensor platform, which contains an embedded computer, GPS receiver, and a GSM cell phone module, will be presented.
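
For reference, one common form of the one-dimensional Saint-Venant (shallow water) equations, written in terms of the wetted cross-sectional area A(x,t), discharge Q(x,t), and water depth h(x,t) (other equivalent forms use depth and velocity), is

    \partial_t A + \partial_x Q = 0
    \partial_t Q + \partial_x ( Q^2 / A ) + g A \, \partial_x h = g A ( S_0 - S_f )

where g is gravitational acceleration, S_0 the bed slope, and S_f the friction slope. The linearizations mentioned in the abstract are taken about uniform or non-uniform steady flows of these equations.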

High-Level Tasks to Correct Low-Level Robot Control
Hadas Kress-Gazit, Cornell University, November 17, 2009.
Time and location: 4-5pm (540 Cory)

Robots today can mop the floor, assist surgeons and explore space; however, there is no robot that could be trusted to drive autonomously in a real city. Robots either perform simple or hard-coded tasks fully autonomously or they operate with close human supervision. While most of the sensing and actuation technology required for high-level operation exists, what is lacking is the ability to plan at a high level while providing guarantees for safety and correctness of a robot's autonomous behavior. In this talk I will present a formal approach to creating robot controllers that ensure the robot satisfies a given high-level task. I will describe a framework in which a user specifies a complex and reactive task in Structured English. This task is then automatically translated, using temporal logic and tools from the formal methods world, into a hybrid controller. This controller is guaranteed to control the robot such that its motion and actions satisfy the intended task, in a variety of different environments.

Mobile Device Insights - Virtualization and Device Interoperability
Jörg Brakensiek, Nokia Research Center, November 10, 2009.
Time and location: 4-5pm (540 Cory)

Mobile devices are becoming more powerful and more computer-like. In my talk I will highlight two aspects of this development that I have recently worked on in my research. Virtualization is a well-known technology in the server and desktop domain. The increasing openness of mobile device platforms, combined with strong requirements on security and content protection, is paving the way to mobile virtualization technologies. In the talk, I will highlight the main challenges and promising approaches for mobile virtualization. Secondly, mobile devices are our companions anywhere at any time, providing seamless access to content and services. But there are circumstances where the use of mobile devices is limited or even prohibited, such as while driving a car. In such cases the mobile device would benefit from functionality provided by the car. During the talk I will discuss the different aspects of this interoperability challenge and describe the chosen approach. The talk will include a demonstration of our latest research results on how a mobile device can effectively utilize a car's display and control elements for full remote control of the mobile device.

Understanding the Genome of Data Centers
Jie Liu, Microsoft Research, November 03, 2009.
Time and location: 4-5pm (540 Cory)

To meet the growing demand for Online Services, data centers consume billions of kWh of electricity every year, and the number is expected to double in the next 5 years. A typical data center is operated conservatively, and as a result almost half of the energy consumption goes into cooling, power distribution, and idle servers. In this talk, I summarize a number of research efforts in the Data Center Genome project, which takes a data-driven approach to data center energy management, leveraging networked sensing and control technologies. We have designed and deployed wireless environmental sensors to monitor heat distribution in server rooms and software-based services to estimate server power consumption. We tackled the challenges of reliable wireless data collection, time series compression and analysis, and static/dynamic resource management. The findings are used to advance the way equipment is provisioned, loads are distributed, and systems are operated in data centers.

Robust Control via Sign Definite Decomposition
Shankar P. Bhattacharyya, Department of Electrical Engineering, Texas A&M University, October 29, 2009.
Time and location: 4-5pm (400 Cory)

A sign definite decomposition technique is described that allows one to determine the positivity of a polynomial function over a hyperbox. This is used to develop an extension of Kharitonov's Theorem to determine the Hurwitz stability of a polynomial with coefficients that depend polynomially on interval parameters. This result can be used in an iterative and modular fashion to construct stabilizing sets for the fixed order multivariable control design problem. The result is illustrated by design examples.
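
For context, the classical result being extended is Kharitonov's Theorem: the interval polynomial p(s) = \sum_{i=0}^{n} [l_i, u_i] s^i is Hurwitz stable for every choice of coefficients in the intervals if and only if the four Kharitonov polynomials are Hurwitz:

    K_1(s) = l_0 + l_1 s + u_2 s^2 + u_3 s^3 + l_4 s^4 + l_5 s^5 + ...
    K_2(s) = u_0 + u_1 s + l_2 s^2 + l_3 s^3 + u_4 s^4 + u_5 s^5 + ...
    K_3(s) = l_0 + u_1 s + u_2 s^2 + l_3 s^3 + l_4 s^4 + u_5 s^5 + ...
    K_4(s) = u_0 + l_1 s + l_2 s^2 + u_3 s^3 + u_4 s^4 + l_5 s^5 + ...

The sign definite decomposition handles the harder case in which the coefficients depend polynomially on the interval parameters rather than being independent intervals.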

Mathematical Equations as Executable Models of Mechanical Systems
Walid Taha, Rice University, October 22, 2009.
Time and location: 4-5pm (540 Cory)

Increasingly, hardware and software systems are being developed for applications where they must interact directly with a physical environment. This trend significantly complicates testing computational systems and necessitates modeling and simulation of physical environments. While numerous tools provide differing types of assistance in simulating physical systems, there are surprising gaps in the support that these tools provide. Focusing on mechanics as an important class of physical environment, we address two such gaps, namely, the poor integration between different existing tools and the performance limitations of mainstream symbolic computing engines. We combine our solutions to these problems to create a domain-specific language that embodies a novel approach to modeling mechanical systems that is natural to engineers. The new language, called Acumen, enables describing mechanical systems as mathematical equations. These equations are transformed by a fast, well-defined sequence of steps into executable code. Key design features of Acumen include support for acausal modeling, point-free (or implicit time) notation, efficient symbolic differentiation, and families of equations. Our experience suggests that Acumen provides a promising example of balancing the needs of the engineering problem solving process and the available computational methods.

Data-Driven Modeling of Dynamical Systems: Optimal Excitation Signals and Structured Systems
Bo Wahlberg, Automatic Control Lab and ACCESS, KTH, Stockholm, Sweden, October 13, 2009.
Time and location: 4-5pm (540 Cory)

The quality of an estimated model should be related to the specifications of the intended application. A classical approach is to study the "size" of the asymptotic covariance matrix (the inverse of the Fisher information matrix) of the corresponding parameter vector estimate. In many cases it is possible to design and implement external excitation signals, e.g. pilot signals in communications systems or input signals in control applications. The objective of this seminar is to present some recent advances in optimal experiment design for system identification with a certain application in mind. The idea is to minimize experimental costs (e.g. the energy of the excitation signal), while guaranteeing that the estimated model with a certain probability satisfies the specifications of the application. This will result in a convex optimization problem, where the optimal solution should reveal system properties important for the application while hiding irrelevant dynamics. A simple Finite Impulse Response (FIR) example will be used to illustrate the basic ideas. We will also study how this approach can be used for L_2 gain estimation, motivated by a stability analysis application using the small gain theorem. The second part of the seminar will consider how structural information can be used to improve the quality of the estimated model, but also some fundamental limitations related to identification of structured models. We will focus on a simple FIR cascade system example to highlight some fundamental issues. This seminar is based on joint work with Håkan Hjalmarsson, KTH.
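
Schematically (the notation is mine, not necessarily the speaker's), the quantities involved are the asymptotic covariance of the parameter estimate obtained from N data points,

    cov(\hat{\theta}_N) \approx (1/N) \, I_F^{-1}(\theta_0),

with I_F the Fisher information matrix, and a least-costly experiment design problem of the form

    minimize (over the input spectrum \Phi_u)   input energy
    subject to                                   I_F(\Phi_u) \succeq R_app,

where R_app encodes the accuracy the application requires; with \Phi_u parameterized affinely by a finite number of coefficients, this becomes a convex (semidefinite) program.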

Simulating Print Service Provider Using Ptolemy II
Jun Zeng, Hewlett-Packard Laboratories, October 6, 2009.
Time and location: 4-5pm (540 Cory)

The digital transformation is having a profound impact on commercial print. It creates new print demands made possible by "every-page-is-different"; it holds the promise of enabling more efficient, dynamic, reconfigurable workflows; and it has inspired new business practices such as print-on-demand. Modeling and simulation can help to understand the potential of digital print in a quantitative way, and help to exploit the digital opportunities at both the strategic and operational levels. In this presentation we will describe our ongoing work applying Ptolemy II to print factory simulation, present example preliminary results, and outline the path forward.

Robust Distributed Task Planning for Networked Agents
Han-Lim Choi, Massachusetts Institute of Technology, September 29, 2009.
Time and location: 4-5pm (540 Cory)

This talk discusses methodologies to perform robust distributed task planning for a heterogeneous team of agents performing cooperative missions such as coordinated search, acquisition, and track missions. We present the consensus-based bundle algorithm (CBBA) which is a decentralized cooperative iterative auction algorithm for assigning tasks to agents. CBBA uses two phases to achieve a conflict-free task assignment. The first phase consists of each agent generating a single ordered bundle of tasks by greedily selecting tasks. The second phase then resolves inconsistent or conflicting assignments with the objective of improving the global reward through a bidding process. A key feature of CBBA is that its consensus protocol aims at agreement on the winning bids and corresponding winning agents (i.e., consensus in the spaces of decision variables and objective function). This enables CBBA to create conflict-free solutions that are relatively robust to inconsistencies in the current situational awareness. Recent research has extended CBBA to handle more realistic multi-UAV operational complications such as logical couplings in missions, heterogeneity of teams, uncertainty in a dynamic environment, and obstacle regions in flight space. We also present experimental results on the RAVEN flight test facility.
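
A rough sketch of the bundle-construction phase (phase one) of CBBA is given below; it is a simplification that omits the winning-bid and winning-agent lists and the consensus phase, and the score interface is an assumed placeholder rather than the paper's exact formulation.

    # Hedged, simplified sketch of CBBA's bundle-construction phase (phase 1 only).
    # The real algorithm also maintains winning-bid/winning-agent lists and runs a
    # consensus phase to resolve conflicting assignments across agents.

    def build_bundle(agent, tasks, score, max_bundle_size):
        """Greedily build an ordered bundle of tasks for one agent.

        score(agent, path, task) -> marginal reward of adding `task` given the
        agent's current ordered path (an assumed interface, not the paper's
        exact signature).
        """
        bundle, path = [], []
        while len(bundle) < max_bundle_size:
            # Marginal score of each task not yet in the bundle.
            candidates = {t: score(agent, path, t) for t in tasks if t not in bundle}
            if not candidates:
                break
            best = max(candidates, key=candidates.get)
            if candidates[best] <= 0:
                break  # no remaining task improves the agent's reward
            bundle.append(best)
            path.append(best)  # real CBBA inserts at the best position in the path
        return bundle, path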

User-generated 3-D Content: Personalizing Immersive Connected Experiences
Yimin Zhang, Intel Labs China, September 25, 2009.
Time and location: 11am-noon (540 Cory)

ICE (Immersive Connected Experiences) represents a future trend of bringing the richness of visual computing to future internet applications (such as social networking, virtual worlds, etc.). In this talk, Intel's ICE vision and research directions will be introduced briefly. Then a more in-depth introduction will be given to the latest research results on user-generated 3-D content technology based on advanced computer vision algorithms developed at Intel Labs China, including 3D object modeling, 3D avatar modeling/controlling, and 3D mirror world navigation/generation. We hope this talk will give the audience an overall understanding of the future trend of ICE, the kinds of usages enabled by the latest research results, and the further research challenges we are facing.

The Relation of Spike Timing to Large-Scale LFP Patterns
Ryan Canolty, University of California, Berkeley, September 22, 2009.
Time and location: 4-5pm (540 Cory)

Two key questions drive the research discussed in this talk. The first targets computation in a local cortical area: how does a population of interconnected neurons coordinate their spiking activity when engaged in a particular functional operation? The second question is focused on long-range communication: how do widely-distributed brain regions rapidly form the transient functional networks needed to support complex perception, cognition, and action? One hypothesis is that a hierarchy of neuronal oscillations plays a key role in this coordination and multi-scale integration. The BMI paradigm, employing massively parallel microelectrode recordings from multiple brain areas over several months, is ideal for addressing these fundamental questions. Today's talk highlights ongoing work investigating the relation of single neuron spike timing to the pattern of local field potentials (LFPs) in both local and distant cortical areas, and discusses how this spike/LFP mutual information may be used to predict spike timing and possibly improve BMI performance.

Concurrency and Scalability versus Fragmentation and Compaction with Compact-fit
Hannes Payer, University of Salzburg, September 15, 2009.
Time and location: 4-5pm (540 Cory)

We study, formally and experimentally, the trade-off in temporal and spatial performance when managing contiguous pieces of memory using the real-time memory management system Compact-fit (CF). The key property of CF is that temporal and spatial performance can be bounded, related, and predicted in constant time through the notion of partial and incremental compaction. Partial compaction determines the maximally tolerated degree of memory fragmentation. Incremental compaction, introduced here, determines the maximal amount of memory involved in any, logically atomic portion of a compaction operation. We evaluate experimentally different CF configurations and show which configurations scale under high memory allocation and deallocation load on multiprocessors and memory-constrained uniprocessor systems.

Avoiding Unbounded Priority Inversion in Barrier Protocols Using Gang Priority Management
Harald Roeck, University of Salzburg, September 15, 2009.
Time and location: 4-5pm (540 Cory)

Large real-time software systems such as real-time Java virtual machines often use barrier protocols, which work for a dynamically varying number of threads without using centralized locking. Such barrier protocols, however, still suffer from priority inversion similar to centralized locking. We introduce gang priority management as a generic solution for avoiding unbounded priority inversion in barrier protocols. Our approach is either kernel-assisted (for efficiency) or library-based (for portability) but involves cooperation from the protocol designer (for generality). We implemented gang priority management in the Linux kernel and rewrote the garbage collection safe-point barrier protocol in IBM's WebSphere Real Time Java Virtual Machine to exploit it. We run experiments on an 8-way SMP machine in a multi-user and multi-process environment, and show that by avoiding unbounded priority inversion, the maximum latency to reach a barrier point is reduced by a factor of 5.3 and the application jitter is reduced by a factor of 1.5.

The Design, Modeling, and Control of Mechatronic Systems with Emphasis on Betterment of Quality of Human Life
Kyoungchul Kong, University of California, Berkeley, September 8, 2009.
Time and location: 4-5pm (540 Cory)

Mechatronic technologies are steadily penetrating our daily lives. We are surrounded by mechatronic products and interact with them in many ways. In particular, mechatronic devices may potentially improve the quality of life of elderly people and patients with impairments. In this talk, several key technologies for effectively assisting humans are introduced. These technologies include 1) sensing technologies for identifying the intent of humans, 2) decision making algorithms to decide on the right amount of assistance, 3) actuation technologies to provide precise assistive forces, and 4) control algorithms for effectively assisting humans. As a sensing unit for assistance and rehabilitation, a sensor-embedded shoe that measures the ground contact forces is introduced in this presentation. Signal processing algorithms for the rehabilitation of patients with gait disorders are discussed as well. The design of controllers for assistive and rehabilitation systems is challenging due to the human factor in the control loop. In this talk, control algorithms are designed based on inspiration from a fictitious gain in the feedback loop in the human body and from aquatic therapy. The two proposed methods are effective for the purposes of assistance and rehabilitation, respectively. A robust control approach is applied in the design of the controllers for improved robustness to uncertainties in the human body properties.

Large Monitoring Systems: Data Analysis, Design and Deployment
Ram Rajagopal, University of California, Berkeley, September 3, 2009.
Time and location: 4-5pm (400 Cory)

The emergence of pervasive sensing, high bandwidth communications and inexpensive data storage and computation systems makes it possible to drastically change how we design, monitor and regulate very large-scale physical and human networks. Performance gains in the way we operate these networks translate into large savings. There are many critical challenges to create functional monitoring systems, such as data reliability, computational efficiency and proper system design, including choices of sensors, communication protocols, and analysis approaches. In this talk I present a framework to design a monitoring system, deploy it, maintain it and process the incoming heterogeneous sources of information, resulting in new applications. The framework is being applied to urban traffic monitoring and road infrastructure sensing. We have developed various state-of-the-art statistical algorithms, computed performance guarantees and studied some of the fundamental limits of the proposed ideas. I illustrate the methodology using experimental deployments we have built and that are currently in use. I also present some open questions and directions for future research.

Reasoning about Online Algorithms with Weighted Automata
Orna Kupferman, Hebrew University, August 25, 2009.
Time and location: 4-5pm (540 Cory)

We describe an automata-theoretic approach for the competitive analysis of online algorithms. Our approach is based on weighted automata, which assign to each input word a positive cost in R. By relating the "unbounded look ahead" of optimal offline algorithms with nondeterminism, and relating the "no look ahead" of online algorithms with determinism, we are able to solve problems about the competitive ratio of online algorithms, and the memory they require, by reducing them to questions about determinization and approximated determinization of weighted automata. Joint work with Benjamin Aminof and Robby Lampert.
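
The notion being analyzed is the standard one: an online algorithm ALG is \alpha-competitive if there is a constant c such that, for every input word w,

    cost_ALG(w) \le \alpha \cdot cost_OPT(w) + c,

where OPT is the optimal offline algorithm. In the framing sketched above, ALG corresponds to a deterministic weighted automaton and OPT to a nondeterministic one, so bounding \alpha becomes a question about (approximated) determinization.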

Spring 2010 Seminars

Bipedal Robotic Walking: Motivating the Study of Hybrid Phenomena
Professor Aaron D. Ames, Texas A&M University, May 4, 2010.
Time and location: 4:00-5:00pm (540 Cory)

Bipedal walking provides a quintessential example of complex dynamically stable behavior. Hybrid systems---systems with both continuous and discrete behavior---provide a modeling paradigm in which to capture the behavior of highly complex systems, resulting in new phenomena not found in continuous and discrete systems. Bipedal walking robots are naturally modeled as hybrid systems, and it is the confluence of these two areas---bipedal walking and hybrid systems---that affords a unique opportunity to further the understanding of each of these areas. The applications of these ideas are far reaching, resulting in new research directions in hybrid systems and in a new understanding of the fundamental mechanisms underlying walking. In this talk, I will use hybrid systems to model complex 3D bipedal robots, obtain walking in these robots through a combination of novel control laws, and use bipedal robots to study phenomena unique to hybrid systems: Zeno behavior. I will begin by discussing how hybrid systems naturally model a wide range of mechanical systems undergoing impacts, including various types of robotic walkers. I will show how Zeno behavior naturally occurs in systems of this form, give conditions for its existence and use these conditions to show that bipedal walking exists even in the presence of Zeno behavior. I will then consider a complex 3D bipedal robot with a hybrid model motivated by human walking, and show how the hybrid nature of this system allows for the design of control laws that result in stable walking. These are achieved by using geometric reduction to decouple the sagittal and coronal dynamics of the walker---much as humans do---to reduce the problem to a lower dimensional setting. Simulation results will be presented and the robotic walking will be compared to real human walking data. Finally, possible applications of these ideas to the design of prosthetic devices will be discussed.
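
For readers unfamiliar with the term, an execution of a hybrid system with discrete event times t_0 < t_1 < t_2 < ... is called Zeno if it undergoes infinitely many discrete transitions in finite time, i.e.

    \sum_{i=0}^{\infty} ( t_{i+1} - t_i ) < \infty .

The bouncing ball with coefficient of restitution less than one is the classic example: impact times accumulate at a finite instant, which is exactly the kind of behavior that arises in impact models of walking.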

Toward a Theory of Secure Networked Control Systems
Saurabh Amin, UC Berkeley, April 27, 2010.
Time and location: 4:00-5:00pm (540 Cory)

The conceptual shift of sensor-rich systems from a passive information-gathering viewpoint to a networked control viewpoint has resulted in new challenges in ensuring their secure and dependable operation. These challenges will require the development of resilient control techniques as well as the introduction of novel security and trust principles. In this talk, we will discuss the main system theoretic principles for securing networked-control systems and report progress in (1) Threat assessment, (2) Attack detection and recovery, and (3) Resilience under attacks. We discuss models of denial-of-service (DoS) and deception attacks on control systems, and present control synthesis methods that are robust against these attacks. We present results from security analysis of two example control systems: the Gignac water management system located in Southern France, and the Tennessee-Eastman process control system. For the Gignac water management system, we use the knowledge of approximate system dynamics and associated regulatory controller design to analyze the impact of deception attacks on the performance of the closed-loop system. We test the proposed attack scheme in simulation and show that the attack is indeed realizable in practice by implementing it on the physical canal system. We also discuss the theoretical limits of stabilizability and detectability of deception attacks for the shallow water model of water flow dynamics in canals. For the Tennessee-Eastman process control system, we discuss the use of statistical anomaly detection methods in detecting deception attacks and also present an analysis of stealthy attacks. Such attacks can be carried out by an adaptive adversary who tries to evade the anomaly detection scheme. We will end the talk by presenting similar results for state estimation schemes in electric power systems.
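
As a schematic illustration (the notation is mine and simpler than the models used in the talk), for a linear plant x_{k+1} = A x_k + B u_k, y_k = C x_k, the two attack classes can be written as

    deception attack:  x_{k+1} = A x_k + B ( u_k + a^u_k ),   \tilde{y}_k = C x_k + a^y_k
    DoS attack:        the attacker blocks u_k or y_k from reaching its destination at attacked time steps,

where a^u_k and a^y_k are attacker-chosen corruptions of the actuation and sensing channels. Detection asks whether \tilde{y}_k is consistent with the nominal model; a stealthy attack is one crafted to stay within the detector's tolerance.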

Saving Energy by Reducing Traffic Flow Instabilities
Berthold K. Horn, MIT, April 26, 2010.
Time and location: 4:00-5:00pm (540 Cory)

Per-lane throughput of highways is limited by the appearance of traffic flow instabilities at high densities. Instabilities greatly increase the average travel time and increase fuel consumption, not to mention fray nerves and increase wear and tear on equipment. Many different discrete and continuous models are able to "explain" the instabilities, but few have been successful in suggesting methods for reducing the instabilities. What is needed is a way of reducing the amplitude of perturbations of all frequencies without violating existing physical constraints. A counter-intuitive control scheme can do just that, and there is a convincing physical analog to this. Machine vision may be the cheapest way to provide the required distance and velocity information, since high-quality camera chips are now less than $1 apiece, while competing radar and sonar equipment costs orders of magnitude more.

Planning and Learning in Information Space
Nicholas Roy, MIT, April 20, 2010.
Time and location: 4:00-5:00pm (540 Cory)

Decision making with imperfect knowledge is an essential capability for unmanned vehicles operating in populated, dynamic domains. For example, a UAV flying autonomously indoors will not be able to rely on GPS for position estimation, but instead use on-board sensors to track its position and map the obstacles in its environment. The planned trajectories for such a vehicle must therefore incorporate sensor limitations to avoid collisions and to ensure accurate state estimation for stable flight -- that is, the planner must be able to predict and avoid uncertainty in the state, in the dynamics and in the model of the world. Incorporating uncertainty requires planning in information space, which leads to substantial computational cost but allows our unmanned vehicles to plan deliberate sensing actions that can not only improve the state estimate, but even improve the vehicle's model of the world and how people interact with the vehicle. I will discuss recent results from my group in planning in information space; our algorithms allow robots to generate plans that are robust to state and model uncertainty, while planning to learn more about the world. I will describe the navigation system for a quadrotor helicopter flying autonomously without GPS using laser range-finding, and will show how these results extend to autonomous mapping, general tasks with imperfect information, and human-robot interaction.
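
Planning "in information space" means planning over beliefs rather than states. The Bayes-filter update that underlies this (standard notation, not specific to the talk) is

    b'(s') = \eta \, O(z \mid s', a) \sum_{s} T(s' \mid s, a) \, b(s),

where T is the transition model, O the observation model, and \eta a normalizing constant. An information-space planner chooses actions not only for their effect on the expected state but also for their effect on the future belief b', for example preferring trajectories along which the laser range-finder keeps the position estimate well conditioned.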

SILVER: Synthesis Using Integrated Learning and Verification
Susmit Jha, University of California Berkeley, April 08, 2010.
Time and location: 4:30-5:30pm (540 Cory)

We present a novel approach to automatic synthesis based on a combination of algorithmic learning from examples, and standard verification techniques. Unlike other compiler-like synthesis approaches that require complete specification, our technique is applicable to domains where specification is only available partially or as an interactive oracle. We have applied our synthesis approach to automated program understanding, synthesis of bit-manipulation programs and design of discrete controllers for cyber-physical systems. In this talk, we focus on automatic synthesis of discrete controllers. We show how our technique can be used to synthesize controllers that satisfy safety requirements. Our technique can also be used by designers as an interactive aid to synthesize controllers with optimal performance.

Advances in Embedded Software Synthesis from Dataflow Models
Soheil Ghiasi, University of California Davis, April 06, 2010.
Time and location: 4-5pm (540 Cory)

Multiprocessor systems are going to find a larger share in the embedded application space, due to a number of economic and technical imperatives. Productive development of applications for multiprocessors, however, remains challenging. In this context, I present our results on synthesis of streaming applications from dataflow models targeting such parallel platforms. I present a provably-effective algorithm for task assignment (a key step in the compilation flow) to two parallel processors, when the limited memory resources and application throughput form the outstanding constraints and objective, respectively. The technique is versatile in that it is formally extensible to a number of different formulations. In addition, I present our results on code size optimization, via enhanced buffer allocation during software synthesis from models, to deal with processors' limited memory resources.

A Model-Based End-to-End Tool Chain for the Probabilistic Analysis of Complex Systems
Alessandro Pinto, United Technologies Research Center, March 30, 2010.
Time and location: 4-5pm (540 Cory)

We present a model-based environment for the probabilistic analysis of systems operating under uncertain conditions. This uncertainty may arise from either the environment in which a system operates or the platforms on which it executes. Our integrated tool, called StoNES (Stochastic analysis of Networked Embedded Systems), automates the model transformation and probabilistic analysis of systems. The input specification is a combination of models: the functional specification is captured in Simulink/Stateflow, while the architecture is modeled using AADL. StoNES translates this description into a parametric probabilistic model. The probabilistic model can be analyzed using standard techniques for continuous and discrete time Markov Chains. We apply our translation and analysis methodology to explore the trade-off between sensor accuracy and computational speed for the vision algorithm of an autonomous helicopter system.

An Algebra of Pareto Points
Marc Geilen, Eindhoven University of Technology, March 16, 2010.
Time and location: 4-5pm (540 Cory)

Multi-criteria optimization problems occur naturally in a wide range of engineering practices. Pareto analysis has proven to be a powerful tool to characterize potentially interesting realizations of engineering problems and it is used frequently for design-space exploration. Depending on the ultimate optimization goals, one of the Pareto-optimal alternatives is the optimal realization. It often happens in a design process that partial design decisions have to be taken, leaving other aspects of the optimization problem to be decided at a later stage, and that Pareto-optimal solutions for a system have to be composed (dynamically) from Pareto-optimal configurations of components. To support this process we have introduced a novel, algebraic mathematical framework to reason about multi-objective trade-offs. This talk introduces the algebraic framework and illustrates it with some applications in the domains of wireless multi-media, wireless sensor networks and run-time management of embedded systems.
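
The two basic operations such an algebra builds on, Pareto dominance and Pareto minimization of a set of configurations, can be sketched as follows (assuming every objective is to be minimized; the actual formalization is more general and also defines composition operators over configuration sets):

    # Hedged sketch: Pareto dominance and minimization for cost vectors,
    # assuming all objectives are minimized. Not the paper's exact algebra.

    def dominates(a, b):
        """True if cost vector a dominates b: no worse everywhere, strictly better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_min(points):
        """Keep only the Pareto-minimal (non-dominated) cost vectors."""
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # Example: hypothetical (energy, latency) trade-offs of five configurations.
    configs = [(3, 9), (4, 4), (6, 3), (5, 5), (7, 8)]
    print(pareto_min(configs))   # [(3, 9), (4, 4), (6, 3)]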

Robust Uncertainty Management Methods Applied to UAV Search and Tracking Problems
Andrzej Banaszuk, United Technologies Research Center, March 09, 2010.
Time and location: 4-5pm (540 Cory)

The objective of the DARPA DSO Robust Uncertainty Management (RUM) project was to develop methodology and tools for quantifying uncertainty in ways that are orders of magnitude faster than Monte Carlo with near-linear scaling in the system size, and demonstrate them in molecular dynamics and UAV search challenge problems. This research was conducted by a team including UTRC, UCSB, Caltech, Stanford, Yale, Georgia Tech, Princeton, Aimdyn Inc., and PlainSight Inc. Uncertainty Quantification: Several methods, including Polynomial Chaos and new Stochastic Response Surface and Dynamic Sampling methods, allowed the team to calculate the mean and variance of the phase transition temperature in molecular dynamics calculations with 10K atoms 2000 times faster than using Monte Carlo sampling. We also proposed an iterative UQ approach that exploits the weak subsystem interactions to overcome the dimensionality curse and radically accelerate uncertainty quantification in large systems. UAV Search and Tracking Algorithms: The new search strategies achieved about a 2× reduction in median search time compared with a smart lawnmower strategy in a challenge problem including 50 simulated UAVs searching for a stationary target in a complex terrain using noisy sensors with uncertain footprint. One method uses a trajectory planner based on the use of a library of pre-computed elementary UAV trajectory segments that individually satisfy the vehicle dynamics and can be interlocked to produce large scale roadmaps. We will also discuss recent advances in a problem of tracking of multiple mobile targets in an adversarial urban environment.

Embedded Convex Optimization
Jacob Mattingley, Stanford University, March 02, 2010.
Time and location: 4-5pm (540 Cory)

Convex optimization is widely used in signal processing, automatic control, networking, communications, machine learning and finance, among other fields. Even though convex optimization problems are particularly amenable to rapid, reliable solution, they are mostly only used in offline applications, rather than in embedded systems, or in systems with real-time constraints. That is starting to change, thanks to improved computer performance and better algorithms. One recent advance that makes convex optimization easier to use is automatic code generation. Here, a computer generates a fast, problem-specific solver for later online use, from a high-level description. In this talk I will explain these ideas and give some examples, focusing in particular on automatic code generation, and how it can be used to develop solvers fast enough for use in real-time and embedded applications. Joint work with Stephen Boyd and Yang Wang.
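
As a toy illustration of the "problem-specific solver" idea (this is not the talk's code generator, which emits specialized solvers from a high-level description): once the problem family and its dimensions are fixed offline, the online solve reduces to a short, branch-light loop. The sketch below hard-codes a box-constrained least-squares family and solves it by projected gradient descent.

    # Toy, assumed example: a fixed problem family
    #   minimize ||A x - b||^2  subject to  0 <= x <= 1,
    # solved by projected gradient descent. Generated embedded solvers are
    # typically flat C using interior-point or similar methods, but the idea
    # is the same: structure is fixed offline, so the online solve is cheap.

    import numpy as np

    def solve_box_ls(A, b, iters=200):
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the (scaled) gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = np.clip(x - grad / L, 0.0, 1.0)  # gradient step, then projection onto the box
        return x

    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(solve_box_ls(A, b))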

Software Failure Avoidance Using Discrete Control Theory
Terence Kelly & Yin Wang, HP Labs, February 23, 2010.
Time and location: 4-5pm (540 Cory)

Software reliability is an increasingly pressing concern as the multicore revolution forces parallel programming upon the average programmer. Many existing approaches to software failure avoidance are ad hoc, based on best-practice heuristics. Often these approaches impose onerous burdens on developers, entail high runtime performance overheads, or offer no help for unmodified legacy code. We demonstrate that Discrete Control Theory can address software failure avoidance problems. This talk employs the above methodology in two different application domains: failure avoidance in information technology automation workflows and deadlock avoidance in multithreaded C programs. In the first application, we model workflows using finite-state automata and synthesize controllers for safety and nonblocking specifications expressed as regular languages using an automata-based Discrete Control technique, called Supervisory Control. The second application addresses the problem of deadlock avoidance in multithreaded C programs that use lock primitives. We exploit compiler technology to model programs as Petri nets and establish a correspondence between deadlock avoidance in the program and the absence of reachable empty siphons in its Petri net model. The technique of Supervision Based on Place Invariants is then used to synthesize the desired control logic, which is implemented using source-to-source translation.

Simulation and Hybrid Control of Robotic Marionettes
Professor Todd Murphey, Northwestern University, February 16, 2010.
Time and location: 4-5pm (540 Cory)

Marionettes are physically complex mechanisms characterized by significant nonlinearity, mechanical degeneracy, and many degrees of freedom. Nevertheless, puppeteers successfully control marionettes to imitate human motion and collaborate to create plays where marionettes physically interact. Choreography for marionette plays can be interpreted as a type of motion description language that helps manage the complexity of the dynamic system while still allowing one to successfully control it. Our approach to controlling marionettes is to mimic puppeteers by translating choreography into motor commands using a combination of simulation and hybrid optimal control techniques tailored for high dimensional nonlinear systems. I will additionally discuss some of the challenges in building a robotic marionette platform and discuss our collaboration with Disney to create reliable, automated marionettes. I will end the talk with a discussion of how we have applied techniques developed for the marionettes to a variety of applications, including hybrid observers for skid-steering and biomechanical simulation and control analysis of the human hand.

A Natural-Language-Based Approach to System Modeling—From the Use of Numbers to the Use of Words
Professor Lotfi A. Zadeh, UC Berkeley, February 09, 2010.
Time and location: 4-5pm (540 Cory)

Science deals not with reality but with models of reality. In large measure, scientific progress is driven by a quest for better models of reality. In science and engineering it is traditional to describe a model, M(S), of a system, S, in a mathematical language in which the objects of computation are numbers, functions, relations, equations, etc. The principal modeling languages are the language of differential equations, the language of difference equations, the language of probability theory, the language of finite-state machines and combinations of such languages. The predominant modeling language is the language of differential equations. Natural languages are rich but they are not employed as modeling languages because they lack precision.

The methodology of Computing with Words (CW or CWW) opens the door to an effective new approach to system modeling—an approach which is based on the use of natural languages. A key concept which makes this possible is the concept of precisiation. More specifically, in CW a natural language proposition, p, is precisiated through conversion of p into a computation-ready proposition, p*. p* may be viewed as a model of p. In general, p* has the form of a generalized assignment statement, or, equivalently, the form of a generalized constraint. Unlike conventional constraints, generalized constraints have elasticity. The principal constraints are possibilistic, probabilistic and veristic. A precisiated natural language, PNL, serves four principal purposes: modeling, problem-solving, communication with a machine and definition of imprecise concepts. The point of departure in CW is a question, q, of the form: What is the value of a variable, X? q is associated with a question-relevant information set, I, an association expressed as X is I, meaning that the answer to q, Ans(q/I), is to be deduced (computed) from I. Typically, I consists of a collection of natural language propositions, p1, ..., pn, which individually or collectively are carriers of information about the value of X.

There are two principal rationales for the use of a natural language as a modeling language, Rationale A and Rationale B. Rationale A. Words are less precise than numbers. Use precisiated words when numbers are not known. Rationale B. Precision carries a cost. If there is a tolerance for imprecision, exploit it by using precisiated words in place of numbers. Through the use of words, a system described by a collection of differential equations is summarized as a collection of linguistic if-then rules in which the antecedent and consequent are generalized constraints. Rationale B has a position of centrality in most applications of fuzzy logic, especially in the realm of control, systems and consumer products. In combination, Rationales A and B are of direct relevance to the conception and design of complex systems in which uncertainty and imprecision play prominent roles.

On Relational Interfaces
Stavros Tripakis, UC Berkeley, February 02, 2010.
Time and location: 4-5pm (540 Cory)

Compositional methods, which allow one to assemble smaller components into larger systems both efficiently and correctly, are not simply a desirable feature in system design: they are a must for designing large and complex systems. It is unsurprising then that a very large body of research has tackled compositionality in the past. Interface theories represent one such body of research. Interfaces capture essential properties of a component. Interface theories offer theorems that guarantee substitutability, namely, the ability to replace a component with another one without compromising the properties of the overall system. In this talk I will present a theory of relational interfaces, which extends current theories by allowing one to capture relations between the inputs and outputs of a component. The theory supports synchronous interfaces, both stateless and stateful. It includes explicit notions of environments and pluggability, and offers fundamental properties such as preservation of refinement by composition, and characterization of pluggability by refinement. These properties are achieved by making reasonable restrictions on feedback loops in interface compositions.
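
A small example in the spirit of the theory (the notation is mine): a relational interface pairs input and output variables with a contract relating them, e.g.

    I_sqrt  = ( inputs {x}; outputs {y}; contract  x >= 0  and  y >= 0  and  y^2 = x )

Inputs violating x >= 0 are illegal for I_sqrt, since no output satisfies the contract. The interface

    I_total = ( inputs {x}; outputs {y}; contract  (x >= 0 and y >= 0 and y^2 = x)  or  (x < 0 and y = 0) )

refines I_sqrt: it accepts strictly more inputs and, on every input legal for I_sqrt, guarantees exactly the same input-output relation, so a component satisfying I_total can replace one specified by I_sqrt in any environment designed against I_sqrt.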

Modeling, Planning, and Control for Robot-Assisted Medical Interventions
Allison Okamura, Johns Hopkins University, January 28, 2010.
Time and location: 3-4pm (540 Cory)

Many medical interventions today are qualitatively and quantitatively limited by human physical and cognitive capabilities. Along with advances in imaging and visualization, a novel partnership between clinicians and robotic devices can overcome major technical challenges in fields such as surgery, interventional radiology, and rehabilitation. This talk will discuss several robot-assisted intervention techniques that will extend humans' ability to carry out interventions more accurately and less invasively. I will focus primarily on the development of minimally invasive systems that deliver therapy by steering needles through deformable tissue and around internal obstacles to precisely reach specified targets. This research builds on emerging methods in robotics, imaging, and mechanics in order to optimize needle design, perform needle motion planning, and enable image-guided intra-operative needle control. In addition, I will review recent results in haptic feedback for robot-assisted teleoperated surgery and robot-assisted compensation for cerebellar ataxia. All of these systems incorporate one or more key elements of robotic interventions: (1) quantitative descriptions of patient state, (2) the use of models to plan interventions, (3) the design of devices and control systems that connect information to physical action, and (4) the incorporation of human input in a natural way.

Residential Load Management using Autonomous Auctions
William Burke, UC Berkeley, January 19, 2010.
Time and location: 4-5pm (540 Cory)

The goal of this research is to develop robust residential load management techniques using advanced controls and intelligence. Residential load management aims to manipulate the electricity demand of a group of consumers to match some desired load shape. Effective load management needs two types of control: systemic and local. Local control determines how each individual power consumer reacts to the signals, and I present a price-responsive intelligent thermostat for use in load management. The thermostat uses low-frequency PWM for HVAC system actuation, which drastically simplifies control over power consumption. Systemic control manipulates the aggregate power by signaling the consumers, and my approach makes use of game theory to auction the electricity to the consumers. The Soft Budget Constrained Mechanism is fast, communication efficient, and truthful.

From High-Level Component-Based Models to Distributed Implementations
Borzoo Bonakdarpour, Verimag Laboratory, January 12, 2010.
Time and location: 4-5pm (540 Cory)

Design and implementation of distributed algorithms often involve many subtleties due to their complex structure, nondeterminism, and low atomicity as well as occurrence of unanticipated physical events such as faults. Thus, constructing correct distributed systems has always been a challenge and often subject to serious errors. We propose a method for generating distributed implementations from high-level component-based models that only employ simple synchronization primitives. The method is a sequence of three transformations preserving observational equivalence: (1) a transformation from a global state to a partial state model, (2) a transformation which replaces multi-party strong synchronization primitives in atomic components by point-to-point send/receive primitives based on asynchronous message passing, and (3) a final transformation to a concrete distributed implementation based on platform and architecture. These transformations may use additional mechanisms ensuring mutual exclusion between conflicting interactions. We propose different distributed implementations, from fully centralized to fully decentralized ones. We study the properties of different transformations, in particular, performance criteria such as degree of parallelism and overhead for coordination. In our case studies, we observe that different types of distributed and parallel algorithms may behave significantly differently in terms of performance under different types of transformations.

Fall 2010 Seminars

Kris Pister, UC Berkeley, Tuesday, December 14th, 2010
Time and Location: 4-5pm (540 Cory)

Synchronized Wireless Sensor Networks - Applications, Standards, and Technology

This talk will cover some existing commercial applications of synchronized wireless sensor networks, the relevant standards, and the underlying technology for time synchronization. RMS synchronization error across commercial multi-hop mesh networks is in the tens of microseconds, headed to a fraction of a microsecond.


Changbin Yu, Australian National University, Tuesday, December 7th, 2010
Time and Location: 4-5pm (540 Cory)

Graph Rigidity Theory for Formation Control

This talk considers problems of control of multiagent formations that can be modeled by undirected or directed graphs. The graphical model can capture specific design considerations from each of sensing, communication and control architectures, or a mixture of them. The characterization using these formation graphs and the associated control laws thus can be applied in autonomous mobile robotic networks of various types.

Central to this talk is the development and application of graph rigidity theory. Subject to time constraints, a wide range of issues will be covered: from fundamental problems like formation modeling and characterization of formation information structures to task-oriented studies such as motion coordination, formation operations and formation robustness.


Silviu Craciunas, University of Salzburg, Tuesday, November 30th, 2010
Time and Location: 4-5pm (540 Cory)

Programmable Temporal Isolation for Real-Time Systems

We introduce a novel scheduling paradigm for hard real-time uniprocessor systems, called Variable-bandwidth Server (VBS), that enables adaptation of software processes to varying real-time constraints while maintaining temporal isolation. In the VBS process model, processes are sequences of individual actions with an action being a given piece of process code. Processes and actions are temporally isolated from one another in the sense that the variance in response times of the same action of a given process is bounded independently of any other concurrently running processes in the system.

As the variance of action response times is also influenced by the scheduler overhead, we introduce a framework for overhead accounting in VBS-scheduled systems. Overhead introduced by the scheduling algorithm may be accounted for either by decreasing process execution speed to maintain CPU utilization, or by increasing CPU utilization to maintain process execution speed. Both methods can be combined by handling an arbitrary fraction of the total scheduler overhead with one method and the rest with the other.

We also show that it is possible to reduce CPU power consumption with VBS by CPU voltage and frequency scaling. Scaling to lower frequencies is possible whenever there is CPU slack in the system. We propose a frequency-scaling VBS algorithm that exploits CPU slack to minimize operating frequencies with maximal CPU utilization while maintaining temporal isolation. This may lead to improvements in power consumption while hiding the real-time effects of frequency scaling. Additionally, we present more advanced methods with which additional power may be saved by redistributing computation time of individual process actions if the system has knowledge of future events.
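
As a generic illustration of server-based bandwidth reservation (this is a standard utilization-style admission check, not necessarily the exact VBS schedulability condition from the talk): each virtual server declares a budget and a period, and a new server is admitted only if the total reserved bandwidth stays at or below the processor capacity.

    # Hedged sketch: a generic bandwidth-server style admission check for a
    # uniprocessor. Each server declares a (limit, period) pair; admit a new
    # server only if total utilization stays at or below 1. Not necessarily
    # the exact VBS test described in the talk.

    def admit(servers, new_limit, new_period):
        """servers: list of (limit, period) pairs already admitted."""
        utilization = sum(l / p for l, p in servers) + new_limit / new_period
        return utilization <= 1.0

    servers = [(2, 10), (3, 20)]     # reserved utilization 0.2 + 0.15 = 0.35
    print(admit(servers, 5, 10))     # 0.85 <= 1.0 -> True
    print(admit(servers, 8, 10))     # 1.15 >  1.0 -> False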


Mac Schwager, UPenn/MIT, Thursday, November 18th, 2010
Time and Location: 4-5pm (540 Cory)

Large Scale Surveillance and Aggressive Coordination in Multi-Robot Systems

Groups of robots working collaboratively have the potential to change the way we sense and interact with our environment at large scales. They can be autonomously deployed over ecosystems or cities to collect high-resolution data over large areas, process that data in a decentralized way, and act on the environment to affect a desired large-scale change. In this talk I will describe two recent advances in multi-robot control. First I will describe a scalable, decentralized control algorithm for deploying groups of robotic cameras to collectively monitor an environment. I will present the results of experiments carried out with three quadrotor robots indoors and five quadrotors outdoors using the controller. Next I will discuss time delays in multi-robot networks, and their limiting effect on the aggressiveness of the robots' actions. I will describe an analytical relationship between network update rate, network topology, and control aggressiveness in a formation control scenario. Experimental results with three quadrotor robots probing the limits of control aggressiveness will be presented.


Alix Munier Kordon, Université Pierre et Marie Curie, Paris, Tuesday, November 16th, 2010
Time and Location: 4-5pm (540 Cory)

A new Approach for Minimizing Buffer Capacities with Throughput Constraint for Embedded System Design

The design of streaming (e.g. multimedia or network packet processing) applications must consider several optimizations, such as the minimization of the total memory surface needed on a chip. The minimum throughput of the output is usually fixed. In this talk, we present an original methodology to solve this problem.

The application is modelled using a Marked Timed Weighted Event Graph (MTWEG for short), which is a subclass of Petri nets. Transitions correspond to specific treatments and the places model buffers for data transfer. It is assumed that transitions are periodically fired with a fixed throughput. This formalism is equivalent to Synchronous DataFlow Graphs.

The problem is first mathematically modelled using an Integer Linear Program. We then study, for a single buffer, the optimal throughput as a function of the capacity. A simple polynomial algorithm computing the minimum surface for a fixed throughput is derived when there is no circuit in the initial MTWEG, which corresponds to a wide class of applications. For a general MTWEG, the problem is NP-hard and an original polynomial 2-approximation algorithm is presented. For practical applications, the solution computed is very close to the optimum.

This is joint work with Mohamed Benazouz and Olivier Marchetti.


Ray DeCarlo, Purdue University, Friday, November 12th, 2010
Time and Location: 4-5pm (540 Cory)

Optimal Control of Switched Systems: An Embedding Approach with Application to a WMR and Real-Time Model Predictive Control of a dc-dc Boost Converter

This talk reviews results that show for quite a general class of hybrid optimal control problems containing both controlled and memoryless autonomous switches, the switched (hybrid) optimal control problem can be formulated so that the computational complexity of the problem is no greater than that of smooth optimal control problems. After defining the switched optimal control problem (SOCP), we elucidate its embedding (a relaxation) into a continuously parameterized family of problems defined as the embedded (switched) optimal control problem (EOCP). We argue that the SOCP is best solved by first solving the EOCP. The key idea behind this claim is that the trajectories of a switched system are dense in the trajectories of the embedded system. Sufficient and necessary conditions for solvability of the EOCP are given. The necessary conditions (via a generalized Hamiltonian) can be seen as a generalized Maximum Principle without explicit assumptions on fixing the number or sequence of mode switchings.

Two examples are presented. The first example considers a wheeled mobile robot (WMR) equipped with a rechargeable battery, which powers electric drives on each wheel that propel the WMR in one mode of operation or recharge the battery in a second mode, resulting in a total of four modes. The objective is to use MPC for traction control to stabilize the WMR to a predefined set while subject to wheel slippage. The second example applies the embedding theory to the problem of real-time model predictive control of a dc-dc boost converter. After describing the model and performance index, a solution methodology is set forth which, on hardware, solves the EOCP using an active set method in less than 50 µs. The EOCP solution is projected onto an approximating SOCP solution. The switching control of the approximating SOCP solution is applied in real time using center-aligned PWM with a 15.8 kHz clock in the context of a two-partition MPC window. Both simulation and experimental results will be presented.


Keshab K. Parhi, University of Minnesota, Minneapolis, Wednesday, November 10th, 2010
Time and Location: 11am-12pm (540 Cory)

Digital Signal Processing with Protein Molecules and DNA Strands

Digital signal processing (DSP) refers to non-terminating computations where computations are carried out repetitively on incoming samples. Digital signal processing has been applied in audio, speech, video, image, and wired and wireless applications. This talk will focus on signal processing of proteins using bio-chemical reactions. Past work on signal processing of chemical reaction networks (CRNs) at Stanford, Berkeley, and LBL has provided a foundation for this effort. Our work differs in two respects. First, our work is based on digital signal processing as opposed to analog. Second, this work is based on synthesis as opposed to analysis of CRNs. In this talk, we review simple chemical computations that are one-time computations that terminate. All DSP computations are iterative and require the realization of delay elements. The delay operation refers to a transfer operation from one protein to another. The delay element can be part of a feed-forward path or a feedback loop. Implementing a delay operation through chemical reactions is a non-trivial task and is one of the major contributions of our work. We describe a 3-phase synchronization scheme, referred to as the RGB scheme, as a robust method of synchronization in biochemical reactions. The RGB scheme is then used to synthesize biochemical reactions for simple FIR and IIR digital filters. Solutions based on ordinary differential equations are presented for the synthesized biochemical reactions. A substrate is then considered for realizing these chemical reactions with DNA strands, based on work at Caltech. Simulation results of the chemical reactions using DNA strands are then presented to demonstrate the robustness and effectiveness of the proposed approach. An example is shown to illustrate that the RGB scheme can also be used as a clock in synchronous CRNs. Several limitations are then addressed, and ongoing efforts are described. Applications of the proposed methodology for biosensing, drug delivery, cross-talk cancellation, and equalization of protein-protein reactions are then outlined.
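For orientation only (ordinary software, not the chemical implementation discussed in the talk), the delay element that the biochemical synthesis must realize corresponds to the stored previous sample in a conventional filter; a minimal one-tap sketch in Java:

    // Reference software view of the filters being mapped onto chemistry.
    // The saved previous sample plays the role of the "delay element" that
    // the RGB synchronization scheme must realize with chemical reactions.
    public final class OneTapFilters {
        private double firDelay = 0.0;  // x[n-1], delay in a feed-forward path
        private double iirDelay = 0.0;  // y[n-1], delay inside a feedback loop

        // FIR: y[n] = 0.5*x[n] + 0.5*x[n-1]
        public double fir(double x) {
            double y = 0.5 * x + 0.5 * firDelay;
            firDelay = x;
            return y;
        }

        // IIR: y[n] = 0.5*x[n] + 0.5*y[n-1]
        public double iir(double x) {
            double y = 0.5 * x + 0.5 * iirDelay;
            iirDelay = y;
            return y;
        }
    }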

This is joint work with my colleague Prof. Marc Riedel, graduate student Hua Jiang, and undergraduate student Sasha Karam.


Yu Ru, University of California, San Diego, Tuesday, November 9th, 2010
Time and Location: 4-5pm (540 Cory)

Monitoring and Diagnosis of Complex Systems

In complex systems (e.g., electric grids, computer networks, automobiles), a single faulty behavior can cause cascading failures in a very short period and potentially lead to unpredictable results. In addition, due to complicated interactions between subsystems, monitoring and diagnosis of such systems can be extremely challenging, but they are essential for ensuring safe operation. Such systems can be abstracted as discrete event systems (DESs) at a high level. This talk focuses on two related topics: monitoring and diagnosis of a given DES equipped with limited sensors, and configuring a set of sensors of minimal cost in a given DES to achieve accurate monitoring and immediate fault diagnosis. Different methods and results are proposed to address these two problems, and the talk concludes by highlighting future research directions.


Karl H. Johansson, KTH, Thursday, November 4th, 2010
Time and Location: 4-5pm (540 Cory)

Wireless Control

There is a growing deployment of wireless networks in industrial control systems. Lower installation costs and easier system reconfiguration for wireless devices can have a major influence on the future application of distributed control and monitoring. There is, however, a lack of theory for understanding if and how the allocation of communication resources should be integrated with the control application. In this talk, we will discuss how the access scheme for the wireless medium can influence the closed-loop performance of the networked control system. It will be argued that the underlying scheduling-control problem has a non-classical information structure. Appropriate models for medium access control protocols will be introduced. It will be shown how these protocols can be tuned for various wireless control applications. We will also see that by making event-triggered transmissions based on decisions taken locally at the sensor and actuator nodes, it is possible to improve the design and to limit the use of communication resources. The talk will be illustrated by several examples from ongoing projects with Swedish industry. The presentation is based on joint work with collaborators at KTH.
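A common form of such an event-triggering rule, shown here only as a generic illustration (not the specific schemes developed in this work), is to transmit only when the locally measured signal has drifted far enough from the value last sent:

    // Generic event-triggered transmission rule at a sensor node: send a new
    // sample only when it deviates from the last transmitted value by more
    // than a threshold delta; otherwise stay silent and save bandwidth.
    public final class EventTrigger {
        private final double delta;
        private double lastSent = Double.NaN;

        public EventTrigger(double delta) { this.delta = delta; }

        // Returns true if the sample should be transmitted now.
        public boolean shouldTransmit(double sample) {
            if (Double.isNaN(lastSent) || Math.abs(sample - lastSent) >= delta) {
                lastSent = sample;
                return true;
            }
            return false;
        }
    }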


Christoph Kirsch, University of Salzburg, Tuesday, November 2nd, 2010
Time and Location: 4-5pm (540 Cory)

Scal: Non-linearizable Computing Breaks the Scalability Barrier

We propose a relaxed version of linearizability and a set of load balancing algorithms for trading off adherence to concurrent data structure semantics against scalability. We consider data structures that store elements in a given order, such as stacks and queues. Intuitively, a concurrent stack, for example, is linearizable if the effect of push and pop operations on the stack always occurs instantaneously. A linearizable stack guarantees that pop operations return the youngest stack elements first, i.e., the elements in the reverse order in which the operations that pushed them onto the stack took effect. Linearizability allows concurrent (but not sequential) operations to be reordered arbitrarily. We relax linearizability to k-linearizability with k > 0 to also allow sequences of up to k - 1 sequential operations to be reordered arbitrarily and thus execute concurrently. With a k-linearizable stack, for example, a pop operation may not return the youngest but the k-th youngest element on the stack. It turns out that k-linearizability may be tolerated by concurrent applications such as process schedulers and web servers that already use it implicitly. Moreover, k-linearizability does provide positive scalability in some cases because more operations may be executed concurrently, but it may still be too restrictive under high contention. We therefore propose a set of load balancing algorithms, which significantly improve scalability by approximating k-linearizability probabilistically. We introduce Scal, an open-source framework for implementing k-linearizable approximations of concurrent data structures, and show in multiple benchmarks that Scal provides positive scalability for concurrent data structures that typically do not scale under high contention.
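As a rough illustration of the idea (a sketch only, not Scal's actual implementation), a relaxed stack can be approximated by spreading operations over several partial stacks, so that a bounded number of neighboring operations may be reordered with respect to one another:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative sketch: a stack relaxed by round-robin load balancing over
    // p partial stacks. A pop may return an element a few positions away from
    // the true top, trading strict LIFO order (linearizability) for reduced
    // contention on any single stack.
    public final class RelaxedStack<T> {
        private final Deque<T>[] partials;
        private final AtomicLong pushTicket = new AtomicLong();
        private final AtomicLong popTicket = new AtomicLong();

        @SuppressWarnings("unchecked")
        public RelaxedStack(int p) {
            partials = new Deque[p];
            for (int i = 0; i < p; i++) partials[i] = new ArrayDeque<>();
        }

        public void push(T x) {
            int i = (int) (pushTicket.getAndIncrement() % partials.length);
            synchronized (partials[i]) { partials[i].push(x); }
        }

        public T pop() {
            int start = (int) (popTicket.getAndIncrement() % partials.length);
            for (int k = 0; k < partials.length; k++) {
                Deque<T> d = partials[(start + k) % partials.length];
                synchronized (d) { if (!d.isEmpty()) return d.pop(); }
            }
            return null;  // all partial stacks observed empty
        }
    }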


Hans Boehm, HP Labs, Friday, October 22nd, 2010
Time and Location: 3-4pm (540 Cory)

Programming language memory models: What do shared variables mean?

Although multithreaded programming languages are common, there has been a surprising amount of confusion surrounding the basic meaning of shared variables. There is finally a growing consensus, both that programming languages should by default guarantee an interleaving-based semantics for data-race-free programs, and on what that should mean. We discuss the motivation for, and both implementation and user consequences of, this guarantee.
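A minimal Java illustration of the guarantee (our example, not taken from the talk): the first version has a data race, so the semantics promises nothing about it, while the second is data-race-free and therefore behaves as some interleaving of its threads.

    // Racy: plain fields, unsynchronized concurrent access. The reader may
    // spin forever or print 0; the program has a data race.
    class Racy {
        static int data = 0;
        static boolean done = false;

        public static void main(String[] args) {
            new Thread(() -> { data = 42; done = true; }).start();
            while (!done) { /* spin */ }
            System.out.println(data);
        }
    }

    // Data-race-free: the volatile flag orders the accesses, so the
    // interleaving-based semantics applies and the reader must print 42.
    class DataRaceFree {
        static int data = 0;
        static volatile boolean done = false;

        public static void main(String[] args) {
            new Thread(() -> { data = 42; done = true; }).start();
            while (!done) { /* spin */ }
            System.out.println(data);
        }
    }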

Unfortunately, it is also increasingly clear that such a guarantee, though useful, is insufficient for languages intended to support sand-boxed execution of untrusted code. The existing solution in Java is only partially satisfactory. The solution to this problem is far less clear. We briefly outline one promising approach.


Pierluigi Nuzzo, UC Berkeley, Tuesday, October 12th, 2010
Time and Location: 4-5pm (540 Cory)

CalCS: Satisfiability Modulo Theory Solving for Non-Linear Convex Constraints

Certain formal verification tasks require reasoning about Boolean combinations of non-linear arithmetic constraints over the real numbers. In this talk, we present a new technique for satisfiability solving of Boolean combinations of non-linear constraints that are convex. Our approach applies fundamental results from the theory of convex programming to realize a satisfiability modulo theory (SMT) solver that uses a lazy combination of SAT and a theory solver. A key step in our algorithm is the use of complementary slackness and duality theory to generate succinct infeasibility proofs that support conflict-driven learning. Moreover, whenever non-convex constraints are produced from Boolean reasoning, we provide a procedure that generates conservative approximations of the original set of constraints by using geometric properties of convex sets and supporting hyperplanes. We have validated our solver, CalCS, on several benchmarks including formulas generated from bounded model checking of hybrid automata and static analysis of floating-point software.
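The flavor of the certificates involved can be stated generically (a textbook property of convex constraints, not a description of CalCS internals): if the theory solver finds multipliers \lambda_i \ge 0 such that

    \inf_{x} \; \sum_{i} \lambda_i \, g_i(x) \; > \; 0,

then the convex constraints g_i(x) \le 0 have no common solution, and the indices with \lambda_i > 0 identify a small infeasible core that can be handed back to the SAT solver as a learned conflict clause.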


Sanjit A. Seshia, UC Berkeley, Tuesday, October 5th, 2010
Time and Location: 4-5pm (540 Cory)

Integrating Induction, Deduction, and Structure for Verification and Synthesis

Even with impressive advances in formal methods over the last few decades, some problems in automatic verification and synthesis remain challenging. Examples include controller synthesis of hybrid systems with non-linear dynamics, and the verification of quantitative properties of software such as execution time. In this talk, I will present a new approach to automatic verification and synthesis based on a combination of inductive methods (learning from examples) and deductive methods (logical inference and constraint solving) using hypotheses about system structure.

Our approach integrates techniques such as satisfiability solving and theorem proving (SAT/SMT), game-theoretic online learning, learning polyhedra, numerical simulation, and fixpoint computation. I will illustrate this combination of inductive and deductive methods for three problems: (i) the synthesis of switching logic for hybrid systems; (ii) program synthesis applied to malware deobfuscation, and (iii) the verification of quantitative properties of embedded software.


Christopher Mayhew, UC Santa Barbara, Tuesday, September 21st, 2010
Time and Location: 4-5pm (540 Cory)

Hybrid Control for Topologically Constrained Systems

Many systems in engineering evolve on topologically complex manifolds that preclude the existence of globally asymptotically stabilizing continuous feedback control laws. Moreover, there exists no discontinuous global stabilizer that is robust to measurement noise for such systems. When one is willing to relax the admissible control policy to be hybrid, robust global asymptotic stabilization becomes possible.

In this talk, we present a Lyapunov-based hybrid control strategy that assures global asymptotic stabilization of a wide class of systems, provided that a "synergistic" family of Lyapunov functions is available. While synergistic families may be difficult to come by in general, we present useful findings for mechanical systems having rotational degrees of freedom, including the attitude of a rigid body. Finally, the attitude of a rigid body is often parametrized in terms of unit quaternions, which provide a globally nonsingular, yet two-to-one representation. This creates the need to stabilize a disconnected set of two points in the parametrization space. When such a parametrization is used, it is possible to induce an undesirable unwinding behavior, where the control law forces an unnecessary full rotation of the rigid body. We present a hybrid controller that achieves global asymptotic stability of rigid body attitude that is robust to measurement noise while avoiding unwinding.


Veselin Skendzic, Schweitzer Engineering Laboratories, Thursday, September 16th, 2010
Time and Location: 3:30-4:30pm (540 Cory)

Reinventing Our Energy Future: Smart Grid, Synchrophasors, and the Global Need for Absolute Time

Efficient use of energy is one of the core problems facing our civilization. A typical human can thrive on a 1,200 calorie diet, which is roughly equivalent to the amount of energy (gasoline) that the same person uses for a simple 0.7 mile commute.

The electric power system (arguably the largest machine ever built) has allowed us to transfer energy at will, with exceptional reliability and high efficiency. It has also allowed us to waste more, and it has made us highly dependent on its services, becoming directly linked to the growth of our society. Ensuring our growth has become synonymous with finding new ways to grow and improve the capabilities of the electric power system. This seminar looks at some of the key drivers behind the "Smart Grid" initiative and presents several new technologies (global communications, synchrophasors, and absolute time) being developed to operate the new grid.


Reinhard von Hanxleden, Christian-Albrechts-Universität Kiel, Wednesday, September 15th, 2010
Time and Location: 4-5pm (540 Cory)

Lightweight, Deterministic Concurrency and Preemption in C and Java

The control flow of reactive systems typically entails not just the sequential control flow found in traditional programming languages, such as conditionals and loops, but also exhibits concurrency and preemption. How to express this adequately in an imperative language such as C or Java is a notoriously difficult problem. Threads and associated synchronization primitives are widely used, but entail significant overhead and easily lead to non-determinism and deadlock.

I will present a light-weight approach to embed reactive, deterministic control flow in traditional sequential imperative programming languages. The approach is inspired by coroutines and the synchronous model of computation. A prototypical implementation for C, called Synchronous C (SC), is freely available. An adaptation for Java is currently under development.


Sean Humbert, University of Maryland, Thursday, September 9th, 2010
Time and Location: 4-5pm (540 Cory)

Guidance, Navigation, and Control Challenges of Aerial Microrobotics

Successful realization of insect-based aerial microrobotic vehicles faces significant hurdles due to fast dynamics, small payload capacities, and the absence of a quantitative reduced-order description of the flight mechanics. Moreover, the stringent size, weight, and power (SWaP) constraints render traditional sensing and processing approaches unusable. Insect nervous systems, which function under similar constraints, offer a promising alternative. In these small organisms, spatially distributed arrays of simple sensors pool localized measurements, which are processed in parallel by sensory interneurons that converge onto relatively small numbers of motor neurons responsible for controlling locomotion. In addition to this unique approach to the rapid extraction of information for stability augmentation and navigation, behavioral analysis and measurements from high-speed videography indicate that insects leverage passive aerodynamic mechanisms attendant to flapping motions to further minimize sensing and feedback requirements. This talk will address (a) a control- and information-theoretic framework in which to analyze the advantages of this unique sensorimotor architecture; (b) reduced-order models of flapping flight dynamics and analysis of high-speed videography of insect gust response to understand the passive aerodynamic stability mechanisms; and (c) hardware implementations of insect-inspired sensors and the development of a 10 g flapping micro air vehicle.

Website: http://www.avl.umd.edu/


Christoph Kirsch, University of Salzburg, Tuesday, September 7th, 2010
Time and Location: 4-5pm (540 Cory)

Short-term Memory for Self-collecting Mutators

We propose a new memory model for heap management, called short-term memory, and concurrent implementations of short-term memory for Java and C, called self-collecting mutators. In short-term memory, objects on the heap expire after a finite amount of time, which makes deallocation unnecessary. Self-collecting mutators requires programmer support to control the progress of time and thereby enable reclaiming the memory of expired objects. We informally describe a simple translation scheme for porting existing programs to self-collecting mutators. As shown by our experimental results on several benchmarks, self-collecting mutators performs competitively with garbage-collected and explicitly managed systems. Unlike garbage-collected systems, self-collecting mutators does not introduce pause times; it provides constant execution time for all operations, independent of the number of reachable objects, and constant short-term memory consumption after a steady state has been reached. Self-collecting mutators can be linked against any unmodified C code, introducing a per-object space overhead of one word and negligible runtime overhead. Short-term memory may then be used to remove explicit deallocation of some, but not necessarily all, objects.
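A purely hypothetical sketch of the programming model (names and API are ours, for illustration only; the real systems operate on the ordinary C and Java heaps): allocations are stamped with the current logical time, the programmer advances time explicitly, and anything whose expiration time has passed becomes reclaimable.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical illustration of short-term memory semantics: objects expire
    // a fixed number of "ticks" after allocation, and reclamation happens as a
    // side effect of the program advancing logical time (no free, no GC pause).
    final class ShortTermRegion<T> {
        private static final class Cell<T> {
            final T obj; final long expires;
            Cell(T obj, long expires) { this.obj = obj; this.expires = expires; }
        }

        private final Deque<Cell<T>> live = new ArrayDeque<>();
        private final long lifetime;  // number of ticks an object stays valid
        private long now = 0;

        ShortTermRegion(long lifetime) { this.lifetime = lifetime; }

        T allocate(T obj) {
            live.addLast(new Cell<>(obj, now + lifetime));
            return obj;
        }

        // Programmer-controlled progress of time; expired objects are dropped
        // from the region and become collectible by the ordinary allocator.
        void tick() {
            now++;
            while (!live.isEmpty() && live.peekFirst().expires < now) {
                live.removeFirst();
            }
        }
    }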

Joint work with Martin Aigner, Andreas Haas, and Ana Sokolova.

Website: http://tiptoe.cs.uni-salzburg.at/short-term-memory/


Mikael Johansson, KTH - Royal Institute of Technology, Tuesday, August 31st, 2010
Time and Location: 4-5pm (540 Cory)

Reliable Real-Time Sensor Networking for Control

It is well known that communication delays and packet losses impair the achievable performance of closed-loop control systems. At the same time, it is also well known that there is an inherent trade-off between communication latency and reliability. In this talk, we will develop scheduling policies for unreliable multi-hop mesh networks that attain the optimal performance of networked control loops. To understand the application requirements of industrial monitoring and control, we first study a simple estimation problem under unreliable communication. This allows us to derive a closed-form expression of how the optimal estimation error variance depends on communication latency and loss. We then shift focus to transmission scheduling and present optimal policies for minimizing the latency of data collection and command dissemination operations. We observe that these latency-optimal policies can be unreliable in the presence of packet losses and that higher reliability can be obtained at the expense of longer delay. To characterize the trade-off between latency and loss, we develop joint routing and scheduling policies that maximize reliability under a hard deadline constraint. The solution to this problem allows us to trace the achievable loss-latency region for any given multi-hop mesh network and, hence, the achievable performance of a control loop closed over the network. Implications of our results for current and emerging standards are discussed, and experimental validation results are presented. The talk is based on joint work with students, postdocs and colleagues at KTH and SICS.


Spring 2011 Seminars

Didier Dubois, Université Paul Sabatier, Toulouse, Thursday, May 19th, 2011
Time and Location: 4-5pm (540 Cory)

A unified view of uncertainty theories

The notion of uncertainty has been a controversial issue for a long time. In particular, the prominence of probability theory in the scientific arena has blurred some distinctions that were present from its inception, namely between uncertainty due to the variability of physical phenomena and uncertainty due to a lack of information. The Bayesian school claims that whatever its origin, uncertainty can be modeled by single probability distributions. This assumption has been questioned in the last thirty years or so. Indeed, the use of unique distributions to account for incomplete information leads to paradoxical uses of probability theory. In the area of risk analysis, especially concerning environmental matters, it is crucial to account for variability and incomplete information separately, even if conjointly, in uncertainty propagation techniques. Indeed, it should be clear at the decision level which part of the uncertainty is due to partial ignorance (and hence reducible by collecting more information) and which part is due to variability (to be faced with concrete actions). New uncertainty theories have emerged which have the potential to meet this challenge, where the unique distribution is replaced by a convex set of probabilities, the set being larger when less information is present. Special cases of such representations, which enable efficient calculation methods, are based on random sets and possibility theory (using fuzzy sets of possible values). The aim of this talk is to trigger interest in these new approaches by explaining their epistemology and illustrating them on some applications.


Christine Morin, INRIA, Thursday, May 12th, 2011
Time and Location: 4-5pm (540 Cory)

Building Large Scale Dynamic Computing Infrastructures over Clouds

The popularity of virtualization has paved the way for the advent of the cloud computing model. In cloud computing infrastructures, providers make use of virtualization technologies to offer flexible, on-demand provisioning of resources to customers. Combining both public and private infrastructures creates so-called hybrid clouds, allowing companies and institutions to manage their computing infrastructures in flexible ways and to dynamically take advantage of externally provided resources. Considering the growing needs for large computation power and the availability of a growing number of clouds distributed over the Internet and across the globe, our work focuses on two principal objectives: 1) leveraging virtualization and multiple cloud computing infrastructures to build distributed large scale computing platforms, 2) developing mechanisms to make these infrastructures more dynamic – thereby offering new ways to exploit the inherent dynamic nature of distributed clouds. First, we present how we build large scale computing infrastructures by harnessing resources from multiple distributed clouds. Then, we describe the different mechanisms we developed to allow efficient inter-cloud live migration, which is a major building block for taking advantage of dynamicity in distributed clouds.


João Barros, Universidade do Porto, Portugal, Tuesday, May 10th, 2011
Time and Location: 4-5pm (540 Cory)

Vehicular Networks: System Modeling and Real-World Deployment

Vehicle-to-vehicle (V2V) communication is emerging as the paradigm of choice for a number of traffic safety, traffic management, assisted driving, and infotainment services. The design of protocols and applications based on V2V platforms is largely dependent on the availability of realistic models for key system aspects, in particular for the mobility patterns of the vehicles and the characteristics of V2V wireless channels. To meet these challenges, we start by combining large-scale traffic simulation with advanced network analysis in order to characterize fundamental graph-theoretic parameters. A comparison between our results and popular mobility models, such as the random waypoint model and the Manhattan mobility model, reveals striking differences in the obtained connectivity profiles, thus casting some doubt on the adequacy of simple mobility models. Addressing the implications of the wireless channel, we consider the interplay between single-hop channel models and large-scale network connectivity. By progressively increasing the sophistication of the wireless link while evaluating the resulting connectivity profiles, we are able to show that the large-scale node degree behavior of a complex shadow fading environment is well approximated by a simpler and more tractable unit-disk model.
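For readers unfamiliar with the term, the unit-disk abstraction simply declares two vehicles connected whenever their distance is below a fixed communication radius; a small sketch (illustrative only):

    // Unit-disk connectivity: vehicles i and j are neighbors iff their distance
    // is at most the radius r. The resulting node degrees are the quantity the
    // talk compares against richer shadow-fading channel models.
    public final class UnitDisk {
        public static int[] degrees(double[][] pos, double r) {
            int n = pos.length;
            int[] deg = new int[n];
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    double dx = pos[i][0] - pos[j][0];
                    double dy = pos[i][1] - pos[j][1];
                    if (Math.hypot(dx, dy) <= r) { deg[i]++; deg[j]++; }
                }
            }
            return deg;
        }
    }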

We further observe that existing research in vehicular ad-hoc networks typically disregards the effect of vehicles as physical obstructions to wireless propagation. Using two cars equipped with the new V2V communication standard, Dedicated Short Range Communications (DSRC, IEEE 802.11p), we performed extensive experimental measurements in order to collect received signal power and packet delivery ratio information in several relevant scenarios: parking lot, open space, highway, suburban, and urban canyon. Upon separating the data into line of sight (LOS) and non-line of sight (NLOS) categories, our results show that obstructing vehicles have a significant impact on channel quality, with important implications for the design of upper-layer protocols. A single obstacle can cause a drop of over 20 dB in received signal strength when two cars communicate at a distance of 10 m. This motivates us to introduce a new V2V model, in which vehicles are accounted for as physical three-dimensional obstacles that affect LOS and induce significant attenuation and packet loss. The algorithm behind the proposed model allows for computationally efficient implementation in VANET simulators, thus adding to the realism of supporting tools for protocol design.

We shall conclude the talk by elaborating on the real-world deployment of a vehicular ad-hoc network with 500 nodes currently under way in the city of Porto, Portugal.

Joint work with Mate Boban, Rui Meireles, Hugo Conceição, Tiago Vinhoza, Michel Ferreira (U. Porto), Ozan Tonguz (CMU), Peter Steenkiste (CMU) and Susana Sargento (U. Aveiro).


Jia Zou, UC Berkeley, Thursday, May 5th, 2011
Time and Location: 4-5pm (540 Cory)

From Ptides to PtidyOS, Programming Distributed Real-Time Systems

Real-time systems are those whose correctness depends not only on logical operations but also on timing delays in response to environment triggers. Thus programs that implement these systems must satisfy constraints on response time. However, most of these systems today are designed using abstractions that do not capture timing properties. For example, a programming language such as C does not provide constructs that specify how long computation takes. Instead, system timing properties are inferred from low-level hardware details. This effectively means conventional programming languages fail as a proper abstraction for real-time systems. To tackle this problem, a programming model called "Ptides" was first introduced by Yang Zhao. Ptides builds on a solid foundation in the discrete-event (DE) model of computation. By leveraging the temporal semantics of DE, Ptides captures both the functional and timing aspects of the system. This thesis extends prior work by providing a set of execution strategies that make efficient use of computation resources and guarantee deterministic functional and timing behaviors. A complete design flow based on these strategies is then presented.

Our workflow starts with a programming environment where a distributed real-time application is expressed as a Ptides model. The model captures both the logical operations of the system and the desired timing of interactions with the environment. The Ptides simulator supports simulation of both of these aspects. If execution times are available, this information can be annotated as part of the model to show whether the desired timing can be achieved in that implementation. Once satisfied with the design, a code generator can be used to glue the application code together with a real-time operating system called PtidyOS. To ensure the responsiveness of the real-time program, PtidyOS's scheduler combines Ptides semantics with earliest-deadline-first (EDF) scheduling. To minimize the scheduling overhead associated with context switching, PtidyOS uses a single stack frame for event execution, while still enabling event preemption. The first prototype of PtidyOS is implemented on a Luminary microcontroller. We demonstrate the Ptides workflow through a motion control application.
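To make the scheduling ingredient concrete, a generic earliest-deadline-first dispatcher looks roughly like the following (an illustration of EDF only, not PtidyOS code):

    import java.util.PriorityQueue;

    // Generic EDF dispatching: among released events, always run the one with
    // the earliest deadline. PtidyOS couples a policy of this kind with Ptides'
    // timestamp-based safe-to-process analysis and single-stack preemption.
    final class EdfQueue {
        static final class Event {
            final long deadline;
            final Runnable action;
            Event(long deadline, Runnable action) { this.deadline = deadline; this.action = action; }
        }

        private final PriorityQueue<Event> ready =
            new PriorityQueue<>((a, b) -> Long.compare(a.deadline, b.deadline));

        void release(Event e) { ready.add(e); }

        void dispatchOne() {
            Event e = ready.poll();  // earliest deadline first
            if (e != null) e.action.run();
        }
    }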


Rob Wood, Harvard University, Thursday, April 28th, 2011
Time and Location: 4-5pm (521 Cory)

Progress in Insect-Scale Robots

We seek to elucidate how to apply biological principles to the creation of robust, agile, inexpensive robotic insects. However, biological inspiration alone is not sufficient to create robots that mimic the agile locomotion of their arthropod analogs. This is particularly true as the characteristic size of the robot is decreased: to create high performance robotic insects, we must explore novel manufacturing paradigms, develop a greater understanding of complex fluid-structure interactions for flapping-wings, generate high efficiency power and control electronics, create new forms of actuation and sensing, and explore alternative control strategies for computation-limited systems. This talk will describe challenges for the creation of robotic insects and the state of the art for each of these topics. I will also give an overview of the topics we are addressing in the NSF Expeditions in Computing 'RoboBees' project.


Michel Beaudouin-Lafon, University of Paris-Sud, Thursday, April 21st, 2011
Time and Location: 4-5pm (521 Cory)

Lessons from the WILD Room, an Interactive Multi-Surface Environment

Creating the next generation of interactive systems requires experimental platforms that let us explore novel forms of interaction in real settings. The WILD room (Wall-size interaction with large data sets, http://www.lri.fr/~mbl/WILD) is a high-performance interactive visualization environment for exploring the notion of multi-surface interaction. WILD combines an ultra-high resolution wall display, a multitouch table, a motion tracking system and various mobile devices. Our target users are scientists who are confronted with massive amounts of data, including astrophysicists, biologists and neuroscientists. I will describe the design trade-offs and lessons learned during the development of this platform with respect to hardware, interaction, software engineering, and participatory design of applications.


Martin Törngren, KTH, Wednesday, April 20th, 2011
Time and Location: 2-3pm (521 Cory)

Systematic and cost-efficient tool-chains for embedded systems - Trends, research efforts and challenges

The typical embedded systems development environment today consists of many specialized development tools, forming a complex tool landscape with only partial integration. The problems span maturing and evolving domain practices (for example, immature requirements engineering), cross-domain information management capabilities (for example, traceability and configuration management), and model-based design (for example, formal verification and synthesis capabilities integrated with design models). The iFEST ARTEMIS research project aims to remedy the highly fragmented landscape of incompatible tools for embedded systems, with particular emphasis on HW/SW codesign and lifecycle aspects. iFEST is developing a tool integration framework, realizing it as part of a number of platforms and tool-chains, and will evaluate the capabilities of the tool-chains in real industrial development settings. Standardization of the framework is an explicit goal for iFEST. We discuss the challenges facing tool integration and the iFEST approach to overcoming them.


Jérôme Hugues, ISAE, Toulouse, Tuesday, April 19th, 2011
Time and Location: 4-5pm (540 Cory)

AADL for Cyber-Physical Systems: Semantics and beyond, validate what's next

The SAE Architecture Analysis and Design Language is a design-by-committee standard originally promoted to serve the space and avionics domains. It now reaches a much broader audience, and the language is used in many domains related to Cyber-Physical Systems. AADL is an ADL promoted in the context of Model-Driven Engineering, which has now gained significant momentum in industry. Models are a valuable asset that should be used and preserved down to the construction of the final system; modeling time and effort should be reduced to focus directly on the system and its realization. Yet validation & verification may require many different analysis models, each demanding a strong theoretical background. The SAE AADL has been defined to match the concepts understood by any engineer (interfaces, software or hardware components, packages, generics). From these concepts, typical behavioral elements (scheduling and dispatch, communication mechanisms) have been added using both formal and informal descriptions, always bound to theoretical frameworks for V&V. In parallel, the AADL allows one to attach user-defined properties or languages for specific analyses. This enables the application of many different techniques for the analysis of AADL models, among them schedulability, safety, security, fault propagation, model checking, and resource dimensioning, as well as code generation.

In this talk, we give an overview of the AADL, and discuss how to use its features to analyse in depth a CPS case study.


Herbert Tanner, University of Delaware, Friday April 15th, 2011
Time and Location: 3-4pm (540 Cory)

From Individual to Cooperative Robotic Behavior

Under this general title we discuss different planning and control problems in robotic systems, at progressively more abstract modeling levels. We start with individual and cooperative robot navigation, move on to mobile sensor network coordination, and conclude with behavior planning in interacting robotic hybrid systems. Although the technical approaches and tools at each abstraction level are different, the overarching idea is that these methods can be made to converge into a "correct-by-design" framework for cooperative control, from conceptual high level planning to low level implementation.


Kanna Rajan, MBARI, Tuesday, April 5th, 2011
Time and Location: 4-5pm (540 Cory)

Autonomy from Outer to Inner Space: Automated Inference for Sampling and Control for Marine Robotics

Ocean science the world over is at a cusp, with a move from the expeditionary to the observatory mode of doing science. Recent policy decisions in the United States are pushing the technology for persistent observation and sampling, which hitherto had been either economically unrealistic or unrealizable due to technical constraints. With the advent of ocean observatories, a number of key technologies have, however, proven to be promising for sustained ocean presence. In this context, robots will need to be contextually aware and respond rapidly to evolving phenomena, especially in coastal waters, due to the diversity of atmospheric, oceanographic, and land-sea interactions, not to mention the societal impact they have on coastal communities. They will need to respond by exhibiting scientific opportunism while being aware of their own limitations in the harsh oceanic environment. Current robotic platforms, however, have inherent limitations; pre-defined sequences of commands are used to determine what actions the robot will perform and when, irrespective of the context. As a consequence, not only can the robots not recover from unforeseen failure conditions, but they are also unable to significantly leverage their substantial onboard assets to enable scientific discovery.

To mitigate such shortcomings, we are developing deliberative techniques to dynamically command Autonomous Underwater Vehicles (AUVs). Our effort aims to use a blend of generative and deliberative Artificial Intelligence planning and execution techniques to shed goals, introspectively analyze onboard resources, and recover from failures. In addition, we are working on Machine Learning techniques to adaptively trigger science instruments that will contextually sample the seas driven by scientific intent. The end goal is unstructured exploration of subsea environments, which are a rich trove of problems for autonomous systems. Our approach spans domains and is not unduly specific to the ocean: the developed system is being used for a terrestrial personal robot at a Silicon Valley startup and is being tested on a planetary rover testbed by the European Space Agency. Our work is a continuum of efforts from research at NASA to command deep space probes and Mars rovers, the lessons of which we have factored into the oceanic domain. In this talk I will articulate the challenges of working in this hostile underwater domain, lay out the differences, and motivate our architecture for goal-driven autonomy on AUVs.


Nora Ayanian, University of Pennsylvania, Tuesday, March 29th, 2011
Time and Location: 2:30-3:30pm (540 Cory)

Automatic Synthesis of Multirobot Feedback Control Policies

Using a group of robots in place of a single complex robot to accomplish a task has many benefits, including simplified system repair, less down time, and lower cost. Combining heterogeneous groups of these multi-robot systems allows addressing multiple subtasks in parallel, reducing the time it takes to address many problems, such as search and rescue, reconnaissance, and mine detection. These missions demand different roles for robots, necessitating a strategy for coordinated autonomy while respecting any constraints the environment may impose. I am interested in synthesizing controllers for heterogeneous multirobot systems, a problem that is particularly challenging because of inter-robot constraints such as communication maintenance and collision avoidance, the need to coordinate robots within groups, and the dynamics of individual robots.

I will present globally convergent feedback policies for navigating groups of heterogeneous robots in known constrained environments. Provably correct by construction, our approach automatically and concurrently solves both the planning and control problems by decomposing the space into cells and sequentially composing local feedback controllers. The approach is useful for navigation and for creating and maintaining formations while maintaining desired communication and avoiding collisions. I will also extend this methodology to large groups of robots by using abstractions to manage complexity. This provides a framework with which navigation of multiple groups in environments with obstacles is possible, and permits scaling to many groups of robots. Finally, we show that this automatic controller synthesis enables the design of feedback policies from user-specified high-level task specifications.


Manfred Broy, Technical University of Munich, Tuesday, March 29th, 2011
Time and Location: 4-5pm (540 Cory)

Multi-Functional Systems: Towards a Theory for Requirements Specification and Architecture Design

This lecture introduces a theory for the identification, modeling, and formalization of two complementary views onto software and software-intensive systems, called the problem view and the solution view. The problem view addresses requirements engineering for describing the overall system functionality from the users' point of view, aiming at the specification of the functional requirements of multi-functional systems in terms of their functions as well as their mutual dependencies. This view leads to a function or service hierarchy. The solution view essentially addresses the design phase, decomposing systems into logical architectures formed by networks of interactive components specified by their interface behavior.

Both views are closely related and are helpful for the structured modeling of multi-functional systems during their development. We show how the two complementary views work and fit together as major milestones in the early phases of software and systems development. We, in particular, base our approach on the FOCUS theory for describing interface behavior and the structuring of systems into components. We give a theoretical treatment of both views by extending the FOCUS model and its interface theory accordingly.


Aaron Ames, Texas A&M, Thursday, March 17th, 2011
Time and Location: 4-5pm (521 Cory)

From Human Data to Bipedal Robotic Walking and Beyond

Humans have the amazing ability to walk with deceptive ease, navigating everything from daily environments to uneven and uncertain terrain with efficiency and robustness. If these same abilities can be imbued into robotic devices, the potential applications are far-reaching: from legged robots for space exploration to the next generation of prosthetic devices.

The purpose of this talk is to present the process of achieving human-like bipedal robotic walking by looking to humans, and specifically human walking data, to design formal models and controllers. The fundamental principle behind this process is that regardless of the complexity present in human walking—hundreds of degrees of freedom coupled with highly nonlinear dynamics and forcing—the essential information needed to understand walking is encoded in simple functions of the kinematics, or “outputs,” of the human, and this fundamental principle can be applied to obtain both models and controllers. At the level of models, we find that all humans display the same temporal ordering of events, or contact points, throughout a walking gait; this information uniquely determines a mathematical hybrid system model for a given bipedal robot. At the level of controllers, we find that humans display simple behavior for certain canonical “output” functions; by designing controllers that achieve the same output behavior in robots, we are able to achieve surprisingly human-like dynamic walking. The method used to achieve this walking allows for extensions beyond robotic walking; it can be used to quantify how “human-like” a walking gait is, and has potential applications to the design and simulation of controllers for prosthetic devices.


Emo Todorov, University of Washington, Tuesday, March 1st, 2011
Time and Location: 4-5pm (540 Cory)

Dynamic Intelligence through Online Optimization

Optimal control is appealing because one can in principle specify the task-level goals, and let numerical optimization figure out the movement details. In practice however such optimization is extremely challenging. Short of guessing the form of the solution and encoding the guess in terms of "features", the only way to get around the curse of dimensionality is to rely on local methods. These methods are most effective when initialized at the current state and applied online, as in Model Predictive Control or MPC. MPC has traditionally been applied to systems with slow and smooth dynamics such as chemical processes. Its applications in robotics are rare, because the requirement for real-time optimization is difficult to meet.

In this talk I will describe our ongoing efforts to make MPC a reality in robotics. These efforts fall in three categories: developing better optimization methods - especially methods that can deal with contact dynamics; developing a fast and accurate physics engine tailored to control rather than simulation; and applying the methodology to a range of simulated and real robots. One of these applications has been a hit on YouTube.

I will also discuss optimal control as a theory of sensorimotor function. This is a well-developed theory but the focus here will be new. It is remarkable that small nervous systems can generate complex, fast and accurate movements that are not inferior to the movements of primates in any obvious way. What then is the difference between large and small brains? Large brains obviously produce more behaviors, however that observation alone does not point to a qualitative difference in terms of neural function (and indeed most neuroscientists studying small brains would like to believe that there is no such difference). I will argue that there is a qualitative difference: large brains contain the optimization machinery which gives rise to new behaviors, while small brains have been optimized by evolution and do not have this machinery. Once optimization machinery is part of the system, it can be used not only for offline learning but also for online fine-tuning (in conjunction with an internal model, as in MPC). This dual use blurs the distinction between learning/planning and execution, in general agreement with recent physiological evidence which suggests that planning and execution are done by the same neural circuits.


David Broman, Linköping University, Thursday, February 17th, 2011
Time and Location: 4-5pm (540 Cory)

Modeling Kernel Language (MKL) - A formal and extensible approach to equation-based modeling languages

Performing computational experiments on mathematical models instead of building and testing physical prototypes can drastically reduce the development cost for complex systems such as automobiles, aircraft, and power plants. In the past three decades, a category of equation-based modeling languages has appeared that is based on acausal and object-oriented modeling principles, enabling good reuse of models. Examples of these languages are Modelica, VHDL-AMS, and gPROMS. However, the modeling languages within this category have grown large and complex, and the specifications of their semantics are informally defined, typically described in natural language. The lack of a formal semantics makes these languages hard to interpret unambiguously and to reason about. In this talk, a new research language called the Modeling Kernel Language (MKL) is presented. By introducing the concept of higher-order acausal models (HOAMs), we show that it is possible to create expressive modeling libraries in a manner analogous to Modelica, but using a small and simple language concept. In contrast to the current state-of-the-art modeling languages, the semantics of how to use the models, including meta operations on models, are also specified in MKL libraries. This enables extensible formal executable specifications where important language features are expressed through libraries rather than by adding completely new language constructs. MKL is a statically typed language based on a typed lambda calculus. We define the core of the language formally using operational semantics and prove type safety. An MKL interpreter is implemented and verified in comparison with a Modelica environment.


Vijay Kumar, University of Pennsylvania, Friday February 11th, 2011
Time and Location: 3-4pm (540 Cory)

Autonomous 3-D Flight and Cooperative Control of Multiple Micro Aerial Vehicles

There are now tens of thousands of Unmanned Aerial Vehicles (UAVs) in operation but most UAVs are not autonomous. There are very few micro UAVs capable of navigating indoor or urban environments. I will describe our recent work on control and planning for multi-rotor micro UAVs in complex environments. I will also describe approaches to cooperative control enabling such tasks as persistent surveillance, cooperative manipulation and transport of large payloads.


Lothar Thiele, Swiss Federal Institute of Technology Zurich, Tuesday, February 8th, 2011
Time and Location: 4-5pm (540 Cory)

Embedded Multiprocessors - Performance Analysis and Design

During the system level design process of an embedded system, a designer is typically faced with questions such as whether the timing properties of a certain system design will meet the design requirements, what architectural element will act as a bottleneck, or what the on-chip memory requirements will be. Consequently it becomes one of the major challenges in the design process to analyze specific characteristics of a system design, such as end-to-end delays, buffer requirements, or throughput in an early design stage, to support making important design decisions before much time is invested in detailed implementations. This analysis is generally referred to as system level performance analysis. If the results of such an analysis are able to give guarantees on the overall system behavior, it can also be applied after the implementation phase in order to verify critical system properties.

One of the major requirements for models, methods, and tools is their support for modular, component-based design. This covers the composition of the underlying hardware platform as well as the software design. Because resource interaction plays such an important role, these components need to capture not only functional properties but also resource interaction.

The talk will cover the following aspects of system level performance analysis of distributed embedded systems:

  • Approaches to system-level performance analysis. Requirements in terms of accuracy, scalability, composability and modularity.
  • Modular Performance Analysis (MPA): basic principles, methods and tool support.
  • Examples that show the applicability: An environment to map applications onto multiprocessor platforms including specification, simulation, performance evaluation and mapping of distributed algorithms; analysis of memory access and I/O interaction on shared buses in multi-core systems; worst-case temperature analysis of embedded multiprocessor systems.
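As a point of reference for the Modular Performance Analysis mentioned above, the classical Real-Time Calculus bounds (quoted here for illustration) take an upper arrival curve \alpha, bounding the demand that can arrive in any time window of a given length, and a lower service curve \beta, bounding the service offered, and yield

    \text{delay} \;\le\; \sup_{t \ge 0}\, \inf\{\tau \ge 0 : \alpha(t) \le \beta(t + \tau)\},
    \qquad
    \text{backlog} \;\le\; \sup_{t \ge 0}\, \{\alpha(t) - \beta(t)\}.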

Dana Nau, University of Maryland, Thursday, January 13th, 2011
Time and Location: 4-5pm (540 Cory)

Building and Utilizing Predictive Models in Non-Zero-Sum Games

Suppose you are interacting with one or more agents who are unfamiliar to you. How can you decide the best way to behave? Suppose further that the environment is a "noisy" one in which agents' observations of each other's behavior are not always correct. How should this affect your behavior? I will discuss the above questions in the context of the Iterated Prisoner's Dilemma and several other non-zero-sum games. I'll describe algorithms that build predictive models of agents based on observations of their behavior. These models can be used to filter out noise, and to construct strategies that optimize expected utility. Experimental evaluations of these algorithms show that they work quite well. For example, DBS, an agent based on one of the algorithms, placed third out of 165 contestants in an international competition of the Iterated Prisoner's Dilemma with Noise. Only two agents scored higher than DBS, and both of them used a "master-and-slaves" strategy in which a large number of "slave" agents deliberately conspired to raise the score of a single "master" agent.



Fall 2011 Seminars
Martin Toerngren, KTH, Sweden, Monday, September 19th, 2011

Timing problems and opportunities for embedded control systems; modeling and co-design

This presentation will address timing problems in the design of discrete-time feedback control systems and in their codesign with the embedded systems implementation. The talk will, by way of introduction, provide an embedded control systems demo to highlight central characteristics such as modes of operation, feed-forward, feedback, different timing constraints, and urgency vs. importance. Perspectives on timing problems will be provided, including a snapshot of engineering practices (mainly automotive), an overview of different types of timing requirements, the derivation of timing constraints for feedback controllers, and the codesign of controllers and their real-time implementation. The latter part of the talk will discuss gaps between control theory and scheduling theory, and address approaches for codesign encompassing different modeling abstractions.

Bio: Martin Toerngren studied Mechanical Engineering with a specialization in Mechatronics at KTH. After completing his Masters (Civ. Ing.) in 1987, he began as a teacher at KTH, developing and giving courses in Mechatronics and Embedded Systems. He was then offered a PhD student position, which resulted in a dissertation in 1995: "Modeling and design of distributed real-time control applications". Since this area was then not very developed, his time as a PhD student included networking and creating PhD student courses. Examples of such activities included starting the interest groups on "Distributed control" and the "Swedish Transputers users group". In 1992 Martin Toerngren participated in the NATO Advanced Study Institute on Real-time computing. After his dissertation, Martin Toerngren co-founded the company Fengco Real-time Control AB, specializing in advanced tools for developers of embedded control systems and related consultancy. The following period, 1996-1999, was dedicated to building up a research group in the area of embedded control systems within the division of Mechatronics at KTH. During 1998, Martin Toerngren spent half a year as a post-doc at the European Commission Joint Research Centre in Ispra, Italy. He was appointed Professor at KTH for the new chair in Embedded Control Systems in 2002.


Alexandre Donze, Verimag, Grenoble, Monday, September 26th, 2011

Robust Satisfaction of Signal Temporal Logics and Applications

Signal Temporal Logic (STL) is well suited to rigorously specifying properties of real-valued, dense-time signals, such as traces resulting from the simulation of continuous and hybrid systems. Recently, we extended STL with a quantitative (robust) interpretation which provides a numerical margin by which a simulation trace satisfies or violates a property. Moreover, we can estimate in some cases the sensitivity of this margin to a parameter change. By combining this information with different parameter-exploration strategies, we obtain an efficient methodology to investigate which properties are satisfied by a model, how robustly these properties are satisfied, and how to find parameter values that guarantee robust satisfaction. I will describe this methodology and the toolbox which implements it, Breach, along with different application examples.
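As a concrete flavor of the quantitative semantics (a simplified illustration, not Breach itself): for the safety property "always x < c" over a sampled trace, the robustness margin is the worst-case slack, positive when the property is satisfied and negative when it is violated.

    // Robustness of the STL formula  G (x < c)  over a uniformly sampled trace:
    // rho = min over samples of (c - x[k]). rho > 0 means robust satisfaction,
    // rho < 0 means violation, and |rho| is the numerical margin.
    public final class StlRobustness {
        public static double alwaysLessThan(double[] x, double c) {
            double rho = Double.POSITIVE_INFINITY;
            for (double v : x) rho = Math.min(rho, c - v);
            return rho;
        }
    }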

Bio: Alexandre Donze is a post-doctoral researcher at Verimag, Grenoble, working in the hybrid and timed systems group led by Oded Maler, where he also did his thesis under the co-supervision of Oded Maler and Thao Dang. From October 2007 to October 2008, he was a post-doctoral researcher at Carnegie Mellon University, working with Edmund M. Clarke and Bruce H. Krogh. In June 2009 he also started a collaboration with the TIMC laboratory (laboratory of techniques for biomedical engineering and complexity management), in particular with Eric Fanchon and Philippe Tracqui. The main goal of his research is to develop mathematical and computational tools for the analysis and design of dynamical systems arising from different domains, in particular embedded systems (software interacting with a physical environment), analog and mixed-signal circuits, and biological systems.


Christian Motika, Christian-Albrechts-Universitaet, Kiel, Germany, Monday, October 3rd, 2011

Synchronous statecharts for executing Esterel with Ptolemy

The statecharts formalism, proposed by David Harel, is a well-known approach for modeling control-intensive tasks such as reactive systems. Synchronous languages like Esterel help to separate concerns with respect to the functionality and the timing of reactive systems. SyncCharts are the natural adaptation of Harel statecharts to the synchronous world. Esterel programs can be transformed into their graphical SyncCharts counterparts. SyncCharts can be simulated using Ptolemy as a simulation backend. This talk gives an introduction to Esterel and SyncCharts, an overview of the transformation, and a look at the simulation implemented in the KIELER framework.

Bio: Christian Motika is a Ph.D. student at the Christian-Albrechts-Universitaet zu Kiel in northern Germany. He is part of the Real-Time and Embedded Systems Group of Prof. Reinhard von Hanxleden. His special interests are synchronous languages, model-based design, and concurrent programming. He finished his master's thesis in late 2009 and is part of the collaboration between his group and the group of Prof. Edward Lee at UC Berkeley.


Vigyan Singhal, Oski Technology, Monday, October 17th, 2011

Deploying Formal in a Simulation World

Missing bugs in hardware designs is expensive enough that the semiconductor industry routinely spends two thirds of the design cycle verifying designs before tapeout. Formal verification technology has today advanced to the stage that it can complement or replace simulation effort for selected hardware designs. However, the adoption of formal verification in a chip design schedule requires (a) planning, and deciding where to apply formal; (b) verifying efficiently, often through the use of manually crafted abstractions; and (c) measuring coverage at the end of the verification process, to determine how much was formally verified, so as to fit the formal effort into the context of the popular simulation-based coverage metrics.

Bio: Vigyan Singhal is the CEO of Oski Technology, which specializes in applying formal verification to hardware designs. Vigyan was previously the founder and CEO of two VC-funded startups: Jasper Design Automation in EDA, and Elastix in low-power semiconductors. Vigyan has a PhD from the CAD group at UC Berkeley and a BTech in Computer Science from IIT Kanpur, where he graduated at the top of his class.


Jan M. Rabaey, UC Berkeley, Monday, October 24th, 2011

The Swarm at the Edge of the Cloud - the New Face of Wireless

Mobile devices such as laptops, netbooks, tablets, smart phones and game consoles have become our de facto interface to the vast amount of information delivery and processing capabilities of the cloud. The move to mobility has been enabled by the dual forces of ubiquitous wireless connectivity combined with the increasing energy efficiency offered by Moore's law. Yet, a major component of the mobile experience remains largely untapped: the capability to interact with the world immediately around us. A third layer of information acquisition and processing devices - commonly called the sensory swarm - is emerging, enabled by even more pervasive wireless networking and the introduction of novel ultra-low power technologies. This gives rise to the true emergence of concepts such as cyber-physical and bio-cyber systems, immersive computing, and augmented reality. The functionality of the swarm arises from connections of devices, leading to a convergence between Moore's and Metcalfe's laws, in which scaling no longer refers to the number of transistors per chip, but rather to the number of interconnected devices. Enabling this fascinating paradigm - which represents true wireless ubiquity - still requires major breakthroughs on a number of fronts. Providing the always-connected abstraction and the reliability needed for many of the intended applications requires a careful balancing of resources that are in high demand: spectrum and energy. The presentation will analyze those challenges, and propose some disruptive solutions that engage the complete stack - from circuit to system. Addressing these concerns is one of the main focus domains of the new EECS Swarm Lab, to be located on the 4th floor of Cory, and to be officially opened on December 6.

Bio: Jan Rabaey received his Ph.D degree in applied sciences from the Katholieke Universiteit Leuven, Belgium. After being connected to UC Berkeley as a Visiting Research Engineer, he was a research manager at IMEC, Belgium. In 1987, he joined the faculty of the Electrical Engineering and Computer Science department of the University of California, Berkeley, where he now holds the Donald O. Pederson Distinguished Professorship. He is currently the scientific co-director of the Berkeley Wireless Research Center (BWRC), as well as the director of the FCRP Multiscale Systems Research Center (MuSyC). He is the recipient of a wide range of awards, amongst which are IEEE Fellow, the 2008 IEEE CAS Society Mac Van Valkenburg Award, and the 2009 European Design Automation Association (EDAA) Lifetime Achievement award. In 2010, he was awarded the prestigious Semiconductor Industry Association (SIA) University Researcher Award. His research interests include the conception and implementation of next-generation integrated wireless systems.


Jan Reineke, UC Berkeley, Monday, October 31st, 2011

Architecture-Parametric Timing Analysis

Interaction of embedded systems with their physical environment often imposes timing constraints on the embedded system's software tasks. A key step in verifying that these constraints are met is to perform worst-case execution time (WCET) analysis of the software tasks. WCET analyses rely on detailed timing models of the execution platform. The development of timing models of modern execution platforms is an extremely time-consuming and error-prone process that furthermore has to be repeated for every new execution platform. We propose to reverse this process: 1. Start by developing a timing model that enables precise and efficient WCET analysis, which can be developed once and for all. 2. Then develop different execution platforms conforming to this model. To allow for a wide range of efficient execution platforms, such a timing model can be parameterized in different architectural aspects, as for instance the sizes of different levels of the memory hierarchy and their respective latencies. In this talk, I will present a class of such parameterized timing models and a precise parametric WCET analysis technique that can be applied to this class of models.
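
To suggest what an architecture-parametric WCET result can look like, the sketch below evaluates a closed-form cycle bound whose free parameters are architectural (scratchpad hits, hit and miss latencies); the program characteristics and the formula itself are invented for illustration and are not the parametric analysis presented in the talk.

# Hedged sketch: a parametric WCET bound for a simple loop, expressed as a
# function of architectural parameters. All names, numbers, and the formula
# are illustrative only, not the analysis from the talk.

def wcet_cycles(iterations, instrs_per_iter, mem_accesses_per_iter,
                scratchpad_hits_per_iter, hit_latency, miss_latency):
    """WCET bound in cycles as a function of memory-hierarchy parameters."""
    misses = mem_accesses_per_iter - scratchpad_hits_per_iter
    per_iter = instrs_per_iter + scratchpad_hits_per_iter * hit_latency \
               + misses * miss_latency
    return iterations * per_iter

# The same (made-up) program evaluated for two candidate platforms that
# conform to the parameterized timing model.
print(wcet_cycles(100, 20, 4, 3, hit_latency=1, miss_latency=30))   # 5300
print(wcet_cycles(100, 20, 4, 2, hit_latency=1, miss_latency=50))   # 12200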

Bio: Jan Reineke received a Bachelor's degree from the University of Oldenburg in 2003 and a Master's from Saarland University in 2005, both in Computer Science. In 2008, he defended his Ph.D. thesis on "Caches in WCET Analysis" at Saarland University. Since 2009, he has been a postdoctoral scholar at the University of California, Berkeley in the group of Edward A. Lee. His research interests include timing predictability with a focus on the memory hierarchy, WCET analysis, and static analysis by abstract interpretation, in particular cache and shape analysis.


Arkadeb Ghosal, National Instruments, Monday, November 7th, 2011

From Streaming Models to Hardware Implementations

We present a graphical programming environment, DSP Designer, for hardware design and motivate key challenges for efficient synthesis and implementation. DSP Designer aims to implement applications specified in streaming models of computation, such as Static and Cyclo-Static Dataflow, on hardware targets, such as FPGAs. Prior studies have shown the effectiveness of these models for specifying multi-rate streaming applications. However, the focus of these works has primarily been on implementations for processor targets. Hardware targets bring forth new challenges related to actor definition, IP integration, and synthesis optimizations. DSP Designer specializes common dataflow models to make them suitable for hardware targets. The framework extends dataflow actor definitions to facilitate component integration and interface synthesis in hardware. The back end supports analysis methods for resource allocation, memory optimization, and scheduler generation. The front end is an interactive graphical interface to enable application design in an intuitive manner. The objective is to deliver a rigorous design and exploration framework that empowers application domain experts to become hardware designers. In this talk, we highlight key concepts underlying DSP Designer, demonstrate preliminary capabilities for exploration and implementation using practical applications, and discuss open challenges of interest to the research community. We also summarize future directions to support heterogeneous multi-target platforms, enable more expressive models of computation that capture control along with dataflow, and include formal timing specifications.
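
One flavor of the static analysis such a back end relies on is solving the SDF balance equations for a repetitions vector; the minimal sketch below does this for an invented three-actor chain and is not DSP Designer code.

# Minimal sketch: repetitions vector of a Static Dataflow (SDF) chain by
# solving the balance equations prod(A)*r(A) = cons(B)*r(B) on every edge.
# The three-actor graph below is invented for illustration.
# Requires Python 3.9+ for math.lcm.
from fractions import Fraction
from math import lcm

# Edges of a chain A -> B -> C as (src, dst, tokens produced, tokens consumed).
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]

rates = {"A": Fraction(1)}            # fix the first actor's rate, then propagate
for src, dst, prod, cons in edges:
    rates[dst] = rates[src] * prod / cons

scale = lcm(*(r.denominator for r in rates.values()))
repetitions = {a: int(r * scale) for a, r in rates.items()}
print(repetitions)                    # {'A': 3, 'B': 2, 'C': 1}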


John Eidson, UC Berkeley, Monday, November 14th, 2011

Clock Synchronization, IEEE 1588 and Recent Applications in Cyber-Physical Systems

This talk will cover the basics of network clock synchronization and how synchronized clocks are currently being applied in many areas of importance in industry and our everyday lives. Clock synchronization will be discussed in the context of IEEE 1588 and will highlight the sources of synchronization error, how these errors are being overcome and the rather remarkable results of recent field trials. The application of synchronized clocks in the field of telecommunications will be covered in some detail. Application examples from the power, industrial automation, and data acquisition industries will also be covered.
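
For background, the arithmetic at the heart of the IEEE 1588 two-way message exchange can be stated in a few lines; the sketch below computes the clock offset and mean path delay from the four exchange timestamps, with made-up example values.

# Sketch of the IEEE 1588 two-way exchange arithmetic.
# t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it
# (t1, t4 in master time; t2, t3 in slave time). Example timestamps are made up.

def offset_and_delay(t1, t2, t3, t4):
    """Return (slave clock offset from master, mean one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Slave clock 1.5 us ahead of the master, symmetric 1 us path delay:
print(offset_and_delay(t1=0.0, t2=2.5e-6, t3=10.0e-6, t4=9.5e-6))
# approximately (1.5e-06, 1e-06)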

Bio: John C. Eidson received his BS and MS degrees from Michigan State University and his Ph.D. degree from Stanford University, all in electrical engineering. He retired in 2009 after a career in the central research laboratories of Varian Associates, Hewlett-Packard and, most recently, Agilent Technologies. He is the chairperson of the IEEE 1588 standards committee. He is a life fellow of the IEEE, the recipient of the 2007 Technical Award of the IEEE I&M Society, and a co-recipient of the 2007 Agilent Laboratories Barney Oliver Award for Innovation. He is currently a visiting scholar in the PTIDES group at the University of California at Berkeley and is interested in the application of synchronized clocks to cyber-physical systems.


Timothy Bretl, University of Illinois, Monday, November 28th, 2011

Mechanics and Manipulation of Elastic Kinematic Chains

Consider a flexible wire of fixed length that is held at each end by a robot arm. The curve traced by this wire can be described as the solution to an optimal control problem, with boundary conditions that vary with the position and orientation of each robot. The set of all solutions to this problem is the configuration space of the wire under quasi-static manipulation. I will show that this configuration space is a smooth manifold of finite dimension that can be parameterized by a single chart. Working in this chart --- rather than in the space of boundary conditions --- makes the problem of manipulation planning very easy to solve. I will discuss the reasons why and will consider the application of similar ideas in other contexts, for example inference of human intent based on control-theoretic models of motor function.

Bio: Timothy Bretl received his B.S. in Engineering and B.A. in Mathematics from Swarthmore College in 1999, and his M.S. in 2000 and Ph.D. in 2005 both in Aeronautics and Astronautics from Stanford University. Subsequently, he was a Postdoctoral Fellow in the Department of Computer Science, also at Stanford University. Since 2006, he has been with the University of Illinois at Urbana-Champaign, where he is an Assistant Professor of Aerospace Engineering and a Research Assistant Professor in the Coordinated Science Laboratory. His current interests are at the intersection of robotics and neuroscience.

Spring 2012 Seminars

David Broman, Linköping University, Sweden, UC Berkeley

Predictable Timing of Cyber-Physical Systems - Future Research Challenges

Abstract: Cyber-physical systems (CPS) form an emerging research field that addresses challenges with systems that combine computation, networking, and physical processes. The concept of time is an inherent property of such a system. Missed deadlines for hard real-time applications, such as avionics and control systems in automobiles, can result in devastating life-threatening consequences. Hence, timing predictability for CPS is a correctness criterion, not a quality factor. Extensive research has been performed on defining high-level modeling languages where time is a fundamental concept (e.g., Ptolemy II, Modelica, Simulink). Moreover, a new category of processors called precision timed (PRET) machines is currently being developed that are both predictable and repeatable regarding time. However, there is a semantic gap between high-level languages and PRET machines. In this talk we present research challenges for a new research project in the Ptolemy group, with the objective of bridging this gap. The aim is to establish a new formal foundation of timing predictability for the semantics of correct translation/compilation from high-level CPS modeling languages down to machine code for PRET machines. The research includes formal proofs of correctness by utilizing computer-based proof assistants as well as implementation of a prototype compiler for practical testing and evaluation.

Bio: David Broman is a visiting scholar at UC Berkeley, working in the Ptolemy group at the Electrical Engineering & Computer Science department. He is an assistant professor at Linköping University in Sweden, where he also received his PhD in computer science in 2010. David is part of the design group developing the Modelica language and he is the main designer of the Modeling Kernel Language (MKL), a domain-specific host language for modeling cyber-physical systems. David's research interests include programming and modeling language theory, software engineering, and mathematical modeling and simulation of cyber-physical systems. He has worked five years within the software security industry and is a member of the Modelica Association.


Adam Cataldo, LinkedIn

Quick Deploy: A distributed systems approach to developer productivity

Abstract: This talk is all about a technical effort I've led to make LinkedIn's engineers more productive. I'll start with an overview of the LinkedIn architecture and how engineers have traditionally gone about making changes. I'll then give an explanation of what Quick Deploy is, and how it speeds up the development cycle for our engineers. This was a big project, and we ran into many gotchas along the way. I'll explain some of the more interesting problems we had to solve, and explain some of the lessons we've learned that might be worth considering when you go build the next big viral technology.

Bio: Adam Cataldo works on the infrastructure to keep LinkedIn scaling as it continues to grow. He's an EECS alumnus (PhD 2006 in Edward Lee's group), and he's excited to come back and talk about the cool stuff he's been doing in industry.


Per Ljung, Nokia
Opportunities for Energy Savings in Mobile Devices

Abstract: Tomorrow's promise of mobile devices is always-on computation and communication to enrich our lives. Some of these always-on apps might include: better awareness with augmented reality, individualized suggestions from personal agents, better health with caloric and fitness tracking, multi-media life-blogging, and immersive user interfaces. But there is a slight problem. The battery life of today's mobile devices is typically a day or so -- if we're lucky. Modern handsets typically have a 5Wh battery and use about 0.5W for the display, 1W for the radio, and 1W for the CPU. In the worst case this is only 2h of battery life. To enable a full day's use with always-on apps, we need to provide for at least a 10x increase in energy efficiency. This talk will review opportunities in energy sources and sinks for mobile devices, including batteries, displays, communication and computation subsystems, to identify several possible 2x, 10x and 100x improvements. The Nokia Research Berkeley lab is optimistic, and we are currently prototyping mobile devices with dramatic improvements in energy efficiency.
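
The back-of-the-envelope battery arithmetic quoted above can be reproduced directly; the short worked example below just re-does that calculation with the numbers from the abstract.

# Worked example of the battery-life arithmetic quoted in the abstract.
battery_wh = 5.0                                      # handset battery capacity (Wh)
load_w = {"display": 0.5, "radio": 1.0, "cpu": 1.0}   # always-on worst case (W)

total_w = sum(load_w.values())
hours = battery_wh / total_w
print(f"{hours:.1f} h of battery life at {total_w} W")          # 2.0 h at 2.5 W

# A 10x improvement in energy efficiency would be needed for a full day:
print(f"{battery_wh / (total_w / 10):.0f} h with a 10x efficiency gain")  # 20 h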

Bio: Per Ljung is the team lead of Performance Efficient Mobile Platforms (PEMP) at Nokia Research in Berkeley. He was the founder of two startups working on Darpa funded projects, first with 3D coupled field CAD using accelerated multipole boundary elements and later with high-level correct-by-construction synthesis of synchronous and asynchronous hw/sw systems. Ljung has a PhD from UC Berkeley and a MSc from Kungliga Tekniska Högskolan in Stockholm Sweden.


Stephen Weston, JP Morgan
Why discrete is faster than continuous in computational finance

Abstract: This talk reports results from an ongoing project that uses discrete mathematics across multiple asset classes to create dataflow computational engines employing fine-grained parallelism to accelerate complex financial models for trading and risk management.

Bio: Stephen currently heads a group called Applied Analytics within the investment banking division of JP Morgan. The purpose of the group is to accelerate analytic models for trading and risk management at JP Morgan. Prior to his current position he ran the credit hybrids Quantitative Research group in JP Morgan. Before joining JP Morgan Stephen had roles in risk management and analytics at Deutsche Bank, Credit Suisse, UBS and Barclays. Prior to investment banking, Stephen was an academic and taught economics, finance and quantitative methods. Stephen holds a PhD in finance from Cass Business School in London.


Anil Aswani, UC Berkeley
Improving Efficiency and Performance of Partially Engineered Systems Using Statistical Control Theory

Abstract: Partially engineered systems feature interactions between designed and undesigned components, and they are often found in the energy and healthcare domains. Achieving high efficiency and performance in such systems is challenging because of the difficulty in identifying accurate models. For instance, heating, ventilation, and air-conditioning (HVAC) modulates building environment to ensure occupant comfort. And though HVAC can be described by simple physics-based processes, the impact of building occupants makes it difficult to create models for energy-efficient HVAC control. This talk provides examples of two new techniques that rigorously combine statistical methods with control engineering, for the purpose of identification, analysis, and control of partially engineered systems for which models are not well known a priori. The first is a regression technique for identifying linear or local-linear models of systems, which can better reduce noise when the measurements have either manifold or collinear structure by leveraging theory from differential geometry. The second is a control method called learning-based model predictive control (LBMPC) that merges statistical methods into a control framework---allowing statistics to improve performance through identification of better system models, while providing theoretical guarantees on the robustness and stability of the closed-loop control. The improvements in modeling and control possible with these techniques are illustrated with applications to energy-efficient HVAC systems and high-performance control of semi-autonomous systems. Potential applications to the design of new medical treatments for cancer are also described.

Bio: Anil Aswani is currently a postdoctoral researcher in Electrical Engineering and Computer Sciences (EECS) at UC Berkeley. He received his Ph.D. in 2010 from UC Berkeley in EECS, with a Designated Emphasis in Computational and Genomic Biology. In addition, he completed an M.S. in 2007 from UC Berkeley in EECS and a B.S. in Electrical Engineering from the University of Michigan, in Ann Arbor, in 2005.


Murat Senel, Bosch
Wi-Fi Enabled Sensors for Internet of Things: A Practical Approach

Abstract: The vision of the Internet of Things (IoT) calls for connectivity not only to consumer electronics and home appliances, but also to small battery-powered devices which cannot be recharged. Such small devices, often various types of sensors and actuators, are required to sustain reliable operation for years on batteries even in the presence of heavy interference. The IEEE 802.11 standard has established itself as one of the most popular wireless technologies offering connectivity. For many years, Wi-Fi technology has not been considered as an option for low-power wireless sensing applications because it was not designed for energy efficiency. However, multiple companies have recently developed power-efficient Wi-Fi components with appropriate system design and usage models targeting IoT. Low-power Wi-Fi provides a significant improvement over typical Wi-Fi in terms of power consumption, and battery lifetime heavily depends on the operating scenario. Interference studies show that neither in-network nor out-of-network interference prevents reliable communication of the sensor devices. The communication range is directly related to the link data rate. In a typical residential building, a single AP operating at 1Mbps, even if not installed in an optimal location, can provide full coverage for all potential sensor locations. This is joint work between Serbulent Tozlu, Murat Senel, Abtin Keshavarzian and Wei Mao.

Bio: Serbulent Tozlu graduated from Istanbul University as an Electronics Engineer. He then received his M.Sc. degree specializing in Computer Networks from the Department of Electrical Engineering at the University of Southern California in 2000. After graduation, he started his engineering career as a Research Engineer at the Bosch Research & Technology Center located in Palo Alto, CA. As a researcher, he gained several years of experience in topics such as embedded systems, in-car networking, GPS systems, and PIR/microwave-based intrusion detection systems. Later, as a member of the Wireless group within Bosch Research, he worked on low-power wireless protocols for residential/commercial building environments, specializing in the field of Internet of Things / Web Enabled Sensors with a focus on low-power Wi-Fi systems and 6LoWPAN technology. As of February 2012, he is responsible for IP Services Analysis at Turk Telekom. Murat Senel received his PhD degree in Electrical and Computer Engineering from Purdue University, West Lafayette, IN, and B.Sc. degrees in Electrical Engineering and Industrial Engineering from Bogazici University, Turkey. In 2008, he joined the Bosch Research & Technology Center, where he works on an R&D project on wireless condition monitoring for industrial applications. His research interests are in the area of wireless communication; in particular, he is interested in integrating wireless sensor networks into the Internet.


Andrew Tinka, UC Berkeley
Actuated Mobile Sensing in Distributed, Unstructured Environments

Abstract: Actuated mobile sensing in large-scale systems is undergoing a revolution driven by the same technologies which enable smartphones: highly available connectivity, gigahertz-scale embedded processors, and low power electronics. The Floating Sensor Network project at UC Berkeley has developed a fleet of 100 lightly actuated, inexpensive robotic sensors for operations in river and estuary environments. In this talk, I will describe my work as the founding member of the Floating Sensor Network project to design and build both the fleet and the streaming data analytics system required for its successful operation. The research presented includes contributions in systems design, data analysis, planning, and control. The Floating Sensor Network devices incorporate GPS, GSM, 802.15.4 radios, and embedded computer modules to gather data from the environment. The streamed data is integrated into a Partial Differential Equation (PDE) model of shallow water flows, using a tractable quadratic programming formulation. A Zermelo-Voronoi framework is used to drive a decentralized coverage-based gradient descent algorithm for fleet planning using the estimated flow fields. Individual sensors use an optimal control scheme based on the Hamilton-Jacobi-Bellman PDE to achieve their position targets while guaranteeing a safe distance from obstructions in the environment under bounded disturbances. Experimental deployments of the Floating Sensor Network include the Sacramento/San Joaquin Delta and the San Francisco Bay, as well as a disaster response exercise in Stillwater, Oklahoma in cooperation with the US Army Corps of Engineers and the Department of Homeland Security. Applications of the Floating Sensor Network for disaster response, contamination tracing, and estuarial hydrodynamics studies will be discussed.

Bio: Andrew Tinka received the Bachelor of Applied Science in Engineering Physics from the University of British Columbia in 2002. He worked at Powis Parker, Inc. and the Center for Collaborative Control of Unmanned Vehicles for four years as an embedded systems engineer. He earned an M.S. degree in Civil and Environmental Engineering at UC Berkeley and is currently a PhD candidate in Electrical Engineering at UC Berkeley. Andrew Tinka is the recipient of NASA LAUNCH’s “Top Ten Innovators in Water” award and the Center for Entrepreneurship and Technology’s Cleantech Innovation Award for his work on the Floating Sensor Network. His research interests include applied control theory and optimization for actuated sensors in distributed sensing and data analytics problems.


Liangpeng Guo, UC Berkeley
Methods and Tools for Calculating the Flexibility of Automotive HW/SW Architectures

Abstract: To cope with the increasing number of advanced features (e.g., smart-phone integration and side blind zone alert) being deployed in vehicles, automotive manufacturers are designing flexible hardware architectures which can accommodate increasing feature content with as few hardware changes as possible, so as to keep future costs down. In this talk, we propose a formal and quantitative definition of flexibility, a related methodology, and a tool flow aimed at maximizing the flexibility of an automotive hardware architecture with respect to the features that are of greater importance to the designer. We define flexibility as the ability of an architecture to accommodate future changes in features with no changes in hardware (no addition/replacement of processors, buses, or memories). We utilize an optimization framework based on mixed integer linear programming (MILP) which computes the flexibility of the architecture while guaranteeing performance and safety requirements.
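
The talk formulates flexibility via an MILP; purely as an illustration of the underlying idea, the toy sketch below counts (by brute-force enumeration, not MILP) how many hypothetical future feature sets a fixed two-ECU architecture could host without hardware changes. All names, capacities, and feature loads are invented.

# Toy sketch of the flexibility idea: count how many future feature sets a
# fixed hardware architecture can accommodate with no hardware changes.
# The real work uses an MILP formulation with performance and safety
# constraints; this brute-force version and all numbers are invented.
from itertools import product

ecu_capacity = {"ecu1": 100, "ecu2": 50}    # available CPU utilization (%)
features = {"smartphone_integration": 30, "blind_zone_alert": 45, "park_assist": 80}

def fits(feature_set):
    """True if some mapping of the features onto the ECUs respects capacities."""
    names = list(feature_set)
    for mapping in product(ecu_capacity, repeat=len(names)):
        load = {e: 0 for e in ecu_capacity}
        for f, e in zip(names, mapping):
            load[e] += features[f]
        if all(load[e] <= ecu_capacity[e] for e in ecu_capacity):
            return True
    return False

future_sets = [set(), {"smartphone_integration"},
               {"smartphone_integration", "blind_zone_alert"},
               {"smartphone_integration", "blind_zone_alert", "park_assist"}]
flexibility = sum(fits(s) for s in future_sets) / len(future_sets)
print(f"flexibility = {flexibility:.2f}")   # 0.75: fraction of future sets accommodated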

Bio: Liangpeng Guo is a graduate student at UC Berkeley. He worked for the GM Advanced Technology division as an intern. His research interests are system-level modeling and evaluation of complexity, flexibility, and scalability for electronic control systems (ECS).


Isaac Liu, UC Berkeley
Precision Timed (PRET) Machines

Abstract: Cyber-Physical Systems (CPS) are integrations of computation with physical processes. A number of applications can benefit from the potential of CPS. However, these systems must be equipped to handle the inherent concurrency and inexorable passage of time of physical processes. The traditional computing abstractions only concern themselves with the functional aspects of a program, and not its timing properties. Thus, nearly every abstraction layer has failed to incorporate time into its semantics; the passage of time is merely a consequence of the implementation. When the temporal properties of the system must be guaranteed, designers must reach beneath the abstraction layers. This not only increases the design complexity and effort, but the designed systems are brittle and extremely sensitive to change. In this work, we re-examine the ISA layer and its effects on microarchitecture design. The ISA defines the contract between software instructions and hardware implementations. However, modern ISAs do not specify timing properties of the instructions as part of the contract; thus, architecture designs have largely implemented techniques that improve average performance at the expense of execution time variability. This leads to imprecise WCET bounds that limit the timing predictability and timing composability of architectures. In order to address the lack of temporal semantics in the ISA, we propose instruction extensions to the ISA that give temporal meaning to the program. The instruction extensions allow programs to specify execution time properties in software that must be observed for any correct execution of the program. In addition, we present the Precision Timed ARM (PTARM) architecture, a realization of Precision Timed (PRET) machines that provides timing predictability and composability without sacrificing performance. PTARM employs a predictable thread-interleaved pipeline with an exposed memory hierarchy that uses scratchpads and a predictable DRAM controller. This removes timing interference amongst the hardware threads, enabling timing composability in the architecture, and provides deterministic execution times for all instructions within the architecture, enabling timing predictability in the architecture. We show that the predictable thread-interleaved pipeline and DRAM controller also achieve better throughput compared to conventional architectures when fully utilized, accomplishing our goal to provide both predictability and performance. To show the applicability of the architecture, we present two applications implemented with the PRET architecture that utilize the predictable execution time and the timing-extended ISA to achieve their design requirements. With this work, we aim to provide a deterministic foundation for higher abstraction layers, which enables more efficient designs of safety-critical cyber-physical systems.
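
To illustrate the timing-composability argument behind thread interleaving (and not the actual PTARM microarchitecture), here is a toy sketch in which each hardware thread owns every N-th pipeline slot, so its instruction timing is independent of what the other threads do.

# Toy sketch of a thread-interleaved pipeline: N hardware threads each own
# every N-th cycle, so the latency between two instructions of the same thread
# is a fixed N cycles regardless of the other threads. Illustration of the
# composability idea only; not a model of PTARM.

N_THREADS = 4

def issue_cycles(thread_id, n_instructions, first_cycle=0):
    """Cycles at which a given hardware thread issues its instructions."""
    start = first_cycle + thread_id
    return [start + i * N_THREADS for i in range(n_instructions)]

for t in range(N_THREADS):
    print(t, issue_cycles(t, 5))
# Thread 0 issues at cycles [0, 4, 8, 12, 16], thread 1 at [1, 5, 9, 13, 17], ...
# so each thread's inter-instruction latency is exactly N_THREADS cycles.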

Bio: Isaac Liu is a graduating Ph.D candidate at UC Berkeley. His advisor is Edward A. Lee, and his research interests are real-time embedded systems and computer architectures. He obtained his B.S. degree in Computer Engineering at UC Santa Barbara, and has previously had internships at Microsoft, NVIDIA, and National Instruments.


John Kubiatowicz, UC Berkeley
Reinventing Operating Systems for the Manycore Ubiquitous Swarm

Abstract: The brave new world of ubiquitous manycore computing systems leads us to reexamine the design of operating systems in the pursuit of responsiveness, realtime guarantees, power efficiency, security, and correctness. In this talk, I will argue that "two-level scheduling" -- the explicit separation of resource allocation and use -- permits an easier framework in which to construct adaptable systems that can provide guaranteed behavior. I will talk about a new resource-aware OS, called "Tessellation", that embodies the notion of "two-level scheduling". I will introduce a new OS primitive, called a "Cell", which is the basic unit of isolation, protection, and scheduling. Cells contain guaranteed fractions of system resources, including gang-scheduled groups of processors, as well as guaranteed fractions of resources such as caches and memory bandwidth. I will describe "Pacora," a framework for adapting Cell behavior based on user policies, and describe how Cells provide optimal resource usage through custom user-level schedulers. Ultimately, I'll describe a vision of a hierarchical resource-allocation architecture that spans all levels from the local (person-area networks) to the Cloud.

Bio: John Kubiatowicz is a Professor of EECS at the University of California at Berkeley. Prof. Kubiatowicz received a dual B.S. in Physics and Electrical Engineering (1987), as well as an MS in EECS (1993) and PhD in EECS (1998), all from MIT. Kubiatowicz was chosen as one of Scientific American's top 50 researchers in 2002, one of US News and World Report's "people to watch for 2004", and is the recipient of an NSF PECASE award (2000). Professor Kubiatowicz was also co-founder of the $3M DARPA QUIST Quantum Architecture Research Center, which received the DARPATech Most Significant Technical Achievement award in 2002. Kubiatowicz's research interests include manycore Operating Systems, multiprocessor and manycore CPU designs, Internet-scale distributed systems, and quantum computing design tools and architectures.


Ben Lickly, UC Berkeley
Static Model Analysis with Lattice-based Ontologies

Abstract: This thesis demonstrates a correct, scalable, and automated method to infer semantic concepts using lattice-based ontologies, given relatively few manual annotations. Semantic concepts and their relationships are formalized as a lattice, and relationships within and between program elements are expressed as a set of constraints. Our inference engine automatically infers concepts wherever they are not explicitly specified. Our approach is general, in that our framework is agnostic to the semantic meaning of the ontologies that it uses. Where practical use-cases and principled theory exist, we provide for the expression of infinite ontologies and ontology compositions. We also show how these features can be used to express value-parameterized concepts and structured data types. In order to help find the source of errors, we also present a novel approach to debugging by showing simplified error paths. These are minimal subsets of the constraints that fail to type-check, and are much more useful than previous approaches in finding the cause of program bugs. We also present examples of how this analysis tool can be used to express analyses of abstract interpretation; physical dimensions and units; constant propagation; and checks of the monotonicity of expressions.
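
Constant propagation, mentioned at the end of the abstract, gives a feel for how a lattice-based inference converges to a fixed point; the generic sketch below only illustrates that style of analysis and is not the thesis tool. The three-element lattice and the toy model are invented.

# Minimal sketch of lattice-based inference in the style of constant propagation.
# Lattice elements: "unknown" (bottom), a concrete constant, or "non-const" (top).
# The tiny dataflow "model" and its constraints are invented for illustration.

BOTTOM, TOP = "unknown", "non-const"

def join(a, b):
    """Least upper bound of two lattice elements."""
    if a == BOTTOM: return b
    if b == BOTTOM: return a
    return a if a == b else TOP

def add(a, b):
    """Abstract addition over the lattice."""
    if BOTTOM in (a, b): return BOTTOM
    if TOP in (a, b): return TOP
    return a + b

# Concepts for ports of a tiny model: const1 -> (+) <- const2 ; (+) -> out
annotations = {"const1": 3, "const2": 4}          # the few manual annotations
concepts = {p: BOTTOM for p in ["const1", "const2", "adder_out", "out"]}
concepts.update(annotations)

changed = True
while changed:                                    # propagate to a fixed point
    changed = False
    for port, new in [("adder_out", add(concepts["const1"], concepts["const2"])),
                      ("out", concepts["adder_out"])]:
        merged = join(concepts[port], new)
        if merged != concepts[port]:
            concepts[port], changed = merged, True

print(concepts)   # {'const1': 3, 'const2': 4, 'adder_out': 7, 'out': 7}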

Bio: Ben Lickly is a graduating Ph.D candidate at UC Berkeley. His advisor is Edward A. Lee, and his research interests include static analysis, concurrency models, programming languages, and compiler optimizations. His B.S. degree is from Harvey Mudd College, and he has interned at JPL, Google, and Seoul National University.


Stéphane Lafortune, University of Michigan
Eliminating Concurrency Bugs with Control Engineering

Abstract: We present a control engineering approach to the elimination of certain classes of concurrency bugs in concurrent software. Based on a model of the source code extracted at compile time and represented in the form of a Petri net, control techniques from the field of discrete event systems are employed to analyze the behavior of the concurrent program. The property of deadlock-freedom of the program is mapped to that of liveness of its Petri net model, called a Gadara net. A new methodology for Iterative COntrol of Gadara nets (ICOG) is developed for synthesizing control logic that is liveness-enforcing and maximally permissive. ICOG relies upon the technique of Supervision Based on Place Invariants developed for Petri nets subject to linear inequality constraints. The synthesized control logic, in the form of monitor places, is then implemented by program instrumentation. At run-time, the control logic will provably enforce deadlock-freedom of the program without altering its control flow. The results presented pertain to the case of circular-wait deadlocks in multithreaded programs employing mutual exclusion locks for shared data. However, the methodology can be applied to tackle other classes of concurrency bugs. This is collaborative work with the members of the Gadara team: http://gadara.eecs.umich.edu

Bio: Stéphane Lafortune is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan. He obtained his degrees from Ecole Polytechnique de Montreal (B.Eng), McGill University (M.Eng), and the University of California at Berkeley (PhD), all in electrical engineering. He is co-author of the textbook "Introduction to Discrete Event Systems" (Springer, 2008).


Christoph Kirsch, University of Salzburg, Austria
Incorrect Systems: It's not the Problem, It's the Solution

Abstract: We present a brief overview of state-of-the-art work in the engineering of digital systems (hardware and software) where traditional correctness requirements are relaxed, usually for higher performance and lower resource consumption but possibly also for other non-functional properties such as more robustness and less cost. The work presented here is categorized into work that involves just hardware, hardware and software, and just software. In particular, we discuss work on probabilistic and approximate design of processors, unreliable cores in asymmetric multi-core architectures, best-effort computing, stochastic processors, accuracy-aware program transformations, and relaxed concurrent data structures. As a common theme we identify, at least intuitively, “metrics of correctness” which appear to be important for understanding the effects of relaxed correctness requirements and their relationship to performance improvements and resource consumption. The focus of the talk will be on relaxed concurrent FIFO queues that outperform and outscale existing algorithms on state-of-the-art multicore hardware. This is joint work with Michael Lippautz, Hannes Payer, and Ana Sokolova.
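
To convey what "relaxing" a FIFO queue can mean, the toy sketch below spreads elements over several internal FIFO segments and dequeues from a randomly chosen non-empty one; it only illustrates the idea of trading strict ordering for scalability and is not one of the lock-free, bounded-relaxation algorithms from the talk.

# Toy relaxed FIFO: elements are spread over several internal FIFOs and
# dequeued from a randomly chosen non-empty one, so ordering is only
# approximately FIFO. Real relaxed queues bound the reordering and are
# designed for multicore scalability; this sequential toy does neither.
import random
from collections import deque

class RelaxedQueue:
    def __init__(self, segments=4):
        self.queues = [deque() for _ in range(segments)]
        self.next_in = 0

    def enqueue(self, item):
        self.queues[self.next_in].append(item)        # round-robin insertion
        self.next_in = (self.next_in + 1) % len(self.queues)

    def dequeue(self):
        candidates = [q for q in self.queues if q]
        if not candidates:
            raise IndexError("empty relaxed queue")
        return random.choice(candidates).popleft()    # relaxed: not strictly FIFO

q = RelaxedQueue()
for i in range(8):
    q.enqueue(i)
print([q.dequeue() for _ in range(8)])   # e.g. [1, 0, 3, 2, 4, 6, 5, 7]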

Bio: Christoph Kirsch is a full professor and holds a chair at the Department of Computer Sciences of the University of Salzburg, Austria. Since 2008 he has also been a visiting scholar at the Department of Civil and Environmental Engineering of the University of California, Berkeley. He received his Dr.Ing. degree from Saarland University, Saarbruecken, Germany, in 1999 while at the Max Planck Institute for Computer Science. From 1999 to 2004 he worked as a Postdoctoral Researcher at the Department of Electrical Engineering and Computer Sciences of the University of California, Berkeley. His research interests are in concurrent programming and systems, virtual execution environments, and embedded software. Dr. Kirsch co-invented the Giotto and HTL languages, and leads the JAviator UAV project for which he received an IBM faculty award in 2007. He co-founded the International Conference on Embedded Software (EMSOFT), was elected ACM SIGBED chair in 2011, and is currently an associate editor of ACM TODAES.


Alain Girault, INRIA, France

Tradeoff exploration between reliability, power consumption, and execution time for embedded systems

Abstract: For autonomous critical real-time embedded systems (e.g., satellites), guaranteeing a very high level of reliability is as important as keeping the power consumption as low as possible. We propose an off-line scheduling heuristic which, from a given software application graph and a given multiprocessor architecture (homogeneous and fully connected), produces a static multiprocessor schedule that optimizes three criteria: its length (crucial for real-time systems), its reliability (crucial for dependable systems), and its power consumption (crucial for autonomous systems). Our tricriteria scheduling heuristic, called TSH, uses active replication of the operations and the data-dependencies to increase the reliability, and uses dynamic voltage and frequency scaling to lower the power consumption. We provide extensive simulation results to show how TSH behaves in practice: first, we run TSH on a single instance to provide the whole Pareto front in 3D; second, we compare TSH against the ECS heuristic (Energy-Conscious Scheduling) from the literature; and third, we compare TSH against an optimal Mixed Integer Linear Program.
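
Since TSH exposes a three-way tradeoff, a natural post-processing step is Pareto filtering of candidate schedules; the sketch below shows such a filter over made-up (length, failure probability, energy) triples, with reliability expressed as a failure probability so that all three criteria are minimized. This is an illustration only, not the TSH heuristic itself.

# Sketch of Pareto filtering over tricriteria schedules. Each candidate is a
# tuple (length, failure_probability, energy), all to be minimized; the
# candidate values are invented.

def dominates(a, b):
    """a dominates b if it is no worse in every criterion and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

schedules = [(10.0, 1e-5, 5.0), (12.0, 1e-6, 5.5), (11.0, 1e-5, 4.0),
             (13.0, 1e-5, 6.0)]                 # the last one is dominated
print(pareto_front(schedules))
# [(10.0, 1e-05, 5.0), (12.0, 1e-06, 5.5), (11.0, 1e-05, 4.0)]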

Bio: Alain Girault holds a senior research fellow position at INRIA, the French National Institute for Research in Computer Science and Control. He received his PhD from the National Polytechnic Institute of Grenoble in January 1994 after doing his research in the Verimag laboratory. He was a postdoctoral researcher in the ESTEREL team at INRIA Sophia-Antipolis in 1995, then in the PTOLEMY group at UC Berkeley in 1996, and in the PATH project at UC Berkeley in 1997. In 2008, he was a visiting scholar in the Department of Electrical and Computer Engineering of the University of Auckland, New Zealand. Since 2005, he has been head of the POP ART team at INRIA Grenoble Rhône-Alpes, which focuses on formal methods for embedded systems. His research interests include the design of reactive systems, with a special concern for distributed implementation, fault-tolerance, reliability, low power, discrete controller synthesis, and data-flow programming.



Fall 2012 Seminars

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Fall 2012

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays from 4.10-5.00 p.m. in the DOP Center Classroom, 540 Cory Hall or in the open area in the DOP Center.

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact David Broman.

Past seminars of this semester are available here. Seminars from previous semesters can be found here.

Past Seminars


Fast-Lipschitz Optimization

Sep 11, 2012, 4.10-5pm, Carlo Fischione, KTH Royal Institute of Technology, Sweden

[Slides]

Abstract: In many optimization problems, decision variables must be computed by algorithms that need to be fast, simple, and robust to errors and noises, both in a centralized and in a distributed set-up. This occurs, for example, in contract-based design, sensor networks, smart grids, water distribution, and vehicular networks. In this seminar, a new simple optimization theory, named Fast-Lipschitz optimization, is presented for a novel class of both convex and non-convex scalar and multi-objective optimization problems that are pervasive in the systems mentioned above. Fast-Lipschitz optimization can be applied to both centralized and distributed optimization. Fast-Lipschitz optimization solvers exhibit a low computational and communication complexity when compared to existing solution methods. In particular, compared to traditional Lagrangian methods, which often converge linearly, the convergence of centralized Fast-Lipschitz algorithms is superlinear. Distributed Fast-Lipschitz algorithms converge fast, as opposed to traditional Lagrangian decomposition and parallelization methods, which generally converge slowly and at the price of many message passings among the nodes. In both cases, the computational complexity is much lower than that of traditional Lagrangian methods. Fast-Lipschitz optimization is then illustrated by distributed estimation and detection applications in wireless sensor networks.
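
The computational core of Fast-Lipschitz optimization is that, for qualifying problems, the optimum coincides with the unique fixed point of the optimality conditions x = f(x), which can be found by simple (and naturally distributable) iteration. The contractive map below is made up purely to illustrate that iteration, and is not from the talk.

# Sketch of the fixed-point iteration at the heart of Fast-Lipschitz
# optimization: when the problem qualifies, iterate x <- f(x) until
# convergence. The toy contractive map f is invented for illustration.
import math

def f(x):
    """Each component updates from another component; a toy contractive map."""
    return [0.5 * math.cos(x[(i + 1) % len(x)]) + 1.0 for i in range(len(x))]

x = [0.0, 0.0, 0.0]
for _ in range(50):                 # fixed-point iteration x <- f(x)
    x_next = f(x)
    if max(abs(a - b) for a, b in zip(x, x_next)) < 1e-12:
        break
    x = x_next
print([round(v, 6) for v in x])     # all components converge to the fixed point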

Bio: Dr. Carlo Fischione is a tenured Associate Professor at KTH Royal Institute of Technology, Electrical Engineering and ACCESS Linnaeus Center, Automatic Control Lab, Stockholm, Sweden. He received the Ph.D. degree in Electrical and Information Engineering in May 2005 from the University of L'Aquila, Italy, and the Dr.Eng. degree in Electronic Engineering (Laurea, Summa cum Laude, 5/5 years) in April 2001 from the same university. He held research positions at the University of California at Berkeley, Berkeley, CA (2004-2005, Visiting Scholar, and 2007-2008, Research Associate) and the Royal Institute of Technology, Stockholm, Sweden (2005-2007, Research Associate). His research interests include optimization and parallel computation with applications to wireless sensor networks, networked control systems, and wireless networks. He has co-authored over 80 publications, including a book, book chapters, international journals and conferences, and an international patent. He has received numerous awards, including the best paper award from the IEEE Transactions on Industrial Informatics in 2007, the best paper awards at the IEEE International Conference on Mobile Ad-hoc and Sensor Systems in 2005 and 2009 (IEEE MASS 2005 and IEEE MASS 2009), the Best Business Idea award from VentureCup East Sweden in 2010, the "Ferdinando Filauro" award from the University of L'Aquila, Italy, in 2003, the "Higher Education" award from the Abruzzo Region Government, Italy, in 2004, the Junior Research award from the Swedish Research Council in 2007, and the Silver Ear of Wheat award in history from the Municipality of Tornimparte, Italy, in 2012. He has chaired or served as a technical member of program committees of several international conferences and serves as a referee for technical journals. He has also offered his advice as a consultant to numerous technology companies such as the Berkeley Wireless Sensor Network Lab, Ericsson Research, Synopsys, and United Technologies Research Center. He is a member of the IEEE (the Institute of Electrical and Electronics Engineers) and SIAM (the Society for Industrial and Applied Mathematics), and an ordinary member of DASP (the academy of history Deputazione Abruzzese di Storia Patria).


User interface modelling - Model-based UI design

Sep 18, 2012, 4.10-5pm, Hallvard Traetteberg, Norwegian Univ. of Science and Technology (NTNU), Trondheim, Norway

[Slides]

Abstract: User interface modelling is an established cross-disciplinary field, combining elements from Human-Computer Interaction (HCI), Software Engineering (SE), and Information Systems (IS). This talk will present a conceptual overview of the field based on a classification framework developed in my thesis. Important work will be discussed in the context of this framework. Some time will be devoted to my own dialog modelling language, Diamodl, since it is (coincidentally) based on an actor model similar to Ptolemy's (which is why I'm here), and to how I believe it can be applied in the context of internet-based systems.

Bio: Hallvard Traetteberg is an Associate Professor at the Norwegian Univ. of Science and Technology (NTNU) in Trondheim, Norway, with a PhD in Information Systems. His research interests are model-driven engineering in general, with a focus on user interface modelling and model-based user interface design. He has developed a dialog modelling language called Diamodl and has experience building both graphical and textual syntaxes for it, as well as a runtime, mostly based on Eclipse.


Enclosing Hybrid Behavior

Oct 17, 2012, 4.10-5pm, Walid Taha, Halmstad University, Sweden, and Rice University, USA
(Joint work with Michal Konecny, Jan Durac, and Aaron Ames)

[Slides] [Related paper]

Abstract: Rigorous simulation of hybrid systems relies critically on having a semantics that constructs enclosures. Edalat and Pattinson's work on the domain-theoretic semantics of hybrid systems almost provides what is needed, with two exceptions. First, domain-theoretic methods leave many operational concerns implicit. As a result, the feasibility of practical implementations is not obvious. For example, their semantics appears to rely on repeated interval splitting for state space variables. This can lead to exponential blow up in the cost of the computation. Second, common and even simple hybrid systems exhibit Zeno behaviors. Such behaviors are a practical impediment because they make simulators loop indefinitely. This is in part due to the fact that existing semantics for hybrid systems generally assume that the system is non-Zeno.
The feasibility of reasonable implementations is addressed by specifying the semantics algorithmically. We observe that the amount of interval splitting can be influenced by the representation of function enclosures. Parameterizing the semantics with respect to enclosure representation provides a precise specification of the functionality needed from them, and facilitates studying their performance characteristics. For example, we find that non-constant enclosure representations can alleviate the need for interval splitting on dependent variables.
We address the feasibility of dealing with Zeno systems by taking a fresh look at event detection and localization. The key insight is that computing enclosures for hybrid behaviors over intervals containing multiple events does not necessarily require separating these events in time, even when the number of events is unbounded. In contrast to current methods for dealing with Zeno behaviors, this semantics does not require reformulating the hybrid system model specifically to enable a transition to a post-Zeno state.
The new semantics does not sacrifice the key qualities of the original work, namely, convergence on separable systems.
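
Independently of the specific domain-theoretic semantics discussed above, the flavor of enclosure-based simulation can be conveyed with a toy validated integrator: the sketch below propagates an interval enclosure of dx/dt = -x using a first-order Taylor step plus an interval remainder term, with the remainder bound justified only for this particular toy ODE and non-negative initial intervals.

# Toy enclosure propagation for dx/dt = -x with x(0) in an interval [lo, hi].
# One validated step = first-order Taylor step plus an interval remainder,
# valid here because 0 <= x, x is non-increasing, and x'' = x <= hi on the step.
# This only illustrates the idea of computing enclosures; it is not the
# semantics or tool discussed in the talk.

def step(lo, hi, h):
    """Enclosure of all solutions of dx/dt = -x after time h, starting in [lo, hi]."""
    assert 0 <= lo <= hi and 0 < h <= 1
    return lo * (1 - h), hi * (1 - h) + 0.5 * h * h * hi   # Taylor + remainder

lo, hi, h = 0.9, 1.1, 0.1
for _ in range(10):                 # enclose the state over t in [0, 1]
    lo, hi = step(lo, hi, h)
print(lo, hi)   # brackets the exact values 0.9*exp(-1) ~ 0.331 and 1.1*exp(-1) ~ 0.405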

Bio: Walid Taha is a Professor of Computer Science at Halmstad University. He is interested in the design, semantics, and implementation of programming and hardware description languages. His current research focus is on modeling, simulation, and verification of cyberphysical systems, and in particular the Acumen modeling language.
Taha is credited with developing the idea of multi-stage programming (or "staging" for short), and is the designer of several systems based on it, including MetaOCaml, ConCoqtion, Java Mint, and the Verilog Preprocessor. He contributed to several other programming language innovations, including statically typed macros, tag elimination, tagless staged interpreters, event-driven functional reactive programming (E-FRP), the notion of exact software design, and gradual typing. Broadly construed, his research interests include cyberphysical systems, software engineering, programming languages, and domain-specific languages.
Taha was the principal investigator on a number of research awards and contracts from the National Science Foundation (NSF), the Semiconductor Research Corporation (SRC), and the Texas Advanced Technology Program (ATP). He received an NSF CAREER award to develop Java Mint. He founded the ACM Conference on Generative Programming and Component Engineering (GPCE), the IFIP Working Group on Program Generation (WG 2.11), and the Middle Earth Programming Languages Seminar (MEPLS). Taha chaired the 2009 IFIP Working Conference on Domain Specific Languages.
According to Google Scholar, Taha's publications had over 2,400 citations and an h-index of 26. Prof. Taha holds an Adjunct Professor position at Rice University.


Time and Schedulability analysis of Stateflow models

Oct 23, 2012, 4.10-5pm, Marco Di Natale, Scuola Superiore Sant'Anna of Pisa, Italy

[Slides]

Abstract: Model-based design of embedded systems using Synchronous Reactive (SR) models is among the best practices for software development in the automotive and aeronautics industry. The correct implementation of an SR model must guarantee the synchronous assumption, that is, all the system reactions complete before the next event. This assumption can be verified using schedulability analysis, but the analysis can be quite challenging when the system also consists of blocks implementing finite state machines, as in modern modeling tools like Simulink and SCADE.
In this talk, we discuss the schedulability analysis of such systems, including the applicability of traditional task analysis methods and an algorithmic solution to compute the exact demand and request bound functions.
In addition, we define conditions for computing these functions using a periodic recurrent term, even when there is no cyclic recurrent behavior in the model.
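
For the simpler textbook setting of independent periodic tasks with constrained deadlines, the demand bound function underlying such analyses can be computed directly; the sketch below shows that classical dbf on a made-up task set, not the Stateflow-specific algorithm from the talk.

# Classical demand bound function dbf(t) for independent periodic tasks with
# constrained deadlines: total execution demand that must complete within any
# window of length t. The task set (C, D, T) is made up; the exact analysis
# for Stateflow-style state machines in the talk is more involved.
from math import floor

tasks = [  # (C: worst-case execution time, D: relative deadline, T: period)
    (1.0, 4.0, 5.0),
    (2.0, 8.0, 10.0),
]

def dbf(t):
    return sum(max(0, floor((t - D) / T) + 1) * C for C, D, T in tasks)

def edf_schedulable(horizon, step=1.0):
    """Check on a finite horizon that the demand never exceeds available time."""
    t = step
    while t <= horizon:
        if dbf(t) > t:
            return False
        t += step
    return True

print([dbf(t) for t in (4.0, 8.0, 14.0)])   # [1.0, 3.0, 5.0]
print(edf_schedulable(40.0))                # True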

Bio: Prof. Marco Di Natale is IEEE Senior member and Associate Professor at the Scuola Superiore Sant'Anna of Pisa, Italy, where he was Director of the Real-Time Systems (ReTiS) Lab. He received his PhD from Scuola Superiore Sant'Anna and was a visiting Researcher at the University of California, Berkeley in 2006 and 2008-2009, principal investigator for architecture exploration and selection at General Motors R&D in 2006 and 2007 and is currently visiting fellow for United Technologies Research. He's been a researcher in real-time and embedded systems for more than 15 years, author of more than 130 papers, winner of five best paper awards and two presentation awards. He is also member of the editorial board of the IEEE Transactions on Industrial Informatics and chair for the embedded systems track of the IEEE Industrial Electronics Society.


Beyond the Hill of Multicores lies the Valley of Accelerators

Oct 30, 2012, 4.10-5pm, Aviral Shrivastava, Arizona State University, USA

[Slides]

Abstract: The power wall has resulted in a sharp turn in processor designs, and they irrevocably went multi-core. Multi-cores are good because they promise higher potential throughput (never mind the actual performance of your applications). This is because the cores can be made simpler and run at lower voltage, resulting in much more power-efficient operation. Even though the performance of a single core is much reduced, the total possible throughput of the system scales with the number of cores. However, the excitement of multi-core architectures will only last so long. This is not only because the benefits of voltage scaling will diminish with decreasing voltage, but also because after some point, making a core simpler will only be detrimental and may actually reduce power-efficiency. What next? How do we further improve power-efficiency?
Beyond the hill of multi-cores lies the valley of accelerators. Accelerators -- hardware accelerators (e.g., Intel SSE), software accelerators (e.g., VLIW accelerators), reconfigurable accelerators (e.g., FPGAs), and programmable accelerators (CGRAs) -- are some of the foreseeable solutions that can further improve the power-efficiency of computation. Among these, we find CGRAs, or Coarse Grain Reconfigurable Arrays, a very promising technology. They are slightly reconfigurable (and therefore close to hardware), but are programmable (and therefore usable as more general-purpose accelerators). As a result, they can provide power-efficiencies of up to 100 GOps/W, while being relatively general purpose. Although very promising, several challenges remain in compilation for CGRAs, especially because they have very little dynamism in the architecture, and almost everything (including control) is statically determined. In this talk, I will present our recent research in developing compiler technology to enable CGRAs as general-purpose accelerators.

Bio: Prof. Aviral Shrivastava is Associate Professor in the School of Computing Informatics and Decision Systems Engineering at the Arizona State University, where he has established and heads the Compiler and Microarchitecture Labs (CML) (http://aviral.lab.asu.edu/). He received his Ph.D. and Masters in Information and Computer Science from University of California, Irvine, and bachelors in Computer Science and Engineering from Indian Institute of Technology, Delhi. He is a 2011 NSF CAREER Award Recipient, and recipient of 2012 Outstanding Junior Researcher in CSE at ASU.
His research lies at the intersection of compilers and architectures of embedded and multi-core systems, with the goal of improving power, performance, temperature, energy, reliability, and robustness. His research is funded by NSF and several companies, including Microsoft, Raytheon Missile Systems, Intel, and Nvidia. He serves on the organizing and program committees of several premier embedded system conferences, including ISLPED, CODES+ISSS, CASES and LCTES, and regularly serves on NSF and DOE review panels. He is currently a visiting faculty member in the EECS department at the University of California, Berkeley.


Closing the loop with Medical Cyber-Physical Systems

Nov 2, 2012, 2.10-3pm, Rahul Mangharam, University of Pennsylvania, USA

[Slides]

Abstract: The design of bug-free and safe medical device software is challenging, especially in complex implantable devices that control and actuate organs whose response is not fully understood. Safety recalls of pacemakers and implantable cardioverter defibrillators between 1990 and 2000 affected over 600,000 devices. Of these, 200,000, or 41%, were due to firmware issues (i.e. software) that continue to increase in frequency. There is currently no formal methodology or open experimental platform to test and verify the correct operation of medical device software within the closed-loop context of the patient. In this talk I will describe our efforts to develop the foundations of modeling, synthesis and development of verified medical device software and systems from verified closed-loop models of the device and organs. The research spans both implantable medical devices such as cardiac pacemakers and physiological control systems such as drug infusion pumps which involve multiple networked medical systems. In both cases, the devices are physically connected to the body and exert direct control over the physiology and safety of the patient. With the goal to develop a tool-chain for certifiable software for medical devices, I will walk through (a) formal modeling of the heart and pacemaker in timed automata, (b) verification of the closed-loop system, (c) automatic model translation from UPPAAL to Stateflow for simulation-based testing, and (d) automatic code generation for platform-level testing of the heart and real pacemakers.
If time permits, I will describe our investigations in distributed wireless control networks and green scheduling for energy-efficient building automation.

Bio: Rahul Mangharam is the Stephen J Angello Chair and Assistant Professor in the Dept. of Electrical & Systems Engineering and Dept. of Computer & Information Science at the University of Pennsylvania. He directs the Real-Time and Embedded Systems Lab at Penn. His interests are in real-time scheduling algorithms for networked embedded systems with applications in energy-efficient buildings, automotive systems, medical devices and industrial control networks. His group has won several awards in IPSN 2012, RTAS 2012, the World Embedded Programming Competition 2010, the Honeywell Industrial Wireless Award 2011, the Google Zeitgeist Award 2011, the Intel Innovators Award 2012, the Intel Early Faculty Honor 2012, NAE US Frontiers 2012, Accenture Innovation Jockeys 2012, etc.
He received his Ph.D. in Electrical & Computer Engineering from Carnegie Mellon University where he also received his MS and BS. In 2002, he was a member of technical staff in the Ultra-Wide Band Wireless Group at Intel Labs. He was an international scholar in the Wireless Systems Group at IMEC, Belgium in 2003. He has worked on ASIC chip design at FORE Systems (1999) and Gigabit Ethernet at Apple Computer Inc. (2000). mLAB - http://mlab.seas.upenn.edu


Computing without Processors

Nov 13, 2012, 4.10-5pm, Satnam Singh, Google, USA

Abstract: The duopoly of computing has up until now been delimited by drawing a line in the sand that defines the instruction set architecture as the hard division between software and hardware. On one side of this contract Intel improved the design of processors and on the other side of this line Microsoft developed ever more sophisticated software. This cozy relationship is now over as the distinction between hardware and software is blurred due to relentless pressure for performance and reduction in latency and energy consumption. Increasingly we will be forced to compute with architectures and machines which do not resemble regular processors with a fixed memory hierarchy based on heuristic caching schemes. Other ways to bake all that sand will include the evolution of GPUs and FPGAs to form heterogeneous computing resources which are much better suited to meeting our computing needs than racks of multicore processors. This presentation will highlight some of the programming challenges we face when trying to develop for heterogeneous architectures and identify a few promising lines of attack.

Bio: Prof. Singh works in the Technical Infrastructure division of Google in Mountain View, California and focuses on the configuration management of Google's data-center services. Previously Prof. Singh worked on the design of heterogeneous systems at Microsoft Research in Cambridge UK and on parallel programming techniques at Microsoft's Developer Division in Redmond USA. He has also worked on re-configurable computing and formal verification at Xilinx in San Jose, California and as an academic at the University of Glasgow. He also currently holds a part-time position as the Chair of Reconfigurable Systems at the University of Birmingham.


T-CREST: Time-predictable Multi-Core Architecture for Embedded Systems

Nov 16, 2012, 2.10-3pm, Martin Schoeberl, Technical University of Denmark

[Slides]

Abstract: The T-CREST project is developing a time-predictable system that will simplify the safety argument with respect to maximum execution time while striving to increase performance with multicore processors. T-CREST looks at time-predictable solutions for processors, the memory hierarchy, the on-chip interconnect, and the compiler. T-CREST is a three-year project funded by the EC and has just passed its first year. In this talk I will give an overview of the T-CREST project and its individual sub-projects, and present some early results on the on-chip interconnect and the processor research.

Bio: Martin Schoeberl is an associate professor at the Technical University of Denmark, in the Department of Informatics and Mathematical Modelling. He completed his PhD at the Vienna University of Technology in 2005 and received the Habilitation in 2010. Martin Schoeberl's research focus is on time-predictable computer architectures and on Java for hard real-time systems. During his PhD studies he developed the time-predictable Java processor JOP, which is now in use in academia and in industrial projects. His research on time-predictable computer architectures is currently embedded in the EC-funded project T-CREST.


Synchronous Control and State Machines in Modelica

Nov 27, 2012, 4.10-5pm, Hilding Elmqvist, Dassault Systemes AB, Sweden

[Slides]

Abstract: The scope of Modelica has been extended from a language primarily intended for physical systems modeling to modeling of complete systems by allowing the modeling of control systems and by enabling automatic code generation for embedded systems. Much focus has been given to safe constructs and intuitive and well-defined semantics.
The presentation will describe the fundamental synchronous language primitives introduced for increased correctness of control systems implementation. The approach is based on associating clocks with the variable types. Special operators are needed when accessing variables of another clock. This enables clock inference and increases the correctness of the code, since many more checks can be done during translation. Furthermore, the sampling period of a clocked partition needs to be defined in only one place (either in absolute time or relative to other clocked partitions). The principles of partitioning a system model into different clocks (continuous, periodic, non-periodic, multi-rate) will be explained. The new language elements follow the synchronous approach. They are based on the clock calculus and inference system of Lucid Synchrone; the Modelica approach, however, also supports multi-rate periodic clocks based on rational arithmetic, as well as non-periodic and event-based clocks.
Parallel and hierarchical state machines will be introduced, including submodels within states. The supporting Modelica library will also be introduced.
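
The clocked-partition idea above can be pictured outside Modelica as well. The sketch below (plain Python, not Modelica syntax) runs a fast partition at period 1/10 s and a slow partition at period 3/10 s, keeps the periods as exact rationals, and holds the slow partition's last output whenever the fast partition reads it; all variable names and the placeholder computations are invented for illustration.

    from fractions import Fraction

    FAST = Fraction(1, 10)    # hypothetical fast partition period (s)
    SLOW = Fraction(3, 10)    # hypothetical slow partition period, a rational multiple of FAST

    def simulate(stop=Fraction(1)):
        t = Fraction(0)
        slow_out = 0.0                      # last value produced by the slow partition
        fast_state = 0.0
        while t < stop:
            if t % SLOW == 0:               # slow clock ticks
                slow_out = float(t)         # placeholder computation of the slow partition
            if t % FAST == 0:               # fast clock ticks
                fast_state += slow_out      # reads the *held* slow output (sub-sampling)
                print(f"t={float(t):.1f}s  slow_out={slow_out:.1f}  fast_state={fast_state:.1f}")
            t += FAST                       # advance by the base (fastest) period
        return fast_state

    simulate()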

Bio: Elmqvist's Ph.D. thesis from the Department of Automatic Control, Lund Institute of Technology contains the design of a novel object-oriented and equation based modelling language, Dymola, and algorithms for symbolic model manipulation.
In 1992, Elmqvist founded Dynasim AB, and in 1996 he took the initiative to organize the international effort to design the next-generation object-oriented language for physical modelling, Modelica. Elmqvist is the chief architect of the Multi-Engineering Modelling and Simulation software used in the Dymola Product Line and CATIA Systems DBM, and is responsible for technology within the board of the Modelica Association.


Sensor fusion in dynamical systems - applications and research challenges

Dec 11, 2012, 4.10-5pm, Thomas Schön, Linköping University, Sweden.

[Slides]

Abstract: Sensor fusion refers to the problem of computing state estimates using measurements from several different, often complementary, sensors. The strategy is explained and (perhaps more importantly) illustrated using four different industrial/research applications, very briefly introduced below. Guided partly by these applications we will highlight key directions for future research within the area of sensor fusion. Given that the number of available sensors is skyrocketing, this technology is likely to become even more important in the future. The four applications are: 1. Real-time pose estimation and autonomous landing of a helicopter (using inertial sensors and a camera). 2. Pose estimation of a helicopter using an already existing map (a processed version of an aerial photograph of the operational area), inertial sensors and a camera. 3. Vehicle motion and road surface estimation (using inertial sensors, a steering wheel sensor and an infrared camera). 4. Indoor pose estimation of a human body (using inertial sensors and ultra-wideband).

Bio: Thomas B. Schön is an Associate Professor with the Division of Automatic Control at Linköping University (Linköping, Sweden). He received the BSc degree in Business Administration and Economics in Jan. 2001, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001 and the PhD degree in Automatic Control in Feb. 2006, all from Linköping University. He has held visiting positions with the University of Cambridge (UK) and the University of Newcastle (Australia). He is a Senior Member of the IEEE. He received the best teacher award at the Institute of Technology, Linköping University in 2009. Schön's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods. He is active within the fields of machine learning, signal processing and automatic control. He pursues both basic research and applied research, where the latter is typically carried out in collaboration with industry. More information about his research can be found from his home page: users.isy.liu.se/rt/schon



Spring 2013 Seminars


Spring 2013 Seminars


Hybrid System Verification: Progress & Simulations

Jan 22, 2013, 4.10-5pm, Sayan Mitra, University of Illinois at Urbana-Champaign, USA.

Abstract: Design defects in embedded systems can be elusive and expensive. Verification research aims to provide algorithms that automate the process of finding design bugs and in special cases even prove correctness. In this talk, I will present two recent developments in this area from our research: (1) a framework for verification of progress properties and (2) an approach for proving properties from simulations or dynamic data. The first builds on a connection between termination analysis of programs and control theory. It provides algorithms for automatic stability verification for rectangular hybrid models. Augmented with Lyapunov functions these algorithms can handle nonlinear hybrid models as well. The second approach combines static analysis of the system model with dynamically generated data (e.g. simulations or logs) for bounded verification of nonlinear and distributed systems of medium dimensions.

Bio: Sayan Mitra is an Assistant Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research aims to develop mathematical, algorithmic and software tools for design and analysis of distributed and cyber-physical systems. Sayan graduated from MIT in 2007 and spent one year as a post-doctoral researcher at the Center for the Mathematics of Information at Caltech. His research group has received several awards, most recently the National Science Foundation's CAREER award in 2011, the AFOSR Young Investigator Research Program Award in 2012, the Best Paper Award at FORTE/FMOODS 2012, and the Samsung GRO Award in 2012.


Networking Infrastructure and Data Management for Cyber-Physical Systems

Feb 5, 2013, 4.10-5pm, Song Han, University of Texas at Austin, USA.

[Slides]

Abstract: A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators, and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem and interconnect these physically separated subsystems together to achieve secure, reliable and real-time communication is a big challenge.
In this talk, I will first present a TDMA-based low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication and adjustable real-time service support. I will describe the network management techniques for ensuring reliable routing and real-time services inside the subsystems and data management techniques for maintaining the quality of the sampled data from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a light-weight and scalable solution for interconnecting heterogeneous CPS subsystems together through a slim IP adaptation layer. This approach makes the underlying connectivity technologies transparent to the application developers and thus enables rapid application development and efficient migration among different CPS platforms.

Bio: Song Han is currently a Postdoctoral Research Fellow at the University of Texas at Austin. He holds the B.S. degree in computer science from Nanjing University, the M.Phil. degree in computer science from City University of Hong Kong, and the Ph.D. degree in computer science from the University of Texas at Austin. His research interests include Cyber-Physical Systems, Real-Time and Embedded Sensing and Control Systems, Wireless Networks and Mobile Computing.


Automated and semi-automated memory management in real-time

Feb 19, 2013, 4.10-5pm, Jan Vitek, Purdue University, USA.

[Slides]

Abstract: This talk looks back at techniques for rescuing real-time programmers from the tyranny of object pools and manual memory management. I will present a retrospective look at region allocation and real-time garbage collection for hard real-time systems. I will demonstrate the impact of scheduling techniques on predictability and present experimental results obtained on the Fiji Java virtual machine.

Bio: Vitek is Full Professor and Faculty Scholar in the Computer Science Department at Purdue University. He led the development of the Ovm hard real-time Java virtual machine. He is a founder of Fiji systems. His research interests span from programming language design and implementation to real-time and embedded systems. He is the chair of SIGPLAN.


DREAMS tutorial: The particle filter

Feb 20, 2013, 2.10-4pm, Thomas Schön, Linköping University, Sweden

[Slides]

Abstract: The particle filter provides a solution to the state inference problem in nonlinear dynamical systems. This problem is indeed interesting in its own right, but it also shows up as a sub-problem in many relevant areas, such as for example sensor fusion and nonlinear system identification. The aim of this tutorial is to provide you with sufficient knowledge about the particle filter to allow you to start implementing particle filters on your own.

We will start out by providing a brief introduction to probabilistic modeling of dynamical systems in order to be able to clearly define the nonlinear state inference problem under consideration. The next step is to briefly introduce two basic sampling methods, rejection sampling and importance sampling. The latter is then exploited to derive a first working particle filter. The particle filter can be interpreted as a particular member of a general class of algorithms referred to as sequential Monte Carlo (SMC). This relationship is explored in some detail in order to provide additional understanding.
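
To make the construction concrete, here is a minimal bootstrap particle filter in Python for a scalar nonlinear state-space model; the model and all constants are chosen only for illustration and are not part of the tutorial material itself.

    import numpy as np

    def bootstrap_pf(y, n_particles=500, q=1.0, r=1.0):
        """Bootstrap particle filter for x[t] = 0.5*x[t-1] + 25*x[t-1]/(1+x[t-1]**2) + v,
        y[t] = x[t]**2/20 + e, with v ~ N(0, q) and e ~ N(0, r)."""
        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, n_particles)            # initial particles
        estimates = []
        for yt in y:
            # propagate particles through the dynamics (proposal = prior)
            x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0.0, np.sqrt(q), n_particles)
            # weight each particle by the likelihood of the measurement
            w = np.exp(-0.5 * (yt - x**2 / 20)**2 / r)
            w /= w.sum()
            estimates.append(np.sum(w * x))              # weighted mean as point estimate
            # resample (multinomial) to counteract weight degeneracy
            x = x[rng.choice(n_particles, n_particles, p=w)]
        return np.array(estimates)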

The particle filtering theory has developed at an increasing rate over the last two decades and it is used more and more in solving various applied problems. During this tutorial I will focus on the method and, to some extent, on the underlying theory. Hence, I will not show any real-world examples; I will save them for my seminar on Thursday, where I will show how the particle filter has been instrumental in solving various nontrivial localization problems.

Bio: Thomas B. Schön is an Associate Professor with the Division of Automatic Control at Linköping University (Linköping, Sweden). He received the BSc degree in Business Administration and Economics in Jan. 2001, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001 and the PhD degree in Automatic Control in Feb. 2006, all from Linköping University. He has held visiting positions with the University of Cambridge (UK) and the University of Newcastle (Australia). He is a Senior Member of the IEEE. He received the best teacher award at the Institute of Technology, Linköping University in 2009. Schön's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods. He is active within the fields of machine learning, signal processing and automatic control. He pursues both basic research and applied research, where the latter is typically carried out in collaboration with industry.


Platform Design and Compositional Analysis of Real-Time Embedded Systems

Feb 26, 2013, 4.10-5pm, Linh Thi Xuan Phan, University of Pennsylvania, USA.

Abstract: Real-time embedded systems are becoming increasingly complex; a modern car now contains up to 100 microprocessors that run from 2000 to 3000 software functions, and the next generation of these systems is even more complex. Despite active research on real-time analysis over the past decades, ensuring timing guarantees of these systems in the presence of resource constraints remains a challenge.

In this talk, I will present two of our recent results in this area, which are inspired by techniques in systems and computer networks: 1) I will show how to use traffic shaping to improve the schedulability and the resource interfaces of real-time components with compositional scheduling; and 2) I will present a control-platform co-design method that uses buffer control mechanisms to optimize resources while guaranteeing control stability. If time permits, I will also discuss some ongoing projects, e.g., on multi-mode mixed-criticality real-time systems and cloud computing.

Bio: Linh Thi Xuan Phan is a Research Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania. She received her B.S. degree in 2003 and Ph.D. degree in 2009 in Computer Science from the National University of Singapore (NUS). Her research interests center on developing theoretical methods and tools for designing and analyzing real-time embedded systems, cyber-physical systems, and cloud computing systems. She was a recipient of the Graduate Research Excellence Award from NUS in 2009 for her dissertation.


Tomorrow is Yesterday (or nearly so): Historical Lessons that Foretell the Future of Mobile and (Connected) Embedded Systems

March 5, 2013, 4.10-5pm, Bob Iannucci, Carnegie Mellon University, Silicon Valley Campus, USA.

[Slides]

Abstract: The evolution of mobile computing strongly resembles the evolution of the three generations of computing that preceded it, but with a few crucial differences. Like the past generations, value shifts between hardware, software and services in fairly predictable ways. Unlike past generations, mobile computing is encumbered both by the complexity of distributed computing and by the hard limits imposed by physics and human physiology. Both mobile and connected embedded computing face issues related to power and scale.

This talk examines the evolution of mobile computing, probes some of the difficult problems at its heart, and identifies a handful of challenges that could drive new research. We will examine some of these in detail. We also examine connected embedded computing and its evolution toward a true platform.

Bio: Bob Iannucci is Director of the CyLab Mobility Research Center and is a Distinguished Service Professor at Carnegie Mellon University. He received his Ph.D. in Electrical Engineering and Computer Science from MIT in 1988. Most recently, he served as Chief Technology Officer of Nokia and Head of Nokia Research Center. Under his leadership, NRC made primary contributions to wireless networking standards; created and promulgated what is now the MIPI UniPro interface for high-speed, in-phone interconnectivity; created and commercialized Bluetooth Low Energy; and, together with UC Berkeley, created TrafficWorks (using mobile phones to crowd-source traffic patterns). He remains active as a hands-on systems builder. His most recent iPhone app for radio direction finding is in use in over 70 countries.


Pure Storage Flash Array: An Embedded System on Steroids

March 12, 2013, 4.10-5pm, Marco Sanvido, Pure Storage, USA.

Abstract: The storage industry is currently in the midst of a flash revolution. Today's smartphones, cameras, USB drives, and even laptop computers all use flash storage. However, the $30 billion a year enterprise storage market is still dominated by spinning disk. Flash has large advantages in speed and power consumption, but its disadvantages (price, limited overwrites, large erase block size) have prevented it from being a drop-in replacement for disk in a storage array. The question facing the industry is how to build a competitive hardware-software solution that turns flash into performant, reliable storage scaling to hundreds of TBs and beyond.

In this talk, we describe the design and technical challenges of the Pure FlashArray, an enterprise storage array built from the ground up around consumer flash storage. The array and its software, Purity, play to the advantages of flash while minimizing the downsides. Purity performs all writes to flash in multiples of the erase block size, and it keeps metadata in a key-value store that persists approximate answers, further reducing writes at the cost of extra (cheap) reads. Purity also reduces data stored on flash through a range of techniques, including compression, deduplication, and thin provisioning. The net result is a flash array that delivers a sustained read-write workload of over 100,000 4 KB I/O requests per second while maintaining uniform sub-millisecond latency. With many customers seeing 4x or greater data reduction, the Pure FlashArray ends up being cheaper than disk as well.
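
As a generic illustration of the log-structured idea mentioned above (writing to flash only in erase-block-sized units and deduplicating by content), here is a toy Python sketch. It is not Purity's design; every constant, name and simplification is made up for this listing.

    import hashlib

    ERASE_BLOCK = 8        # hypothetical erase block = 8 logical 4 KB writes
    flash_segments = []    # "flash": append-only list of erase-block-sized segments
    dedup_index = {}       # content hash -> (segment, offset) of already-stored data
    write_buffer = []      # incoming writes staged until a full erase block is ready

    def write(data: bytes):
        h = hashlib.sha256(data).hexdigest()
        if h in dedup_index:                    # duplicate content: record a reference only
            return dedup_index[h]
        write_buffer.append((h, data))
        if len(write_buffer) == ERASE_BLOCK:    # flush only in erase-block multiples
            seg = len(flash_segments)
            flash_segments.append([d for _, d in write_buffer])
            for off, (hh, _) in enumerate(write_buffer):
                dedup_index[hh] = (seg, off)
            write_buffer.clear()
            return (seg, ERASE_BLOCK - 1)
        return None                             # still buffered, not yet on "flash"

    # usage: repeated content is absorbed by the index instead of being rewritten
    for i in range(20):
        write(bytes([i % 5]) * 4096)
    print("segments written:", len(flash_segments), "unique blocks:", len(dedup_index))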

Bio: Dr. Marco Sanvido holds a Dipl.-Ing. degree (1996) and a Dr.techn. degree (2002) in Computer Science from the Swiss Federal Institute of Technology in Zürich, Switzerland (ETHZ). He was a co-founder of weControl, an ETHZ spin-off, where he developed low-power, real-time embedded systems for autonomous flying vehicles. He was a postdoctoral researcher in Computer Science at the University of California at Berkeley from 2002 to 2004, and thereafter worked on virtualization at VMware. In 2005 he became a researcher at Hitachi Global Storage Technologies, where he worked on hard disk drive and solid state drive architectures. In 2010 he joined Pure Storage as a Principal Software Engineer.


Compositionality in system design: interfaces everywhere!

March 19, 2013, 4.10-5pm, Stavros Tripakis, UC Berkeley, USA.

[Slides]

Abstract: Compositional methods, which allow designers to assemble smaller components into larger systems both efficiently and correctly, are not simply a desirable feature in system design: they are a must for building large and complex systems. A key ingredient for compositionality is that of an "interface". An interface abstracts a component, exposing relevant information while hiding internal details. Motivated by cyber-physical system applications, in this talk I will first present methods for automatic bottom-up synthesis of interfaces from hierarchical models, which is needed for modular code generation from such models. I will then talk about interface theories, which can be seen as behavioral type theories, and are essential for incremental design (when can a component be replaced by another one without compromising the properties of the entire system?).

Bio: Stavros Tripakis obtained a PhD degree in Computer Science at the Verimag Laboratory in Grenoble, France, in 1998. He was a postdoc at UC Berkeley from 1999 to 2001, a CNRS Research Scientist at Verimag from 2001 to 2006, and a Research Scientist at Cadence Research Labs in Berkeley from 2006 to 2008. He has been an Associate Researcher at UC Berkeley since 2009, working on model-based and component-based design, verification, testing and synthesis. Dr. Tripakis was co-Chair of the 10th ACM & IEEE Conference on Embedded Software (EMSOFT 2010), Secretary/Treasurer of ACM SIGBED in 2009-2011, and is the current Vice-Chair of ACM SIGBED. His h-index, according to Google Scholar, is 35.


Components, Interfaces and Compositions: from SoCs to SOCs

March 26, 2013, 4.10-5pm, Partha Roop, University of Auckland, New Zealand.

[Slides]

Abstract: The widespread advancement of engineering may be attributed to the use of production lines that achieve "compositions of many components" to create complex systems. Examples can be seen in mass production, from computer chips to cars, where "reuse of components" is fundamental. Reuse is possible due to the standardization of the "interface" of the different components. However, reuse is impossible when manufacturers create "incompatible" interfaces.

In computer science, components and interfaces have played a fundamental role. Like conventional engineering domains, in computer science also, we see parallels: both success stories and difficulties in reuse. Successes are almost always related to standardized interfaces and associated algorithms for reuse and composition. In this talk, I will focus on two domains where standardized interfaces may be used successfully: System-on-a-Chip (SoC) and Service Oriented Computing (SOC). While these two domains are diverse and appear to be quite separated, I will present a unifying solution that fits many problems across both domains. In presenting this solution, I will elaborate on techniques for component reuse and component composition that are based on formal methods. In particular, I will rely on simulation relations, discrete controller synthesis, and model/module checking to solve classical issues in both these domains. I will then present some interesting reuse and composition case studies.

Bio: Partha is a Senior Lecturer and is currently the Program Director of the Computer Systems Engineering program at the University of Auckland, New Zealand. His research interests are in the area of Embedded Systems, Real-Time Systems and static timing analysis.

He is particularly interested in static analysis techniques for validation, safety and certification. Partha is an Associate Editor of Elsevier Journal on Embedded Hardware design (MICPRO) and Springer/EURASIP Journal on Embedded Systems. In 2009, he received the Alexander von Humboldt fellowship for experienced researchers from the Humboldt foundation, Germany. Partha has been a visiting professor at Iowa State University, INRIA (France), AIST (Japan) and Kiel University (Germany).


Models and Architectures for Heterogeneous System Design

April 16, 2013, 4.10-5pm, Andreas Gerstlauer, University of Texas at Austin, USA.

[Slides]

Abstract: With traditional boundaries between embedded and general-purpose computing blurring, both domains increasingly deal with complex, heterogeneous, software-intensive yet tightly constrained systems. This creates new challenges for modeling, synthesis and component design at the intersection of applications and architectures. For modeling of inherently dynamic behavior, simulations continue to play an important role. We first present an abstract, host-compiled modeling approach that provides a fast and accurate alternative to traditional solutions. By navigating tradeoffs in coarse-grain executions, full-system simulations in close to real time with more than 90% accuracy become possible. Models then provide the basis for automated design space exploration as part of a system compiler that can automatically synthesize parallel application models onto optimized hardware/software platforms. Finally, we will discuss results on co-design of novel domain-specific components for future heterogeneous platforms. We specifically explore fundamental tradeoffs between flexibility and specialization, and between error acceptance and energy in the design of high-performance yet low-power processors for linear algebra and signal processing applications.

Bio: Andreas Gerstlauer is an Assistant Professor in Electrical and Computer Engineering at The University of Texas at Austin. He received his Ph.D. in Information and Computer Science from the University of California, Irvine (UCI) in 2004. Prior to joining UT Austin in 2008, he was an Assistant Researcher in the Center for Embedded Computer Systems (CECS) at UC Irvine, leading a research group to develop electronic system-level (ESL) design tools. Dr. Gerstlauer is co-author on 3 books and more than 60 conference and journal publications, and his paper on OS modeling was reprinted as one of the most influential contributions in 10 years at DATE. His research interests include system-level design automation, system modeling, design languages and methodologies, and embedded hardware and software synthesis.


Pushback Rate Control: The Design and Field-Testing of an Airport Congestion Control Algorithm

April 23, 2013, 4.10-5pm, Hamsa Balakrishnan, Massachusetts Institute of Technology (MIT), USA.

Abstract: Increased congestion on the airport surface has increased taxi times, fuel consumption, and emissions. In this talk, I will describe how operational data can be used to develop and validate queuing models of airport operations. These models yield new insights on the effect of different factors such as weather, runway usage, and aircraft fleet mix on airport performance. They also help us predict the behavior of an airport under different operating conditions.

I will then show that these queuing models can be used to design Pushback Rate Control, a new airport congestion control technique to regulate the rate at which flights push back from their gates. The algorithm computes optimal pushback rates using approximate dynamic programming but, equally important, it is a method that can be implemented in practice because it works in concert with human air traffic controllers. To evaluate the effectiveness and practicality of the algorithm, we conducted field tests with our implementation at Boston's Logan Airport. I will describe the results of these field tests and what we learned in the process.
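
A much-simplified flavor of the idea (not the actual MIT algorithm) can be written as a discrete-time queue simulation in which pushbacks are withheld whenever the surface queue exceeds a target; the rates and thresholds below are invented solely for illustration.

    import random

    RUNWAY_RATE = 0.8      # hypothetical departures the runway can serve per minute
    TARGET_QUEUE = 6       # hypothetical target number of aircraft taxiing/queued

    def simulate(minutes=180, demand=1.0):
        queue, delays = 0, []
        for _ in range(minutes):
            # pushback rate control: hold aircraft at the gate when the surface is congested
            allowed = 1 if queue < TARGET_QUEUE else 0
            wanting = 1 if random.random() < demand else 0
            queue += min(wanting, allowed)
            served = 1 if (queue > 0 and random.random() < RUNWAY_RATE) else 0
            queue -= served
            delays.append(queue)               # queue length as a proxy for taxi-out delay
        return sum(delays) / len(delays)

    print("average surface queue with control:", round(simulate(), 2))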

Bio: Hamsa Balakrishnan is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT). She received a B.Tech in Aerospace Engineering from the Indian Institute of Technology, Madras and a PhD in Aeronautics and Astronautics from Stanford University. Her research interests address various aspects of air transportation systems, including algorithms for air traffic scheduling and routing, integrating weather forecasts into air traffic management and minimizing aviation-related emissions; air traffic surveillance algorithms; and mechanisms for the allocation of airport and airspace resources. She was a recipient of the NSF CAREER Award in 2008, the Kevin Corker Award for Best Paper of ATM-2011, the AIAA Lawrence Sperry Award in 2012, and the inaugural CNA Award for Operational Analysis in 2012.


Going beyond the FPGA with Spacetime

April 30, 2013, 4.10-5pm, Steve Teig, Tabula, USA.

Abstract: The idea of dynamically reconfiguring programmable devices fascinated Turing in the 1930s. In the early '90s, DeHon pioneered dynamic reconfiguration within FPGAs, but neither his nor numerous subsequent efforts, both academic and industrial, resulted in a useful and usable product. Over the last several years, we have significantly advanced the hardware, architecture, and software for rapidly reconfiguring, programmable logic: going beyond the FPGA using a body of technology we call "Spacetime". Spacetime represents two spatial dimensions and one time dimension as a unified 3D framework: a powerful simplification that has enabled us to deliver in production a new category of programmable devices ("3PLDs") that are far denser, faster, and more capable than FPGAs yet still accompanied by software that automatically maps traditional RTL onto these exotic fabrics. In developing Spacetime, we encountered and resolved many complex, technical challenges that any dense and high-performance reconfigurable device must face, many of which seem never even to have been identified, much less addressed, by any prior effort. In this talk, I will identify some key limitations of FPGAs, introduce Spacetime as a means of addressing them, enumerate some of the many challenges we faced, and present solutions to a couple of them.

Bio: Steve Teig is the President and Chief Technology Officer of Tabula and the inventor of Tabula's Spacetime 3-Dimensional Programmable Logic Architecture. In 2012, Tabula was recognized with an Edison Award, an American Business Gold Award, inclusion in MIT's "TR50" list of the 50 most innovative companies worldwide, and as #3 in the Wall Street Journal's "Next Big Thing" list of the most promising venture-backed startup companies. Prior to founding Tabula, Steve was CTO of Cadence Design Systems, having joined Cadence through its acquisition of Simplex Solutions, where he was also CTO. Prior to Simplex, he was CTO of two successful biotechnology companies: CombiChem and BioCAD. Earlier in his career, he developed foundational logic simulation and place-and-route technologies that continue to have far-reaching influence. Steve received a B.S.E. degree in Electrical Engineering and Computer Science from Princeton University and holds over 250 patents. In 2011, he was awarded the World Technology Award for IT hardware innovation.


Reachability of Hybrid Systems in Space-Time

May 7, 2013, 4.10-5pm, Goran Frehse, Verimag Labs, University of Grenoble, France.

[Slides]

Abstract: Computing the reachable states of a hybrid system is hard, in part because efficient algorithms are available mainly for convex sets, while the states reachable by time elapse, so-called flowpipes, are generally nonconvex. When the dynamics are piecewise affine, the nonconvexity takes a particular form: when drawn along a time axis, the flowpipe is convex at any point in time. We exploit this property to lift efficient approximation techniques from convex sets to flowpipes. The approximation consists of a set of piecewise linear scalar functions over continuous time, which define at each time point a polyhedral inner- and outer approximation. Reachability operations that are difficult to carry out on nonconvex sets correspond to simple operations on these scalar functions. The resulting reachability algorithm is precise, scalable, and provides good error measurements. We present an implementation in the SpaceEx verification platform and showcase the performance on several examples.
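
The property exploited above, that the flowpipe of a (piecewise) affine system is convex at each fixed time even though its union over time is not, is easy to see numerically. The sketch below (hypothetical dynamics, not the SpaceEx algorithm, and without the interval bloating a real outer approximation needs) maps the vertices of a polytopic initial set through the matrix exponential at sampled times; each time slice is exactly the polytope spanned by the mapped vertices.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, -0.2]])                 # hypothetical linear dynamics x' = A x
    X0 = np.array([[0.9, -0.1], [1.1, -0.1],
                   [1.1, 0.1], [0.9, 0.1]])      # vertices of a box of initial states

    for t in np.linspace(0.0, 2.0, 5):
        slice_t = (expm(A * t) @ X0.T).T         # reachable set at time t = image of X0
        print(f"t={t:.2f}  vertices of the (convex) time slice:")
        print(np.round(slice_t, 3))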

Bio: Goran Frehse is an assistant professor of computer science at Verimag Labs, University of Grenoble, France. He holds a Diploma in Electrical Engineering from Karlsruhe Institute of Technology, Germany, and a PhD in Computer Science from Radboud University Nijmegen, The Netherlands. He was a PostDoc at Carnegie Mellon University from 2003 to 2005. Goran Frehse developed PHAVer, a tool for verifying Linear Hybrid Automata, and is leading the ongoing development of SpaceEx, a verification platform for hybrid systems.


Mining Temporal Requirements of an Industrial-Scale Control System

May 14, 2013, 4.10-5pm, Alexandre Donzé, EECS, UC Berkeley.

[Slides]

Abstract: Industrial-scale control systems are increasingly developed in the model-based design paradigm. This typically involves capturing a plant model that describes the dynamical characteristics of the physical processes within the system, and a controller model, which is a block-diagram-based representation of the software used to regulate the plant behavior. In practice, plant models and controller models are highly complex, as they can contain nonlinear hybrid dynamics, look-up tables storing pre-computed values, several levels of design hierarchy, design blocks that operate at different frequencies, and so on. Moreover, the biggest challenge is that system requirements are often imprecise, non-modular, evolving, or even simply unknown. As a result, formal techniques have been unable to digest the scale and format of industrial-scale control systems.

On the other hand, the Simulink modeling language -- a popular language for describing such models -- is widely used as a high-fidelity simulation tool in the industry, and is routinely used by control designers to experimentally validate their controller designs. This raises the question: "What can we accomplish if all we have is a very complex Simulink model of a control system?"

In this talk, we give an example of a simulation-guided formal technique that can help characterize temporal properties of the system, or guide the search for design behaviors that do not conform to "good behavior". Specifically, we present a way to algorithmically mine temporal assertions from a Simulink model. The input to our algorithm is a requirement template expressed in Parametric Signal Temporal Logic -- a formalism to express temporal formulas in which concrete signal or time values are replaced by parameters. Our algorithm is an instance of counterexample-guided inductive synthesis: an intermediate candidate requirement is synthesized from simulation traces of the system, which is refined using counterexamples to the candidate obtained with the help of a falsification tool. The algorithm terminates when no counterexample is found. Mining has many usage scenarios: mined requirements can be used to validate future modifications of the model, they can be used to enhance understanding of legacy models, and can also guide the process of bug-finding through simulations. (Joint work with X. Jin, J. Deshmukh and S. Seshia.)
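
To convey the counterexample-guided loop in miniature, here is a toy Python version for a single template "globally, x(t) <= pi" over traces of a made-up system; the "falsifier" is just random simulation, and none of this reflects the actual tool or models discussed in the talk.

    import random

    def simulate(seed):
        """Hypothetical black-box model: returns one signal trace."""
        random.seed(seed)
        return [random.gauss(0.0, 1.0) for _ in range(100)]

    def synthesize(traces):
        """Tightest pi such that 'globally x <= pi' holds on all traces seen so far."""
        return max(max(tr) for tr in traces)

    def falsify(pi, budget=200):
        """Search (here: random sampling) for a trace violating 'globally x <= pi'."""
        for seed in range(1000, 1000 + budget):
            tr = simulate(seed)
            if max(tr) > pi:
                return tr
        return None

    traces = [simulate(s) for s in range(5)]        # initial simulation traces
    while True:
        pi = synthesize(traces)                     # candidate requirement parameter
        cex = falsify(pi)
        if cex is None:                             # no counterexample found: done
            break
        traces.append(cex)                          # refine with the counterexample
    print("mined requirement: globally x <=", round(pi, 3))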

Bio: Alexandre Donzé is a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at UC Berkeley, working under the supervision of Sanjit Seshia. He holds a PhD in Mathematics and Computer Science from the University Joseph Fourier in Grenoble. From 2008 to 2009, he was also a postdoc at Carnegie Mellon University, working with Edmund Clarke and Bruce Krogh. The main goal of his research is to develop mathematical and computational tools for the analysis and design of dynamical systems arising from different domains, in particular embedded systems (or software interacting with a physical environment), analog and mixed-signal circuits, and biological systems. He developed Breach, a toolbox that is today getting the attention of large groups in the automotive industry such as Toyota and Bosch, and (modestly) contributed to SpaceEx, the current leading model-checking tool for hybrid systems.


Logic of Hybrid Games

May 20, 2013, 4.10-5pm, Andre Platzer, Carnegie Mellon University, USA.

Abstract: Hybrid systems model cyber-physical systems as dynamical systems with interacting discrete transitions and continuous evolutions along differential equations. They arise frequently in many application domains, including aviation, automotive, railway, and robotics. We study hybrid games, i.e. games on hybrid systems combining discrete and continuous dynamics. Unlike hybrid systems, hybrid games allow choices in the system dynamics to be resolved adversarially by different players with different objectives.

This talk describes how logic and formal verification can be lifted to hybrid games. The talk describes a logic for hybrid games called differential game logic dGL. The logic dGL can be used to study the existence of winning strategies for hybrid games. We present a simple sound and complete axiomatization of dGL relative to the fixpoint logic of differential equations. We prove hybrid games to be determined and their winning regions to require higher closure ordinals, and we identify separating axioms, i.e. axioms that distinguish hybrid games from hybrid systems.
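
For readers unfamiliar with the notation, the following is a brief sketch of the dGL constructs, written from memory of the published work and possibly simplified: hybrid games are built from assignments, tests, differential equations, choice, sequencing, repetition, and a duality operator that swaps the players, and the modality ⟨α⟩P asserts that Angel has a winning strategy in game α to reach a state satisfying P.

    \alpha,\beta ::= x := \theta \mid ?\psi \mid x' = \theta \,\&\, \psi
                 \mid \alpha \cup \beta \mid \alpha;\beta \mid \alpha^{*} \mid \alpha^{d}

    % duality: Angel's winning condition in the dual game negates her role in the original
    \langle \alpha^{d} \rangle P \;\leftrightarrow\; \neg \langle \alpha \rangle \neg P

    % determinacy: from every state, exactly one of the two players has a winning strategy
    \langle \alpha \rangle P \;\lor\; [\alpha] \neg P
    \qquad\text{and}\qquad
    [\alpha] P \;\leftrightarrow\; \neg \langle \alpha \rangle \neg P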

Bio: Andre Platzer is an Assistant Professor in the Computer Science Department at Carnegie Mellon University. He is working on logic and verification for cyber-physical systems and led the development of the KeYmaera prover for hybrid systems. Andre Platzer has a Ph.D. from the University of Oldenburg, Germany, and received an ACM Doctoral Dissertation Honorable Mention and NSF CAREER award, and was named one of the Brilliant 10 Young Scientists by the Popular Science magazine and one of the AI's 10 to Watch by the IEEE Intelligent Systems Magazine.


Co-design of Control Algorithm and Embedded Platform for Building HVAC Systems

May 21, 2013, 4.10-5pm, Mehdi Maasoumy, EECS, UC Berkeley.

[Slides]

Abstract: The design of heating, ventilation and air conditioning (HVAC) systems is crucial for reducing energy consumption in buildings. As complex cyber-physical systems, HVAC systems involve three closely related subsystems: the control algorithm, the physical building and environment, and the embedded implementation platform. In the traditional top-down approach, the control algorithm and the embedded platform are in general designed separately, leading to sub-optimal systems. We propose a co-design approach that analyzes the interaction between the control algorithm and the embedded platform through a set of interface variables. We present six control algorithms that take into account the sensing error, and model the relation of control performance and cost versus sensing error. We also capture the relation of embedded platform cost versus sensing error by analyzing data collected from a testbed. Based on these models, we explore the co-design of the control algorithm and the temperature sensing subsystem of the embedded platform to optimize energy cost and monetary cost while satisfying the constraints on user comfort level.
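
One of the interface variables discussed above is sensing error, and its effect on control cost can be illustrated with a deliberately crude first-order thermal model and an on/off controller; every coefficient below is invented and unrelated to the talk's six algorithms.

    import random

    def energy_cost(sensor_sigma, hours=24):
        """Crude room model T' = -a*(T - T_out) + b*u, controlled on noisy readings."""
        a, b, T_out, T_set = 0.1, 2.0, 10.0, 21.0
        T, energy, discomfort = 21.0, 0.0, 0.0
        for _ in range(hours * 60):                  # 1-minute steps
            measured = T + random.gauss(0.0, sensor_sigma)
            u = 1.0 if measured < T_set else 0.0     # bang-bang heating on the noisy measurement
            T += (-a * (T - T_out) + b * u) / 60.0
            energy += u
            discomfort += max(0.0, T_set - T)
        return energy, discomfort

    for sigma in (0.0, 0.5, 1.0, 2.0):               # cheaper sensors = larger sensing error
        e, d = energy_cost(sigma)
        print(f"sigma={sigma}: energy={e:.0f}, discomfort={d:.0f}")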

Bio: Mehdi Maasoumy is a PhD candidate in Mechanical Engineering at the University of California, Berkeley. His research interests include modeling and optimal control of linear, nonlinear, and hybrid systems with applications in energy-efficient buildings, the smart grid, and aircraft electric power systems. He has an M.Sc. degree from the University of California, Berkeley and a B.Sc. degree from Sharif University of Technology, both in Mechanical Engineering. He is the recipient of the Best Paper Award at the 4th ACM/IEEE International Conference on Cyber-Physical Systems and the Outstanding Preliminary Examination Award from UC Berkeley. More information on his work is available at: http://eecs.berkeley.edu/~maasoumy/


Schedulability, Deadlock Freedom, and Performance Analysis of Timed Actors

June 4, 2013, 4.10-5pm, Marjan Sirjani, Reykjavik University, Iceland.

[Slides]

Abstract: Asynchronous event-based systems can be modeled using Timed Rebeca with little effort. Network and computational delays, periodic events, and required deadlines can be expressed in the model. The language is actor-based and the syntax is Java-like. Model checking and simulation tools are built on the formal semantics of the language, given as SOS rules. For deadlock-freedom and schedulability analysis, a so-called floating-time transition system is introduced, which reduces the state space by exploiting the isolation of method execution in the model. For performance analysis, a simulation tool is developed which uses McErlang as the backend. Simulation traces are saved in an SQL database for further analysis. Case studies are used to show the applicability of the language and the tools.
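
The kind of property being checked can be pictured with a tiny discrete-event sketch in Python (not Timed Rebeca syntax): messages carry a network delay and a deadline, method execution takes time, and a deadline miss is reported. Every number here is arbitrary.

    import heapq

    COMPUTE_MS, NET_DELAY_MS, PERIOD_MS, DEADLINE_MS = 4, 3, 10, 8

    def run(until_ms=100):
        # event queue of (arrival_time, deadline, message) from a periodic sender
        events = [(t + NET_DELAY_MS, t + DEADLINE_MS, "tick")
                  for t in range(0, until_ms, PERIOD_MS)]
        heapq.heapify(events)
        busy_until = 0
        while events:
            arrival, deadline, msg = heapq.heappop(events)
            start = max(arrival, busy_until)             # the actor handles one message at a time
            finish = start + COMPUTE_MS
            busy_until = finish
            status = "DEADLINE MISS" if finish > deadline else "ok"
            print(f"{msg}: arrived {arrival}, finished {finish}, deadline {deadline} -> {status}")

    run()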

Bio: Marjan Sirjani is an Associate Professor at Reykjavik University. Marjan and her research group are pioneers in building model checking tools, compositional verification theories, and state-space reduction techniques for actor-based models. She has been working on analyzing actors since 2001 using the modeling language Rebeca (www.rebeca-lang.org). She has also worked on coordination languages and is the co-author of the paper proposing Constraint Automata as the first compositional formal semantics for the coordination language Reo. Marjan has been a PC member and PC chair of several international conferences including Coordination, FM, FMICS, ICFEM and FSEN, and a guest editor for special issues of the journals Science of Computer Programming and Fundamenta Informaticae. Before joining academia as a full-time faculty member, she was the managing director of Behin System Company for more than ten years, developing software and providing system services. Marjan served as the head of the Software Engineering Department of the School of Electrical and Computer Engineering at the University of Tehran prior to joining the School of Computer Science at Reykjavik University in 2008.


Past Seminars


Hybrid System Verification: Progress & Simulations

Jan 22, 2013, 4.10-5pm, Sayan Mitra, University of Illinois at Urbana-Champaign, USA.

Abstract: Design defects in embedded systems can be elusive and expensive. Verification research aims to provide algorithms that automate the process of finding design bugs and in special cases even prove correctness. In this talk, I will present two recent developments in this area from our research: (1) a framework for verification of progress properties and (2) an approach for proving properties from simulations or dynamic data. The first builds on a connection between termination analysis of programs and control theory. It provides algorithms for automatic stability verification for rectangular hybrid models. Augmented with Lyapunov functions these algorithms can handle nonlinear hybrid models as well. The second approach combines static analysis of the system model with dynamically generated data (e.g. simulations or logs) for bounded verification of nonlinear and distributed systems of medium dimensions.

Bio: Sayan Mitra is an Assistant Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research aims to develop mathematical, algorithmic and software tools for design and analysis of distributed and cyber-physical systems. Sayan graduated from MIT in 2007 and spent one year as a post-doctoral researcher at the Center for Mathematics of Information of CalTech. His research group has received several awards; most recently, the National Science Foundation's CAREER award in 2011, AFOSR Young Investigator Research Program Award in 2012, Best paper award at FORTE/FMOODS 2012, Samsung GRO Award 2012.


Networking Infrastructure and Data Management for Cyber-Physical Systems

Feb 5, 2013, 4.10-5pm, Song Han, University of Texas at Austin, USA.

[Slides]

Abstract: A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators, and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem and interconnect these physically separated subsystems together to achieve secure, reliable and real-time communication is a big challenge.
In this talk, I will first present a TDMA-based low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication and adjustable real-time service support. I will describe the network management techniques for ensuring the reliable routing and real-time services inside the subsystems and data management techniques for maintaining the quality of the sampled data from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a light-weight and scalable solution for interconnecting heterogeneous CPS subsystems together through a slim IP adaptation layer. This approach makes the the underlying connectivity technologies transparent to the application developers and thus enables rapid application development and efficient migration among different CPS platforms.

Bio: Song Han is currently a Postdoctoral Research Fellow in the University of Texas at Austin. He holds the B.S. degree in computer science from Nanjing University, the M.Phil. degree in computer science from City University of Hong Kong, and the Ph.D. degree in computer science from the University of Texas at Austin. His research interests include Cyber-Physical Systems, Real-Time and Embedded Sensing and Control Systems, Wireless Networks and Mobile Computing.


Automated and semi-automated memory management in real-time

Feb 19, 2013, 4.10-5pm, Jan Vitek, Purdue University, USA.

[Slides]

Abstract: This talk looks back at techniques for rescuing real-time programmers from the tyranny of object pools and manual memory management. I will present a retrospective look at region allocation and real-time garbage collection for hard real-time systems. I will demonstrate the impact of scheduling techniques on predictability and present experimental results obtained on the Fiji Java virtual machine.

Bio: Vitek is Full Professor and Faculty Scholar in the Computer Science Department at Purdue University. He led the development of the Ovm hard real-time Java virtual machine. He is a founder of Fiji systems. His research interests span from programming language design and implementation to real-time and embedded systems. He is the chair of SIGPLAN.


DREAMS tutorial: The particle filter

Feb 20, 2013, 2.10-4pm, Thomas Schön, Linköping University, Sweden

[Slides]

Abstract: The particle filter provides a solution to the state inference problem in nonlinear dynamical systems. This problem is indeed interesting in its own right, but it also shows up as a sub-problem in many relevant areas, such as for example sensor fusion and nonlinear system identification. The aim of this tutorial is to provide you with sufficient knowledge about the particle filter to allow you to start implementing particle filters on your own.

We will start out by providing a brief introduction to probabilistic modeling of dynamical systems in order to be able to clearly define the nonlinear state inference problem under consideration. The next step is to briefly introduce two basic sampling methods, rejection sampling and importance sampling. The latter is then exploited to derive a first working particle filter. The particle filter can be interpreted as a particular member of a general class of algorithms referred to as sequential Monte Carlo (SMC). This relationship is explored in some detail in order to provide additional understanding.

The particle filtering theory has developed at an increasing rate over the last two decades and it is used more and more in solving various applied problems. During this tutorial I focus on the method and to some extent on the underlying theory. Hence, I will not show any real world examples, I save them for my seminar on Thursday, where I will show how the particle filter has been instrumental in solving various nontrivial localization problems.

Bio: Thomas B. Schön is an Associate Professor with the Division of Automatic Control at Linköping University (Linköping, Sweden). He received the BSc degree in Business Administration and Economics in Jan. 2001, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001 and the PhD degree in Automatic Control in Feb. 2006, all from Linköping University. He has held visiting positions with the University of Cambridge (UK) and the University of Newcastle (Australia). He is a Senior member of the IEEE. He received the best teacher award at the Institute of Technology, Linköping University in 2009. Schön's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods. He is active within the fields of machine learning, signal processing and automatic control. He pursue both basic research and applied research, where the latter is typically carried out in collaboration with industry.


Platform Design and Compositional Analysis of Real-Time Embedded Systems

Feb 26, 2013, 4.10-5pm, Linh Thi Xuan Phan, University of Pennsylvania, USA.

Abstract: Real-time embedded systems are becoming increasingly complex; a modern car now contains up to 100 microprocessors that run from 2000 to 3000 software functions, and the next generation of these systems is even more complex. Despite active research on real-time analysis over the past decades, ensuring timing guarantees of these systems in the presence of resource constraints remains a challenge.

In this talk, I will present two of our recent results in this area, which are inspired by techniques in systems and computer networks: 1) I will show how to use traffic shaping to improve the schedulability and the resource interfaces of real-time components with compositional scheduling; and 2) I will present a control-platform co-design method that uses buffer control mechanisms to optimize resources while guaranteeing control stability. If time permits, I will also discuss some ongoing projects, e.g., on multi-mode mixed-criticality real-time systems and cloud computing.

Bio: Linh Thi Xuan Phan is a Research Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania. She received her B.S. degree in 2003 and Ph.D. degree in 2009 in Computer Science from the National University of Singapore (NUS). Her research interests center on developing theoretical methods and tools for designing and analyzing real-time embedded systems, cyber-physical systems, and cloud computing systems. She was a recipient of the Graduate Research Excellence Award from NUS in 2009 for her dissertation.


Tomorrow is Yesterday (or nearly so): Historical Lessons that Foretell the Future of Mobile and (Connected) Embedded Systems

March 5, 2013, 4.10-5pm, Bob Iannucci, Carnegie Mellon University, Silicon Valley Campus, USA.

[Slides]

Abstract: The evolution of mobile computing strongly resembles the evolution of the three generations of computing that preceded it, but with a few crucial differences. Like the past generations, value shifts between hardware, software and services in fairly predictable ways. Unlike past generations, mobile computing is encumbered both by the complexity of distributed computing and by the hard limits imposed by physics and human physiology. Both mobile and connected embedded computing face issues related to power and scale.

This talk examines the evolution of mobile computing, probes some of the difficult problems at its heart, and identifies a handful of challenges that could drive new research. We will examine some of these in detail. We also examine connected embedded computing and its evolution toward a true platform.

Bio: Bob Iannucci is Director of the CyLab Mobility Research Center and is a Distinguished Service Professor at Carnegie Mellon University. He received his Ph.D. in Electrical Engineering and Computer Science from MIT in 1988. Most recently, he served as Chief Technology Officer of Nokia and Head of Nokia Research Center. Under his leadership, NRC made primary contributions to wireless networking standards; created and promulgated what is now the MIPI UniPro interface for high-speed, in-phone interconnectivity; created and commercialized Bluetooth Low Energy; and, together with UC Berkeley, created TrafficWorks (using mobile phones to crowd-source traffic patterns). He remains active as a hands-on systems builder. His most recent iPhone app for radio direction finding is in use in over 70 countries.


Pure Storage Flash Array: An Embedded System on Steroids

March 12, 2013, 4.10-5pm, Marco Sanvido, Pure Storage, USA.

Abstract: The storage industry is currently in the midst of a flash revolution. Today's smartphones, cameras, USB drives, and even laptop computers all use flash storage. However, the $30 billion a year enterprise storage market is still dominated by spinning disk. Flash has large advantages in speed and power consumption, but its disadvantages (price, limited overwrites, large erase block size) have prevented it from being a drop-in replacement for disk in a storage array. The question facing the industry is how to build a competitive hardware-software solution that turns flash into performant, reliable storage scaling to hundreds of TBs and beyond.

In this talk, we describe the design and technical challanges of the Pure FlashArray, an enterprise storage array built from the ground up around consumer flash storage. The array and its software, Purity, play to the advantages of flash while minimizing the downsides. Purity performs all writes to flash in multiples of the erase block size, and it keeps metadata in a key-value store that persists approximate answers, further reducing writes at the cost of extra (cheap) reads. Purity also reduces data stored on flash through a range of techniques, including compression, deduplication, and thin provisioning. The net result is a flash array that delivers a sustained read-write workload of over 100,000 4kb I/O requests per second while maintaining uniform sub-millisecond latency. With many customers seeing 4x or greater data reduction, the Pure FlashArray ends up being cheaper than disk as well.

Bio: Dr Marco Sanvido holds a Dipl.-Ing. degree (1996) and a Dr.techn. degree (2002) in Computer Science from the Swiss Federal Institute of Technology in Zürich, Switzerland (ETHZ). He was a co-founder of weControl, an ETHZ spin-off, where he developed a low-power and real-time embedded systems for autonomous flying vehicles. He was a postdoctoral researcher in Computer Science at the University of California at Berkeley from 2002 to 2004, and thereafter he worked on virtualization at VMware. In 2005 he then became a researcher at Hitachi Global Storage Technologies, where he worked on hard disk drive and solid state drive architectures. Since 2010 Marco joined Pure Storage as a Principal Software Engineer.


Compositionality in system design: interfaces everywhere!

March 19, 2013, 4.10-5pm, Stavros Tripakis, UC Berkeley, USA.

[Slides]

Abstract: Compositional methods, which make it possible to assemble smaller components into larger systems both efficiently and correctly, are not simply a desirable feature in system design: they are a must for building large and complex systems. A key ingredient for compositionality is that of an "interface". An interface abstracts a component, exposing relevant information while hiding internal details. Motivated by cyber-physical system applications, in this talk I will first present methods for automatic bottom-up synthesis of interfaces from hierarchical models, which is needed for modular code generation from such models. I will then talk about interface theories, which can be seen as behavioral type theories, and are essential for incremental design (when can a component be replaced by another one without compromising the properties of the entire system?).

Bio: Stavros Tripakis obtained a PhD degree in Computer Science at the Verimag Laboratory in Grenoble, France, in 1998. He was a postdoc at UC Berkeley from 1999 to 2001, a CNRS Research Scientist at Verimag from 2001 to 2006, and a Research Scientist at Cadence Research Labs in Berkeley from 2006 to 2008. He has been an Associate Researcher at UC Berkeley since 2009, working on model-based and component-based design, verification, testing and synthesis. Dr. Tripakis was co-Chair of the 10th ACM & IEEE Conference on Embedded Software (EMSOFT 2010), Secretary/Treasurer of ACM SIGBED in 2009-2011, and is the current Vice-Chair of ACM SIGBED. His h-index, according to Google Scholar, is 35.


Components, Interfaces and Compositions: from SoCs to SOCs

March 26, 2013, 4.10-5pm, Partha Roop, University of Auckland, New Zealand.

[Slides]

Abstract: The widespread advancement of engineering may be attributed to the use of production lines that achieve "compositions of many components" to create complex systems. Examples can be seen in the mass production of computer chips to cars, where "reuse of components" is fundamental. Reuse is possible due to the standardization of the "interface" of the different components. However, reuse is impossible when manufacturers create "incompatible" interfaces.

In computer science, components and interfaces have played a fundamental role. Like conventional engineering domains, in computer science also, we see parallels: both success stories and difficulties in reuse. Successes are almost always related to standardized interfaces and associated algorithms for reuse and composition. In this talk, I will focus on two domains where standardized interfaces may be used successfully: System-on-a-Chip (SoC) and Service Oriented Computing (SOC). While these two domains are diverse and appear to be quite separate, I will present a unifying solution that fits many problems across both domains. In presenting this solution, I will elaborate on techniques for component reuse and component composition that are based on formal methods. In particular, I will rely on simulation relations, discrete controller synthesis, and model/module checking to solve classical issues in both these domains. I will then present some interesting reuse and composition case studies.

Bio: Partha is a Senior Lecturer and is currently the Program Director of the Computer Systems Engineering program at the University of Auckland, New Zealand. His research interests are in the area of Embedded Systems, Real-Time Systems and static timing analysis.

He is particularly interested in static analysis techniques for validation, safety and certification. Partha is an Associate Editor of Elsevier Journal on Embedded Hardware design (MICPRO) and Springer/EURASIP Journal on Embedded Systems. In 2009, he received the Alexander von Humboldt fellowship for experienced researchers from the Humboldt foundation, Germany. Partha has been a visiting professor at Iowa State University, INRIA (France), AIST (Japan) and Kiel University (Germany).


Models and Architectures for Heterogeneous System Design

April 16, 2013, 4.10-5pm, Andreas Gerstlauer, University of Texas at Austin, USA.

[Slides]

Abstract: With traditional boundaries between embedded and general-purpose computing blurring, both domains increasingly deal with complex, heterogeneous, software-intensive yet tightly constrained systems. This creates new challenges for modeling, synthesis and component design at the intersection of applications and architectures. For modeling of inherently dynamic behavior, simulations continue to play an important role. We first present an abstract, host-compiled modeling approach that provides a fast and accurate alternative to traditional solutions. By navigating tradeoffs in coarse-grain executions, full-system simulations in close to real time with more than 90% accuracy become possible. Models then provide the basis for automated design space exploration as part of a system compiler that can automatically synthesize parallel application models onto optimized hardware/software platforms. Finally, we will discuss results on co-design of novel domain-specific components for future heterogeneous platforms. We specifically explore fundamental tradeoffs between flexibility and specialization, and between error acceptance and energy in the design of high-performance yet low-power processors for linear algebra and signal processing applications.

Bio: Andreas Gerstlauer is an Assistant Professor in Electrical and Computer Engineering at The University of Texas at Austin. He received his Ph.D. in Information and Computer Science from the University of California, Irvine (UCI) in 2004. Prior to joining UT Austin in 2008, he was an Assistant Researcher in the Center for Embedded Computer Systems (CECS) at UC Irvine, leading a research group to develop electronic system-level (ESL) design tools. Dr. Gerstlauer is co-author on 3 books and more than 60 conference and journal publications, and his paper on OS modeling was reprinted as one of the most influential contributions in 10 years at DATE. His research interests include system-level design automation, system modeling, design languages and methodologies, and embedded hardware and software synthesis.


Pushback Rate Control: The Design and Field-Testing of an Airport Congestion Control Algorithm

April 23, 2013, 4.10-5pm, Hamsa Balakrishnan, Massachusetts Institute of Technology (MIT), USA.

Abstract: Increased congestion on the airport surface has increased taxi times, fuel consumption, and emissions. In this talk, I will describe how operational data can be used to develop and validate queuing models of airport operations. These models yield new insights on the effect of different factors such as weather, runway usage, and aircraft fleet mix on airport performance. They also help us predict the behavior of an airport under different operating conditions.

I will then show that these queuing models can be used to design Pushback Rate Control, a new airport congestion control technique to regulate the rate at which flights push back from their gates. The algorithm computes optimal pushback rates using approximate dynamic programming, but equally important, is a method that can be implemented in practice because it works in concert with human air traffic controllers. To evaluate the effectiveness and practicality of the algorithm, we conducted field tests with our implementation at Boston's Logan Airport. I will describe the results of these field tests and what we learned in the process.

Bio: Hamsa Balakrishnan is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT). She received a B.Tech in Aerospace Engineering from the Indian Institute of Technology, Madras and a PhD in Aeronautics and Astronautics from Stanford University. Her research interests address various aspects of air transportation systems, including algorithms for air traffic scheduling and routing, integrating weather forecasts into air traffic management and minimizing aviation-related emissions; air traffic surveillance algorithms; and mechanisms for the allocation of airport and airspace resources. She was a recipient of the NSF CAREER Award in 2008, the Kevin Corker Award for Best Paper of ATM-2011, the AIAA Lawrence Sperry Award in 2012, and the inaugural CNA Award for Operational Analysis in 2012.


Going beyond the FPGA with Spacetime

April 30, 2013, 4.10-5pm, Steve Teig, Tabula, USA.

Abstract: The idea of dynamically reconfiguring programmable devices fascinated Turing in the 1930s. In the early 1990s, DeHon pioneered dynamic reconfiguration within FPGAs, but neither his effort nor numerous subsequent ones, both academic and industrial, resulted in a useful and usable product. Over the last several years, we have significantly advanced the hardware, architecture, and software for rapidly reconfiguring programmable logic: going beyond the FPGA using a body of technology we call "Spacetime". Spacetime represents two spatial dimensions and one time dimension as a unified 3D framework: a powerful simplification that has enabled us to deliver in production a new category of programmable devices ("3PLDs") that are far denser, faster, and more capable than FPGAs, yet still accompanied by software that automatically maps traditional RTL onto these exotic fabrics. In developing Spacetime, we encountered and resolved many complex technical challenges that any dense, high-performance reconfigurable device must face, many of which seem never even to have been identified, much less addressed, by any prior effort. In this talk, I will identify some key limitations of FPGAs, introduce Spacetime as a means of addressing them, enumerate some of the many challenges we faced, and present solutions to a couple of them.

Bio: Steve Teig is the President and Chief Technology Officer of Tabula and the inventor of Tabula's Spacetime 3-Dimensional Programmable Logic Architecture. In 2012, Tabula was recognized with an Edison Award, an American Business Gold Award, inclusion in MIT's "TR50" list of the 50 most innovative companies worldwide, and as #3 in the Wall Street Journal's "Next Big Thing" list of the most promising venture-backed startup companies. Prior to founding Tabula, Steve was CTO of Cadence Design Systems, having joined Cadence through its acquisition of Simplex Solutions, where he was also CTO. Prior to Simplex, he was CTO of two successful biotechnology companies: CombiChem and BioCAD. Earlier in his career, he developed foundational logic simulation and place-and-route technologies that continue to have far-reaching influence. Steve received a B.S.E. degree in Electrical Engineering and Computer Science from Princeton University and holds over 250 patents. In 2011, he was awarded the World Technology Award for IT hardware innovation.


Reachability of Hybrid Systems in Space-Time

May 7, 2013, 4.10-5pm, Goran Frehse, Verimag Labs, University of Grenoble, France.

[Slides]

Abstract: Computing the reachable states of a hybrid system is hard, in part because efficient algorithms are available mainly for convex sets, while the states reachable by time elapse, so-called flowpipes, are generally nonconvex. When the dynamics are piecewise affine, the nonconvexity takes a particular form: when drawn along a time axis, the flowpipe is convex at any point in time. We exploit this property to lift efficient approximation techniques from convex sets to flowpipes. The approximation consists of a set of piecewise linear scalar functions over continuous time, which define at each time point a polyhedral inner- and outer approximation. Reachability operations that are difficult to carry out on nonconvex sets correspond to simple operations on these scalar functions. The resulting reachability algorithm is precise, scalable, and provides good error measurements. We present an implementation in the SpaceEx verification platform and showcase the performance on several examples.
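
As a rough illustration of time-indexed polyhedral bounds (not the SpaceEx algorithm itself), the sketch below computes outer bounds of the flowpipe of a linear ODE x' = Ax from a box of initial states at sampled time points. It uses the fact that the support function of e^{At}X0 in a direction d equals the support function of X0 in the direction (e^{At})^T d; the dynamics, initial box, and directions are assumptions made for the example:

```python
import numpy as np
from scipy.linalg import expm

# Outer bounds on the flowpipe of x' = A x at sampled times, for a box of
# initial states [lo, hi]. A toy illustration of time-indexed support-function
# bounds, not the SpaceEx algorithm.

A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])          # a lightly damped oscillator (assumed example)
lo = np.array([0.9, -0.1])            # lower corner of the initial box
hi = np.array([1.1, 0.1])             # upper corner of the initial box
directions = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
              np.array([0.0, 1.0]), np.array([0.0, -1.0])]

def box_support(d, lo, hi):
    """Support function of the box [lo, hi] in direction d."""
    return np.sum(np.where(d >= 0, d * hi, d * lo))

for t in np.linspace(0.0, 2.0, 5):
    Phi = expm(A * t)                  # state-transition matrix e^{At}
    bounds = [box_support(Phi.T @ d, lo, hi) for d in directions]
    # Each bound is the maximum of <d, x> over states reachable at time t.
    print(f"t={t:.2f}  x1 <= {bounds[0]:.3f}  x1 >= {-bounds[1]:.3f}  "
          f"x2 <= {bounds[2]:.3f}  x2 >= {-bounds[3]:.3f}")
```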

Bio: Goran Frehse is an assistant professor of computer science at Verimag Labs, University of Grenoble, France. He holds a Diploma in Electrical Engineering from Karlsruhe Institute of Technology, Germany, and a PhD in Computer Science from Radboud University Nijmegen, The Netherlands. He was a PostDoc at Carnegie Mellon University from 2003 to 2005. Goran Frehse developed PHAVer, a tool for verifying Linear Hybrid Automata, and is leading the ongoing development of SpaceEx, a verification platform for hybrid systems.


Mining Temporal Requirements of an Industrial-Scale Control System

May 14, 2013, 4.10-5pm, Alexandre Donzé, EECS, UC Berkeley.

[Slides]

Abstract: Industrial-scale control systems are increasingly developed in the model-based design paradigm. This typically involves capturing a plant model that describes the dynamical characteristics of the physical processes within the system, and a controller model, which is a block-diagram-based representation of the software used to regulate the plant behavior. In practice, plant models and controller models are highly complex as they can contain nonlinear hybrid dynamics, look-up tables storing pre-computed values, several levels of design hierarchy, design blocks that operate at different frequencies, and so on. Moreover, the biggest challenge is that system requirements are often imprecise, non-modular, evolving, or even simply unknown. As a result, formal techniques have been unable to digest the scale and format of industrial-scale control systems.

On the other hand, the Simulink modeling language -- a popular language for describing such models -- is widely used as a high-fidelity simulation tool in the industry, and is routinely used by control designers to experimentally validate their controller designs. This raises the question: "What can we accomplish if all we have is a very complex Simulink model of a control system?"

In this talk, we give an example of a simulation-guided formal technique that can help characterize temporal properties of the system, or guide the search for design behaviors that do not conform to "good behavior". Specifically, we present a way to algorithmically mine temporal assertions from a Simulink model. The input to our algorithm is a requirement template expressed in Parametric Signal Temporal Logic -- a formalism to express temporal formulas in which concrete signal or time values are replaced by parameters. Our algorithm is an instance of counterexample-guided inductive synthesis: an intermediate candidate requirement is synthesized from simulation traces of the system, which is refined using counterexamples to the candidate obtained with the help of a falsification tool. The algorithm terminates when no counterexample is found. Mining has many usage scenarios: mined requirements can be used to validate future modifications of the model, they can be used to enhance understanding of legacy models, and can also guide the process of bug-finding through simulations. (Joint work with X. Jin, J. Deshmukh and S. Seshia).
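
The loop below is a toy rendition of that counterexample-guided mining idea for the template "always |x(t)| <= c", where the parameter c is the unknown. The simulate and falsify functions are hypothetical stand-ins for a Simulink simulation and a falsification tool; they are not real APIs from the talk:

```python
import numpy as np

# Toy counterexample-guided mining of a parameter in the PSTL-style template
#   "always |x(t)| <= c"   (find the tightest c consistent with observed behaviors).
# simulate() and falsify() are hypothetical stand-ins, not real tool APIs.

def simulate(seed):
    """Hypothetical system simulation: returns one signal trace as a numpy array."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 10, 200)
    return np.sin(t) * (1.0 + 0.1 * rng.standard_normal())

def falsify(candidate_c, tries=50):
    """Hypothetical falsifier: search for a trace violating |x(t)| <= candidate_c."""
    for seed in range(1000, 1000 + tries):
        trace = simulate(seed)
        if np.max(np.abs(trace)) > candidate_c:
            return trace              # counterexample found
    return None                       # no counterexample within the search budget

# Inductive synthesis loop: start from a few traces, refine with counterexamples.
traces = [simulate(s) for s in range(5)]
while True:
    c = max(np.max(np.abs(tr)) for tr in traces)   # tightest candidate so far
    cex = falsify(c)
    if cex is None:
        break
    traces.append(cex)                              # refine with the counterexample

print(f"mined requirement: always |x(t)| <= {c:.3f}")
```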

Bio: Alexandre Donzé is a postdoctoral researcher in the Department of Electrical Engineering and Computer Science of UC Berkeley, working under the supervision of Sanjit Seshia. He holds a PhD in Mathematics and Computer Science from the University Joseph Fourier in Grenoble. From 2008 to 2009, he was also a postdoc at Carnegie Mellon University, working with Edmund Clarke and Bruce Krogh. The main goal of his research is to develop mathematical and computational tools for the analysis and the design of dynamical systems arising from different domains, in particular embedded systems (or software interacting with a physical environment), analog and mixed-signal circuits, and biological systems. He developed Breach, a toolbox that is today getting the attention of large groups in the automotive industry such as Toyota and Bosch, and (modestly) contributed to SpaceEx, the current leading model-checking tool for hybrid systems.


Logic of Hybrid Games

May 20, 2013, 4.10-5pm, Andre Platzer, Carnegie Mellon University, USA.

Abstract: Hybrid systems model cyber-physical systems as dynamical systems with interacting discrete transitions and continuous evolutions along differential equations. They arise frequently in many application domains, including aviation, automotive, railway, and robotics. We study hybrid games, i.e. games on hybrid systems combining discrete and continuous dynamics. Unlike hybrid systems, hybrid games allow choices in the system dynamics to be resolved adversarially by different players with different objectives.

This talk describes how logic and formal verification can be lifted to hybrid games. The talk describes a logic for hybrid systems called differential game logic dGL. The logic dGL can be used to study the existence of winning strategies for hybrid games. We present a simple sound and complete axiomatization of dGL relative to the fixpoint logic of differential equations. We prove hybrid games to be determined and their winning regions to require higher closure ordinals, and we identify separating axioms, i.e., axioms that distinguish hybrid games from hybrid systems.

Bio: Andre Platzer is an Assistant Professor in the Computer Science Department at Carnegie Mellon University. He is working on logic and verification for cyber-physical systems and led the development of the KeYmaera prover for hybrid systems. Andre Platzer has a Ph.D. from the University of Oldenburg, Germany, and received an ACM Doctoral Dissertation Honorable Mention and NSF CAREER award, and was named one of the Brilliant 10 Young Scientists by the Popular Science magazine and one of the AI's 10 to Watch by the IEEE Intelligent Systems Magazine.


Co-design of Control Algorithm and Embedded Platform for Building HVAC Systems

May 21, 2013, 4.10-5pm, Mehdi Maasoumy, EECS, UC Berkeley.

[Slides]

Abstract: The design of heating, ventilation and air conditioning (HVAC) systems is crucial for reducing energy consumption in buildings. As complex cyber-physical systems, HVAC systems involve three closely-related subsystems: the control algorithm, the physical building and environment, and the embedded implementation platform. In the traditional top-down approach, the control algorithm and the embedded platform are in general designed separately, leading to sub-optimal systems. We propose a co-design approach that analyzes the interaction between the control algorithm and the embedded platform through a set of interface variables. We present six control algorithms that take into account the sensing error, and model the relation of control performance and cost versus sensing error. We also capture the relation of embedded platform cost versus sensing error by analyzing data collected from a testbed. Based on these models, we explore the co-design of the control algorithm and the temperature sensing subsystem of the embedded platform to optimize with respect to energy cost and monetary cost while satisfying the constraints on user comfort level.
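
A minimal sketch of the kind of exploration this enables, over a single interface variable (temperature-sensing error): the sensor-cost table and energy-cost model below are invented for illustration and are not the models from the talk:

```python
# Toy exploration of the control/platform co-design space over one interface
# variable: temperature-sensing error (degrees C). All cost models are invented
# for illustration only; they are not the models presented in the talk.

sensing_options = {          # sensing error -> assumed platform (sensor) cost in $
    0.1: 900.0,
    0.25: 400.0,
    0.5: 180.0,
    1.0: 60.0,
}

def annual_energy_cost(sensing_error):
    """Assumed model: control performance degrades with sensing error,
    so the controller over-conditions and the energy cost grows."""
    baseline = 2000.0            # assumed $/year with a perfect sensor
    return baseline * (1.0 + 0.4 * sensing_error)

def total_cost(sensing_error, years=5):
    return sensing_options[sensing_error] + years * annual_energy_cost(sensing_error)

best = min(sensing_options, key=total_cost)
for err in sorted(sensing_options):
    print(f"error={err:>4} C  total 5-year cost = ${total_cost(err):,.0f}")
print(f"selected sensing error: {best} C")
```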

Bio: Mehdi Maasoumy is a PhD candidate in Mechanical Engineering at the University of California, Berkeley. His research interests include modeling and optimal control of linear, nonlinear, and hybrid systems with applications in energy-efficient buildings, the smart grid, and aircraft electric power systems. He has an M.Sc. degree from the University of California, Berkeley and a B.Sc. degree from Sharif University of Technology, both in Mechanical Engineering. He is the recipient of the Best Paper Award at the 4th ACM/IEEE International Conference on Cyber-Physical Systems and the Outstanding Preliminary Examination Award from UC Berkeley. More information on his work is available at: http://eecs.berkeley.edu/~maasoumy/


Schedulability, Deadlock Freedom, and Performance Analysis of Timed Actors

June 4, 2013, 4.10-5pm, Marjan Sirjani, Reykjavik University, Iceland.

[Slides]

Abstract: Asynchronous event-based systems can be modeled with little effort using Timed Rebeca. Network and computational delays, periodic events, and required deadlines can be expressed in the model. The language is actor-based and the syntax is Java-like. Model checking and simulation tools are built based on the formal semantics of the language, given as SOS rules. For deadlock-freedom and schedulability analysis, a so-called floating-time transition system is introduced, which reduces the state space by exploiting the isolation of method execution in the model. For performance analysis, a simulation tool is developed which uses McErlang as the backend. Simulation traces are saved in an SQL database for further analysis. Case studies are used to show the applicability of the language and the tools.

Bio: Marjan Sirjani is an Associate Professor at Reykjavik University. Marjan and her research group are pioneers in building model checking tools, compositional verification theories, and state-space reduction techniques for actor-based models. She has been working on analyzing actors since 2001 using the modeling language Rebeca (www.rebeca-lang.org). She has also worked on coordination languages and is the co-author of the paper proposing Constraint Automata as the first compositional formal semantics for the coordination language Reo. Marjan has been a PC member and PC chair of several international conferences including Coordination, FM, FMICS, ICFEM, and FSEN, and a guest editor for special issues of the journals Science of Computer Programming and Fundamenta Informaticae. Before joining academia as full-time faculty, she was the managing director of Behin System Company for more than ten years, developing software and providing system services. Marjan served as the head of the Software Engineering Department of the School of Electrical and Computer Engineering at the University of Tehran prior to joining the School of Computer Science at Reykjavik University in 2008.

Fall 2013 Seminars

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Fall 2013

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays from 4.10-5.00 p.m. in 521 Cory Hall (Hogan Room).

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact Armin Wasicek.

Seminars from previous semesters can be found here.

NOTE: The link to the survey is here. Results will be posted after Dec 17. Thanks for participating!

Schedule

Dan Holcomb August 27, 2013
Chung-Wei Lin September 10, 2013
Christos Stergiou September 16, 2013
Armin Wasicek September 24, 2013
Alvaro Cardenas October 04, 2013
John Rushby October 08, 2013
Valerie Issarny October 15, 2013
Michael Tiller October 22, 2013
David Kleidermacher October 29, 2013
Wenchao Li November 12, 2013
Jorge Ortiz November 19, 2013
Alberto Puggelli November 26, 2013
Philipp Hennig December 03, 2013
Mehdi Maasoumy December 10, 2013

Formal Verification and Synthesis for Quality-of-Service in On-Chip Networks

Aug 27, 2013, 4.10-5pm, Dan Holcomb, UC Berkeley.

Slides

Abstract

Quality-of-Service (QoS) in on-chip communication networks has a tremendous impact on overall system performance in today's era of ever-increasing core counts. Yet, Networks-on-Chip (NoCs) are still designed and analyzed using RTL simulation or highly abstracted analytical models. Formal methods, used successfully for many other verification tasks, have traditionally not found use in QoS verification due to the scale of the problems of interest, such as verifying latency bounds of hundreds of cycles for large on-chip networks.

In this dissertation talk, I will present recent work toward leveraging formal methods for NoC synthesis and QoS verification. In particular, we address the problems of (1) optimal buffer sizing using a counterexample-guided approach, (2) using abstraction to verify end-to-end latency bounds in a mesh network, and (3) scalable latency verification using compositional inductive proofs.

Bio:

Dan Holcomb is a PhD candidate at UC Berkeley, advised by Professor Sanjit Seshia. His dissertation work has been developed with input received during two internships at Intel Strategic CAD Lab. Dan holds BS and MS degrees from UMass Amherst.


Cyber-Security for Controller Area Network and its Security-Aware Mapping

Sep 10, 2013, 4.10-5pm, Chung-Wei Lin, UC Berkeley.

Slides

Abstract

Cyber-security is a rising issue for automotive electronic systems, and it is critical to system safety and dependability. Current in-vehicle architectures, such as those based on the Controller Area Network (CAN), do not provide direct support for secure communications. In this work, we propose a security mechanism to help prevent cyber-attacks (masquerade and replay) in vehicles with architectures based on CAN. When retrofitting those architectures with security mechanisms, a major challenge is to ensure that system safety will not be hindered, given the limited computation and communication resources. We therefore propose an optimal Mixed Integer Linear Programming (MILP) formulation for solving the mapping problem to meet both the security and the safety requirements. To the best of our knowledge, this is the first work to address security and safety in an integrated formulation in the design automation of automotive electronic systems. Experimental results of an industrial case study show the effectiveness of our approach.

Bio:

Chung-Wei Lin received the B.S. degree in computer science and the M.S. degree in electronics engineering from the National Taiwan University. He is currently a 5th-year Ph.D. student in the EECS Department, UC Berkeley. He is in the group of Professor Alberto Sangiovanni-Vincentelli. His research interest includes embedded systems and computer-aided design. He was a summer intern of General Motors in 2011, 2012, and 2013.


Schedulability and Verification of Real-Time Discrete-Event Systems

Sep 16, 2013, 10-11am, Christos Stergiou, UC Berkeley.

Slides

Abstract

Cyber-physical systems are systems where there is a tight interaction between the computing world and the physical world. In spite of the significance of time in the dynamics of the physical world, real-time embedded software today is commonly built using programming abstractions with little or no temporal semantics. PTIDES (Programming Temporally Integrated Distributed Embedded Systems) is a programming model whose goal is to address this problem. It proposes a model-based design approach for the programming of distributed real-time embedded systems, in which the timing of real-time operations is specified as part of the model. This is accomplished by using as an underlying formalism a discrete-event (DE) model of computation, which is extended with real-time semantics.

We address the schedulability question for PTIDES programs and uniprocessor platforms. A PTIDES program is schedulable if for all legal sensor inputs, there exists a scheduling of the PTIDES components that meets all the specified deadlines. The timing specifications allowed in the discrete-event formalism can be seen as a generalization of end-to-end latencies which are usually studied in the hard real-time computing literature. This results in a rather idiosyncratic schedulability problem. We first show that for a large subset of discrete-event models, the earliest-deadline-first scheduling policy is optimal. Second, we show that all but a finite part of the infinite state space of a PTIDES execution results in demonstrably unschedulable behaviors. As a result the schedulability problem can be reduced to a finite-state reachability problem. We describe this reduction using timed automata.
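
For readers unfamiliar with EDF, the sketch below simulates earliest-deadline-first execution of a fixed job set on a uniprocessor and reports whether any deadline is missed. It is a generic illustration of the scheduling policy, not the PTIDES schedulability analysis described above; the workload is an assumed example:

```python
import heapq

# Minimal uniprocessor EDF simulation over a fixed set of released jobs
# (release time, execution time, absolute deadline). A generic illustration of
# earliest-deadline-first scheduling; the workload below is an assumed example.

jobs = [
    (0.0, 2.0, 5.0),
    (1.0, 1.0, 4.0),
    (2.0, 2.0, 10.0),
]

def edf_meets_all_deadlines(jobs):
    pending = []          # min-heap of (absolute deadline, remaining execution)
    time = 0.0

    def run_until(horizon):
        """Run pending work in EDF order up to the given time horizon.
        Returns False as soon as a job completes after its deadline."""
        nonlocal time
        while pending and time < horizon:
            deadline, remaining = heapq.heappop(pending)
            slice_ = min(remaining, horizon - time)
            time += slice_
            if remaining - slice_ > 1e-12:
                heapq.heappush(pending, (deadline, remaining - slice_))
            elif time > deadline + 1e-12:
                return False
        return True

    for release, exec_time, deadline in sorted(jobs):
        if not run_until(release):
            return False
        time = max(time, release)
        heapq.heappush(pending, (deadline, exec_time))
    return run_until(float("inf"))   # drain the remaining work

print("schedulable under EDF:", edf_meets_all_deadlines(jobs))
```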

We next turn to the verification problem for DE systems themselves. We study a basic deterministic DE model, where actors are simple constant-delay components, and two extensions of it: first, one where actors are not constant but introduce non-deterministic delays, and second, one where actors are either deterministic delays or are specified using timed automata. We investigate verification questions on DE models and examine expressiveness relationships between the DE models and timed automata.

Finally, we discuss extensions of this work to cover a wider range of discrete-event actors, and propose an approach to design more efficient and practical analyses for schedulability testing.

Bio:

Christos Stergiou is a computer science PhD candidate at UC Berkeley, advised by Edward A. Lee and Koushik Sen. His research interests are in verification and synthesis of distributed and real-time systems. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens.


The ACROSS MPSoC: A New Generation of Multi-Core Processors Designed for Safety-Critical Embedded Systems

Sep 24, 2013, 4.10-5pm, Armin Wasicek, UC Berkeley, Vienna University of Technology, Austria.

Slides

Abstract

The ACROSS MPSoC represents the latest milestone in a line of research to develop a dependable multi-core processor platform that can host mixed-criticality applications with hard real-time constraints. The key technical innovation is a time-triggered Network-on-a-Chip (TTNoC), which provides temporal determinism and temporal and spatial partitioning. The ACROSS MPSoC supports applications up to the highest criticality classes, and facilitates efficient mixed-criticality integration and certification. In this talk, we will present the guiding design principles of the TTNoC that are inherited from the Time-Triggered Architecture (TTA). Next, we will show how these core concepts solve some of the issues that current multi-core architectures suffer from in the real-time domain.

Bio:

Armin Wasicek is a Postdoctoral Researcher in the EECS Department at UC Berkeley. His research interests include Real-Time Systems, Model-based Design, the Time-Triggered Architecture, and Security. Armin received a Doctoral degree in 2012 and a Master's degree in 2006, both from Vienna University of Technology. His dissertation was on "Security in Time-Triggered Systems", supervised by Prof. Hermann Kopetz. In 2010, Armin received a prize for "Innovation in Teaching" from TU Vienna for the lab course "Embedded Systems Engineering". In 2012, Armin was awarded a Marie Curie IOF Fellowship.

From 2010-2012, Armin worked in the ACROSS project as work package leader for WP2 "Middleware and System Components". Previously, he was a member of the RTCA SC-216 on aeronautical system security and he contributed to many national and international research projects, e.g., GENESYS, DECOS, TTSoC.


Short and Long-Term Research Challenges for Protecting Critical Infrastructure Systems

Oct 04, 2013, 2-3pm, Alvaro Cardenas, University of Texas at Dallas.

Slides

ROOM: 540 Cory Hall

Round Table

We have several lines of research for securing our critical infrastructures. One line of research tries to expand traditional security mechanisms with control-theoretic algorithms to prevent, or to detect and respond to, attacks targeting the manipulation of sensors and actuators of control systems. Another line of research explores the use of Big Data analytics for anomaly detection in smart grid networks. We are also working on risk assessment and incentives for asset owners to invest in protecting the smart grid and other critical infrastructures. Finally, big data in the smart grid also raises the issue of consumer privacy, as consumer behavior will be available at unprecedented levels of granularity. We are working on privacy-preserving data collection mechanisms.

Bio:

Alvaro A. Cardenas is an Assistant Professor at the University of Texas at Dallas. He holds M.S. and Ph.D. degrees from the University of Maryland, College Park, where he was advised by John Baras, and a B.S. from Universidad de los Andes.

Prior to joining UT Dallas he was a research staff member with Fujitsu Laboratories of America in Sunnyvale, CA, where he did research in Trusted Computing, Big Data, and Smart Grids; and prior to Fujitsu, he was a postdoctoral scholar at the University of California at Berkeley, advised by Shankar Sastry and Doug Tygar, doing research in cyber-physical systems security and applications of Machine Learning to Intrusion Detection.

He has also been an invited visiting professor at the University of Cagliari in Italy where he taught a class on machine learning and game theory for security, an Intern working on discriminatory Bayesian networks at INRIA-LORIA in France, and a SCADA intern, working on ladder logic to replace old relay boxes in Occidental Petroleum Corporation in Cano Limon, Cobenas, Colombia.

He has received numerous awards for his research, including a best paper award from the U.S. Army Research Office, a best presentation award from the IEEE, a graduate school fellowship from the University of Maryland, and a Distinguished Assistantship from the Institute for Systems Research.


The Challenge of High-Assurance Software

Oct 08, 2013, 4.10-5pm, John Rushby, SRI International, Computer Science Laboratory.

Slides

Abstract

It is difficult to build complex systems that (almost) never go (badly) wrong, yet this is what we expect of airplanes and pacemakers and the phone system. In essence, we have to anticipate everything that could fail or go wrong, develop countermeasures, and then provide compelling evidence that we have done all this correctly.

I will outline some of the intellectual challenges in the construction of suitable evidence, particularly as applied to software. I will introduce the idea of "possibly perfect" software and its associated "probability of perfection" and describe how these relate to correctness and reliability. I will then show how these ideas can be used to provide a credible basis for certification.
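
One way to see how a probability of perfection connects to reliability is the following conditional-probability decomposition, stated here informally along the lines of the Littlewood-Rushby argument (the notation is mine, not the talk's):

```latex
% Let p_{np} be the assessed probability that the software is NOT perfect, and
% let f_{np} be its probability of failure on a randomly selected demand,
% conditional on it being imperfect. Since perfect software never fails,
\[
  P(\text{failure on a demand})
  \;=\; \underbrace{P(\text{fail}\mid\text{perfect})}_{=\,0}\,(1-p_{np})
  \;+\; \underbrace{P(\text{fail}\mid\text{imperfect})}_{=\,f_{np}}\;p_{np}
  \;=\; f_{np}\,p_{np} \;\le\; p_{np}.
\]
```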

I will also describe epistemic and logic uncertainties in high-assurance software and speculate on the relation between these and the notion of resilience.

Much of this material is based on joint work with Bev Littlewood and others at City University, UK. Some of it is described in a paper available here.

Bio:

Dr. John Rushby is a Program Director and SRI Fellow with the Computer Science Laboratory of SRI International in Menlo Park, California, where he leads its research program in formal methods and dependable systems.

He joined SRI in 1983 and served as director of its Computer Science Laboratory from 1986 to 1990. Prior to that, he held academic positions at the Universities of Manchester and Newcastle upon Tyne in England. He received BSc and PhD degrees in computing science from the University of Newcastle upon Tyne in 1971 and 1977, respectively.

His research interests center on the use of formal methods for problems in the design and assurance of safe, secure, and dependable systems.

John Rushby received the IEEE Harlan D Mills Award in 2011 "for practical and fundamental contributions to Software & Hardware Reliability with seminal contributions to computer security, fault tolerance, and formal methods" and, together with Sam Owre and N. Shankar, the CAV Award in 2012 "for developing PVS (Prototype Verification System) which, due to its early emphasis on integrated decision procedures and user friendliness, significantly accelerated the application of proof assistants to real-world verification problems."


Emergent Middleware Facing the Interoperability Challenge

Oct 15, 2013, 4.10-5pm, Valerie Issarny, INRIA Paris - Rocquencourt, France.

Slides

Abstract

Interoperability is the ability for two systems to exchange, understand and use each other's data, and is a long-standing problem in the field of distributed systems. However, the emergence of pervasive computing and the Internet of Things have brought about new challenges to achieving universal interoperability. Extreme heterogeneity and spontaneous interactions are characteristics of today's complex distributed systems.

To face the above challenges, we have introduced the notion of Emergent Middleware as part of the European CONNECT project. Emergent middleware ensures interoperation between two networked systems by combining message interoperability, i.e., the ability to interpret messages from/toward networked systems, and behavioral interoperability, i.e., the ability to mediate the interaction protocols run by the communicating networked systems, under specified non-functional properties, e.g., reliability, performance and security. On-the-fly production of emergent middleware then relies on the following key enablers within the network: discovery of networked systems, learning of interaction protocols, and synthesis of protocol mediators.

In this talk, after an overview of the Emergent Middleware principles, I will detail the proposed solution to automated mediator synthesis, which is based on ontology reasoning and constraint programming in order to infer mappings between components' interfaces automatically. These mappings guarantee semantic compatibility between the operations and data of the interfaces. Then, we analyse the behaviors of components in order to synthesise, if possible, a mediator that coordinates the computed mappings so as to make the components interact properly. Our approach is formally grounded to ensure the correctness of the synthesized mediator. We demonstrate the validity of our approach by implementing the MICS (Mediator synthesIs to Connect componentS) tool and experimenting with it on various real-world case studies.

Bio:

Valerie Issarny is "Directrice de recherche" at Inria where she heads the ARLES Inria research project-team at the Paris-Rocquencourt center. Her research interests relate to distributed systems, software architectures, pervasive computing/ambient intelligence systems and middleware. Her research is currently focused on the study of middleware-based systems for the mobile Internet of Things, with the study of scalable and interoperability solutions for the target systems. Valerie is further the current scientific manager of the Inria@SiliconValley program that aims at strengthening research collaborations between Inria and Californian Universities.


Model-Based System Engineering: Past, Present and Future

Oct 22, 2013, 4.10-5pm, Michael Tiller, Xogeny.

Slides

Abstract

Model-based System Engineering has been recognized, for some time, as a way for companies to improve their product development processes. However, change takes time in engineering and we still have only scratched the surface of what is possible. New ideas and technologies are constantly emerging that can improve a model-based approach. In this talk, I will discuss some of my past experiences with model-based system engineering in the automotive industry. I'll also discuss the shifts I see from numerical approaches to more symbolic approaches and how this manifests itself in a shift from imperative representations of engineering models to more declarative ones. I'll cover some of the interesting challenges I've seen trying to model automotive systems and how I think those challenges can be overcome moving forward. Finally, I'll talk about some of the exciting possibilities I see on the horizon for modeling.

Bio:

Dr. Michael Tiller received his Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign in 1995. After graduating, he worked at Ford Motor Company in the Powertrain Research Department. His work focused on modeling of engine and transmission systems. In particular, he worked on many applications involving the Ford Hybrid Escape, the first production hybrid in North America and the first production hybrid SUV in the world. He is an author of 8 patents worldwide for his work on the Ford Hybrid Escape. Dr. Tiller left Ford in 2005 to join Emmeskay, a Michigan-based engineering consulting company, as Vice-President of Modeling R&D. In 2010, Emmeskay was acquired by LMS International. In 2011, Dr. Tiller joined Dassault Systèmes HQ in Paris to become Worldwide Director of Marketing for PLM-Systems. In August 2012, he started his own company, Xogeny, to help companies accelerate their model-based system engineering processes through consulting and tool development.

Since 1999, Dr. Tiller has been involved with the Modelica modeling language and has been a member of the Modelica Association board since it was formed. In 2001 he wrote the first book on Modelica, "Introduction to Physical Modeling with Modelica". He has been active in the ongoing development of the Modelica language, the Modelica Standard Library, and the recent FMI standard. In October 2012, he launched a Kickstarter campaign to fund the writing of a new Modelica book to be released under a Creative Commons license.


Perspectives on the future of trustworthy mobile computing

Oct 29, 2013, 4.10-5pm, David Kleidermacher, Green Hills Software.

Slides

Abstract

This talk will discuss the gap between contemporary mobile security and what is needed to realize a world in which our mobile devices replace our wallets, credit cards, passports, home keys, medical device controllers, and voting kiosks. Topics include trusted platforms, high-assurance software development, and mobile virtualization.

Bio:

David Kleidermacher is Chief Technology Officer at Green Hills Software where he is responsible for technology strategy, platform planning, and solutions design. Mr. Kleidermacher is a leading authority in systems software and security, including secure operating systems, virtualization technology, and the application of high robustness security engineering principles to solve computing infrastructure problems. Mr. Kleidermacher earned his bachelor of science in computer science from Cornell University and is a frequent speaker and writer on technology subjects; he is the author of "Embedded Systems Security" (Elsevier, 2012).


Specification Mining - New Formalisms, Algorithms and Applications

Nov 12, 2013, 4.10-5pm, Wenchao Li, UC Berkeley.

Slides

Abstract

Specification is the first and arguably the most important step for formal verification and correct-by-construction synthesis. These tasks require understanding precisely a design's intended behavior, and thus are only effective if the specification is created right. For example, much of the challenge in bug finding lies in finding the specifications that mechanized tools can use to find bugs. It is extremely difficult to manually create a complete suite of good-quality formal specifications, especially given the enormous scale and complexity of designs today. Many real-world experiences indicate that poor or insufficient specifications can easily lead to missed critical bugs, and in turn to design re-spins and time-to-market slips.

This dissertation presents research that mitigates this manual and error-prone process through automation. The overarching theme is specification mining - the process of inferring likely specifications by observing a design's behaviors. In the first part of the talk, I will describe two approaches to mine specifications dynamically from simulation or execution traces. The first approach offers a simple but effective template-based remedy to the aforementioned problem. The second approach presents a novel formalism of specification mining based on the notion of sparse coding, which can learn latent structures in an unsupervised setting. I will discuss tradeoffs and synergies of the two approaches and show how the mined specifications can be used to localize bugs effectively.
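
As a toy illustration of the template-based flavor of dynamic mining, the snippet below instantiates a bounded-response template ("whenever req is high, grant follows within N cycles") with the smallest N consistent with a set of traces. The traces are invented, and this is not the dissertation's actual tool:

```python
# Toy template-based miner over simulation traces: instantiate the template
#   "whenever req is high, grant becomes high within at most N cycles"
# with the smallest N consistent with every observed trace. The traces below
# are invented stand-ins for simulation dumps.

traces = [
    {"req":   [0, 1, 0, 0, 1, 0, 0, 0],
     "grant": [0, 0, 0, 1, 0, 1, 0, 0]},
    {"req":   [1, 0, 0, 0, 0, 1, 0, 0],
     "grant": [0, 1, 0, 0, 0, 0, 0, 1]},
]

def worst_response(trace):
    """Largest observed req-to-grant latency in one trace (None if a req is never granted)."""
    worst = 0
    for t, req in enumerate(trace["req"]):
        if req:
            latencies = [u - t for u in range(t, len(trace["grant"])) if trace["grant"][u]]
            if not latencies:
                return None
            worst = max(worst, min(latencies))
    return worst

per_trace = [worst_response(tr) for tr in traces]
if None in per_trace:
    print("template cannot be instantiated: some request is never granted")
else:
    print(f"mined assertion: always (req -> eventually<={max(per_trace)} grant)")
```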

In the second part of the talk, I will focus on the problem of synthesis from temporal logic specifications. While offering the attractive proposition of correct-by-construction design, this synthesis approach completely relies on the user to specify not only the intended behaviors of the system but also the assumptions on the environment. Environment assumptions are especially hard to get right since they are often implicit knowledge and are seldom documented. We address this problem by learning assumptions from the counterstrategies of an unrealizable specification to systematically guide it towards realizability. We further show that the proposed counterstrategy-guided assumption mining framework enables the automatic synthesis of a new class of semi-autonomous controllers, called human-in-the-loop (HuIL) controllers. Lastly, I will discuss two efforts on broadening the scope of specification mining with human inputs, one through gamification and crowdsourcing and the other through natural language processing.

Bio:

Wenchao Li is a Ph.D. candidate in the Department of Electrical Engineering and Computer Science (EECS) at UC Berkeley, advised by Prof. Sanjit A. Seshia. He received his B.S. in EECS and B.A. in Economics also from UC Berkeley. He is broadly interested in designing, analyzing and improving dependable computing systems. He dreams of a world of bug-free designs.


A Platform Architecture for Sensor Data Processing and Verification in Buildings

Nov 19, 2013, 4.10-5pm, Jorge Ortiz, UC Berkeley.

Slides

Abstract

Buildings consume 72% of the electricity and nearly half of the total energy produced in the United States. In order to improve their performance, deep sampling and analytics must be deployed at scale. However, each building is uniquely designed and highly heterogeneous -- limiting portability and preventing widespread deployment of solutions.

In this talk, we discuss a platform architecture with two core services: a file system service and a verification service. The former offers a common namespace and standard access method for building sensors/actuators, their data, and data processing jobs. It also allows application developers to write portable code. The latter is a verification service that verifies various system behavioral patterns and physical relationships between sensors in order to maintain an accurate representation of the building in software. We evaluate the effectiveness of our architecture and verification techniques through several real-world deployments across 7 buildings in two countries that support over half a dozen applications.

Bio:

Jorge Ortiz is a PhD Candidate in the Computer Science division at the University of California at Berkeley, working with Professor David Culler on operating systems and networking. He has an M.S. in Computer Science from UC Berkeley and a B.S. in Electrical Engineering and Computer Science from M.I.T. His research interests include energy-efficient buildings, sensor networks, and cyber-physical systems. He spent several years at Oracle and IBM prior to his time at Berkeley and worked on nano-satellite software at Nanosatisfi while studying at Berkeley. Jorge is a member of the Software Defined Buildings (SDB) group at UC Berkeley and will be joining IBM Research at the end of this year.


Polynomial-Time Verification of PCTL Properties of MDPs with Convex Uncertainties and its Application to Cyber-Physical Systems

Nov 26, 2013, 4.10-5pm, Alberto Puggelli, UC Berkeley.

Slides

Abstract

Accurate modeling of cyber-physical systems (CPS) is a notoriously challenging problem due, among other reasons, to the intrinsic limitations in representing the physical part of the system, whose behavior can only be inferred through finite-resolution measurements. The problem is further exacerbated in the context of the formal verification of these systems, where the generated guarantees on the system properties are only as accurate as the model itself. In this talk, we present a framework that allows the formal verification of quantitative properties of CPS while taking into account uncertainties in the modeling process. We first present the model of Convex Markov Decision Processes (CMDPs), i.e., MDPs in which transition probabilities are only known to lie in convex uncertainty sets. These convex sets can be used to formally capture the uncertainties that are present in the modeling process. Using results on strong duality for convex programs, we then present a verification algorithm for Probabilistic Computation Tree Logic (PCTL) properties of CMDPs, and prove that it runs in time polynomial in the size of a CMDP for a rich subclass of convex uncertainty models. This result allows us to lower the previously known algorithmic complexity upper bound for PCTL properties of CMDPs from co-NP to PTIME. We conclude the talk by describing how we recently applied the proposed technique to the verification of the behavior of car drivers performing complex maneuvers. Results show that we can quantitatively evaluate the performance of the drivers depending on their attention level, i.e., whether they are attentive or distracted while driving. The proposed framework is currently being added to the public distribution of PRISM, the state-of-the-art tool for the probabilistic verification of stochastic systems.
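
To fix intuition, a simplified statement of the underlying robust value-iteration recursion for the worst-case probability of reaching a set B of bad states is sketched below; this is a paraphrase of the standard formulation, not the paper's exact algorithm:

```latex
% Worst-case (maximal) probability of reaching a set B of bad states in a CMDP:
% value iteration in which an adversary also picks a distribution from the
% convex uncertainty set P(s,a) at every step.
\[
  x_{k+1}(s) \;=\;
  \begin{cases}
    1 & \text{if } s \in B,\\[2pt]
    \displaystyle \max_{a \in A(s)} \;\max_{p \in \mathcal{P}(s,a)} \;\sum_{s'} p(s')\, x_k(s')
      & \text{otherwise,}
  \end{cases}
  \qquad x_0(s) = \mathbf{1}[s \in B].
\]
% The inner maximization over p is a convex program; exploiting strong duality
% for suitable uncertainty classes is what keeps each iteration, and hence the
% overall PCTL model-checking procedure, polynomial time.
```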

Bio:

Alberto Puggelli received the B.Sc. degree and the double M.Sc. degree in Electrical Engineering (summa cum laude) from the Polytechnic of Milan and the Polytechnic of Turin, Italy, in 2006 and 2008, respectively. In 2009, he was with ST-Ericsson as an intern analog designer. In 2009, he joined the EECS department at the University of California at Berkeley, where he is currently pursuing the Ph.D. degree. From 2011 to 2013, he held intern positions in the R&D and product divisions at Texas Instruments (Kilby Labs) and Lion Semiconductor. Alberto is the recipient of two Best Student awards at the Polytechnic of Milan, the AEIT "Isabella Sassi Bonadonna" fellowship for the year 2010, and the 2012 Margarida Jacome GSRC Best Poster Award. He also serves as a reviewer for the IEEE Transactions on Circuits and Systems II. His research interests include methodologies for the design of energy-efficient SoCs and techniques for the formal verification of nonlinear and stochastic hybrid systems.


Towards Probabilistic Numerical Methods

Dec 03, 2013, 1.10-2pm, Philipp Hennig, Max Planck Institute for Intelligent Systems, Tübingen, Germany.

Slides

TIME: 1-2pm

Abstract

Algorithms for optimization, quadrature, and the solution of differential equations are a mainstay of applied mathematics. During the past century they evolved into mature, separated fields. Machine learning is just an upstart compared to these sophisticated disciplines, but it offers a uniquely unified view on them: From the point of view of machine learning, numerical methods are learning methods, which infer an intractable ("latent") property (roots, integrals, ODE solutions) of a function, by combining structural assumptions (a "prior") with evaluations ("data") of the function at finitely many points. I will present some recent work on the kinds of insights this view allows, based on the specific examples of quasi-Newton methods in optimization, and the Runge-Kutta family for ODEs. I will argue that treating numerical methods as learning algorithms opens up avenues to unification of ideas, functional extensions, and performance improvements. While the connection between numerical methods and inference has been known since Poincaré, it is only beginning to be seriously studied now, so much of the research I will present is in early stages. It opens up more, and more fundamental, questions than it answers.
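
As a small, self-contained example of the "numerics as inference" view, the sketch below performs one-dimensional Bayesian quadrature: the integrand is modeled as a Gaussian process with a squared-exponential kernel, and the posterior mean of its integral over [0, 1] is read off in closed form. The kernel, length scale, and integrand are assumptions for illustration, not methods from the talk:

```python
import numpy as np
from scipy.special import erf

# Toy 1-D Bayesian quadrature: treat the integrand as a GP with a squared-
# exponential kernel, condition on a few evaluations, and read off the
# posterior mean of its integral over [0, 1].

ell = 0.3                                    # kernel length scale (assumed)
f = lambda x: np.sin(3 * x) + x ** 2         # example integrand (assumed)
X = np.linspace(0.0, 1.0, 7)                 # evaluation nodes ("data")
y = f(X)

def k(a, b):                                 # squared-exponential kernel
    return np.exp(-(a - b) ** 2 / (2 * ell ** 2))

# Kernel mean embedding z_i = integral over [0,1] of k(x, X_i) dx (closed form via erf).
z = ell * np.sqrt(np.pi / 2) * (erf((1 - X) / (np.sqrt(2) * ell))
                                + erf(X / (np.sqrt(2) * ell)))
K = k(X[:, None], X[None, :]) + 1e-10 * np.eye(len(X))   # jitter for stability

posterior_mean = z @ np.linalg.solve(K, y)   # E[ integral of f over [0,1] | data ]
print("BQ estimate :", posterior_mean)
print("true value  :", (1 - np.cos(3)) / 3 + 1 / 3)
```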

Bio:


Controlling Energy-Efficient Buildings in the Context of the Smart Grid: A Cyber Physical System Approach.

Dec 10, 2013, 4.10-5pm, Mehdi Maasoumy, UC Berkeley.

Slides

Abstract

The building sector is responsible for about 40% of energy consumption, 40% of greenhouse gas emissions, and 70% of electricity use in the US. Over 50% of the energy consumed in buildings is directly related to space heating, cooling and ventilation. Optimal control of heating, ventilation and air conditioning (HVAC) systems is crucial for reducing energy consumption in buildings.

In the first part of the talk, we present a novel Parameter Adaptive Building (PAB) model framework to update the model parameters on the fly and reduce model uncertainty. We then present a Model Predictive Control (MPC) algorithm, a Robust Model Predictive Control (RMPC) algorithm, and a methodology for selecting a controller type, i.e., RMPC, MPC, or rule-based control (RBC), as a function of building model uncertainty.

In the second part of the talk, we address the "Cyber-Physical" aspect of a building HVAC system in the design flow. We present a co-design framework that analyzes the interaction between the control algorithm and the embedded platform through a set of interface variables, and demonstrate how the design space is explored to optimize the energy cost and monetary cost, while satisfying the constraints for user comfort level.

The third part of my talk will be centered on the role of Smart Buildings in the context of Smart Grid. Commercial buildings have inherent flexibility in how their HVAC systems consume electricity. We first propose a means to define and quantify the flexibility of a commercial building. We then present a contractual framework that could be used by the building operator and the utility to declare flexibility on the one side and reward structure on the other side. We also present a control mechanism for the building to decide its flexibility for the next contractual period to maximize the reward, given the contractual framework. Finally, at-scale experiments are presented to demonstrate the feasibility of the proposed algorithm.

Bio:

Mehdi Maasoumy is a PhD Candidate in the Mechanical Engineering Department of the University of California, Berkeley. His research interests include modeling and model-based optimal control of linear, nonlinear and hybrid systems, with applications in energy-efficient buildings, the smart grid, and aircraft electric power distribution systems. He received a B.S. degree from Sharif University of Technology in 2008 and an M.S. degree from the University of California, Berkeley in 2010, both in Mechanical Engineering. He is the recipient of the Best Student Paper Award at the International Conference on Cyber-Physical Systems (ICCPS) 2013, a Best Student Paper Award Finalist at the ASME Dynamics and Control System Conference (DSCC) 2013, and the Outstanding Preliminary Examination Award from the University of California, Berkeley.


Spring 2014 Seminars

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Spring 2014

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays from 4.10-5.00 p.m. in 531 Cory Hall (Wang Room).

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact Armin Wasicek. If you want to subscribe to our mailing list, please drop me a line.

Seminars from previous semesters can be found here.

Schedule

Wilfried Steiner January 21, 2014
Marc Pouzet February 11, 2014
Natarajan Shankar March 04, 2014
Radu Grosu March 12, 2014
Matt Gerring March 18, 2014
Thomas Watteyne April 08, 2014
Jose Echeveste April 15, 2014
Joseph Ng April 22, 2014

Deterministic Ethernet as Reliable Communication Infrastructure for Distributed Dependable Systems

Jan 21, 2014, 4.10-5pm, Wilfried Steiner, TTTech Computertechnik AG, Vienna, Austria.

Slides

Abstract

Distributed dependable systems are omnipresent in our daily lives, and are becoming increasingly large and complex. As a consequence of this trend it is apparent that the correct development of such complex systems requires a sound architectural basis. In the absence of architectures we will either build systems of insufficient quality or will simply not be able to build systems beyond a certain level of complexity at all. The time-triggered architecture (TTA) is a successful example of an architecture for dependable embedded systems as it tremendously simplifies the development of distributed dependable systems.

In this talk we will discuss TTEthernet, a deterministic Ethernet version, which implements the time-triggered architecture. We will introduce how its deterministic behavior is achieved, as well as how it can be leveraged by applications, such as safety-related automotive applications. Furthermore, industries in general and the automotive industry in particular often prefer standardized technologies, e.g., to ensure a multitude of product suppliers. Hence, we will also address automotive requirements in the networking domain and how these and TTEthernet influence the ongoing developments in network standardization groups (e.g., IEEE 802.1).

Bio:

Wilfried Steiner is Corporate Scientist at TTTech Computertechnik AG. He holds a degree of Doctor of Technical Sciences from the Vienna University of Technology, Austria. He is one of the core researchers of the TTEthernet technology and has been the industrial Editor of the SAE AS6802 standard. He is currently also a voting member in IEEE 802.1. He has been awarded a Marie Curie Outgoing Fellowship from 2009 to 2012 hosted by SRI International in Palo Alto. His research is focused on the development of algorithms and services that enable dependable communication in cyber-physical systems and applied formal methods.


Zelus, a synchronous language with Ordinary Differential Equations

Feb 11, 2014, 4.10-5pm, Marc Pouzet, INRIA, Paris-Rocquencourt, France.

Slides

Abstract

Zelus is a new programming language for modeling systems that mix discrete logical time and continuous time behaviors. From a user's perspective, its main originality is to extend an existing Lustre-like synchronous language with Ordinary Differential Equations (ODEs). The extension is conservative: any synchronous program expressed as data-flow equations and hierarchical automata can be composed arbitrarily with ODEs in the same source code. A dedicated type system and causality analysis ensure that all discrete changes are aligned with zero-crossing events so that no side effects or discontinuities occur during integration. Programs are statically scheduled and translated into sequential code by a sequence of source-to-source transformations and the final code is paired with an off-the-shelf numeric solver.
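
To make this division of labor concrete, here is a minimal Python sketch (not Zelus syntax; the bouncing-ball dynamics and all names are illustrative): an off-the-shelf numeric solver integrates the continuous dynamics, and the only discrete change is applied exactly at a zero-crossing event detected by the solver.

    import numpy as np
    from scipy.integrate import solve_ivp

    def falling(t, state):            # continuous dynamics: dy/dt = v, dv/dt = -g
        y, v = state
        return [v, -9.81]

    def hit_ground(t, state):         # zero-crossing function monitored by the solver
        return state[0]
    hit_ground.terminal = True        # stop integration at the event
    hit_ground.direction = -1         # only downward crossings count

    state, t = np.array([1.0, 0.0]), 0.0
    for _ in range(4):                # alternate continuous integration and discrete resets
        sol = solve_ivp(falling, (t, t + 10.0), state, events=hit_ground, max_step=0.01)
        t, state = sol.t[-1], sol.y[:, -1]
        state[1] = -0.8 * state[1]    # discrete change (bounce) aligned with the event
        print(f"bounce at t={t:.3f}s, rebound speed={state[1]:.2f} m/s")

In Zelus, this alignment of discrete changes with zero-crossings is guaranteed statically by the type system and causality analysis, rather than arranged by hand as in this sketch.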

During the talk, I will focus on recent work showing some scheduling issues in the Simulink compiler and present a Lustre-inspired, type-based causality analysis to detect instantaneous loops.

This is joint work with Albert Benveniste, Benoit Caillaud, Timothy Bourke and Bruno Pagano.

Bio:

Marc Pouzet is a professor in computer science at the Ecole normale superieure in Paris and leader of the INRIA team PARKAS. His research concerns the design, semantics and implementation of programming languages for real-time systems. He has been the main developer of Lucid Synchrone, an extension of the synchronous language Lustre. Several of its original features (programming constructs, compilation and compile-time static analysis) are now integrated into the SCADE 6 tool developed at Esterel Technologies and used for programming safety-critical control software. Currently, he is interested in the semantics and implementation of hybrid modelers (e.g., Simulink, Modelica), the design and implementation of a synchronous language with mixed (discrete/continuous) signals, and the formal certification of a Lustre compiler.


Building Assurance Cases with the Evidential Tool Bus

Mar 04, 2014, 4.10-5pm, Natarajan Shankar, SRI Computer Science Laboratory.

Slides

Abstract

Cyber-physical systems combine physical and software components to build systems that operate in the physical world. Since these systems are often safety critical, the software has to be certified to a high level of assurance. Such an assurance case consists of claims supported by arguments based on evidence. The claims, both formal and semi-formal, can either be derived from sub-claims by means of an inference rule, or result from the application of a specific tool or service such as a model checker or a test coverage analyzer. SRI's Evidential Tool Bus (ETB) is a distributed platform for defining workflows that produce and process the artifacts used in an assurance case. ETB uses Datalog as its metalanguage for defining workflows and constructing arguments. We present the architecture and semantics of ETB and describe how it is being used to support the development of assurance cases for cyber-physical systems.
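
As a rough illustration of this Datalog-style structure (the claims, rules, and names below are invented for illustration, not the ETB API or metalanguage), an assurance claim can be thought of as holding when an inference rule derives it from sub-claims that ultimately bottom out in tool-produced evidence:

    # Toy illustration (names hypothetical, not ETB) of a claim/evidence argument:
    # a claim holds if a rule derives it from sub-claims, and leaf claims are
    # established by evidence produced by tools or services.
    evidence = {                        # facts produced by tools
        "model_checked(controller)": True,
        "coverage_ok(test_suite)": True,
    }
    rules = {                           # claim <- list of sub-claims
        "assured(system)": ["safe(controller)", "tested(system)"],
        "safe(controller)": ["model_checked(controller)"],
        "tested(system)": ["coverage_ok(test_suite)"],
    }

    def holds(claim):
        if claim in evidence:
            return evidence[claim]
        subclaims = rules.get(claim, [])
        return bool(subclaims) and all(holds(c) for c in subclaims)

    print(holds("assured(system)"))     # True: the argument bottoms out in evidence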

Bio:

Dr. Shankar works on formal verification technologies such as interactive theorem proving (PVS), model checking (SAL), and SMT solving (Yices).


Compositionality Results for delta-Bisimulations

Mar 12, 2014, 4.10-5pm, Radu Grosu, Vienna University of Technology, Austria.

Slides

Abstract

By appealing to the small-gain theorem of one of the authors (Girard), we show that the 13-variable sodium-channel component of the IMW cardiac-cell model (Iyer-Mazhari-Winslow) can be replaced by an approximately bisimilar, 2-variable HH-type (Hodgkin-Huxley) abstraction. We show that this substitution of (approximately) equals for equals is safe in the sense that the approximation error between sodium-channel models does not get amplified by the feedback-loop context in which it is placed. To prove this feedback compositionality result, we exhibit quadratic-polynomial, exponentially decaying bisimulation functions between the IMW and HH-type sodium channels, and also for the IMW-based context in which these sodium-channel models are placed. These functions allow us to quantify the overall error introduced by the sodium-channel abstraction and subsequent substitution in the IMW model. To automate the computation of the bisimulation functions, we employ the SOSTOOLS optimization toolbox. Our experimental results validate our analytical findings. To the best of our knowledge, this is the first application of delta-bisimilar, feedback-assisting, compositional reasoning in biological systems.
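
As a hedged sketch of the kind of condition such a bisimulation function satisfies (notation mine, not taken from the paper), a function V bounds the output mismatch between the two sodium-channel models and decays along paired trajectories:

    V(x_1, x_2) \ge \| h_1(x_1) - h_2(x_2) \|, \qquad \frac{d}{dt} V(x_1, x_2) \le -\lambda V(x_1, x_2) + \gamma \| u_1 - u_2 \|

so the output error admits a computable bound (the delta of the approximate bisimulation), and a small-gain condition on the interconnection guarantees that the surrounding feedback context does not amplify that error.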

Bio:

Radu Grosu is a Professor and Head of the Dependable Systems Group at the Faculty of Informatics of the Vienna University of Technology, and a Research Professor at the Computer Science Department of the State University of New York at Stony Brook.

His research interests include modeling, analysis and control of cyber-physical and biological systems, and his application focus includes green operating systems, mobile ad-hoc networks, automotive systems, the Mars rover, cardiac-cell networks and genetic regulatory networks.

Grosu is the recipient of the National Science Foundation Career Award, the State University of New York Research Foundation Promising Inventor Award, the ACM Service Award, and a member of the International Federation of Information Processing WG 2.2. Before receiving his appointment at the Vienna University of Technology, Grosu was an Associate Professor in the Computer Science Department of the State University of New York at Stony Brook, where he co-directed the Concurrent Systems laboratory and co-founded the Systems Biology laboratory. Grosu earned his Dr.rer.nat. in Computer Science from the Technical University of München, and was a Research Associate in the Computer Science Department of the University of Pennsylvania.


The use of Ptolemy 2 and clusters for data analysis at the Diamond Synchrotron

Mar 18, 2014, 4.10-5pm, Matt Gerring, Diamond Light Source, United Kingdom.

Slides

Abstract

Diamond Light Source is a synchrotron radiation facility conducting experiments in diverse areas such as crystallography, tomography, microscopy, spectroscopy and radiography. The synchrotron machine produces high-energy light and requires automated systems to execute experiments because of the extreme environment involved. We will look at some of the robots operating in this environment and how data is collected at Diamond. We will also look at how data, once collected, is treated using Ptolemy 2-based actors. The seminar will include demonstrations of the Data Analysis Workbench (DAWN), open-source software used to visualize and treat data for users. DAWN is a collaboration between Diamond Light Source, the ESRF and EMBL Grenoble.

Bio:

Matthew Gerring is a software developer and is enthusiastic about Ptolemy 2, Eclipse RCP and design patterns. He works on the DAWN product and manages the DAWN collaboration at Diamond Synchrotron near Oxford in the United Kingdom. He has been a keen Java developer for around 15 years and is a committer to several open source projects, including Eclipse Nebula and DAWN. When not glued to a screen and weather allowing, he can be found outdoors walking with the family and dogs or doing a little fly fishing.


The Internet of (Important) Things

Apr 08, 2014, 4.10-5pm, Thomas Watteyne, Dust Networks/Linear Technology.

Slides

Abstract

The products and standards developed as part of the Internet of Things (IoT) revolution allow small embedded devices to appear as regular Internet hosts, thereby becoming the "fingers of the Internet". The manufacturing sector is leading the way in adopting IoT technology, where it is being applied to energy management, building automation, and industrial process control. While most IoT solutions offer seamless integration into the Internet, many lack the reliability, security and low-power operation required by most applications. This can cause pilot deployments to exhibit poor performance and security vulnerabilities, eventually slowing IoT adoption below what was anticipated.

To address this situation, IoT technology is adopting techniques from industrial networking. The networks resulting from this convergence let data flow over a traditional IP-based infrastructure while exhibiting wire-like reliability, ultra-low power consumption, and the highest level of security. The resulting "Internet of Important Things" enables the true fusion of the cyber and physical worlds.

This presentation will show how the Internet of Important Things is a reality today. We will start by listing the challenges of building highly reliable and ultra low-power wireless mesh networks. We will then discuss the technologies which can answer this challenge, with a particular focus on channel hopping. We will illustrate this discussion through numerous examples taken from existing commercial products and deployments, and open-source implementations. We will end by introducing the work being done in the new IETF 6TiSCH working group, and highlight the associated open research problems.
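
For readers unfamiliar with channel hopping, the TSCH-style mapping from a timeslot to a frequency can be written as below (the 16-entry hopping sequence here is only illustrative): the Absolute Slot Number (ASN) plus the cell's channel offset indexes a pseudo-random hopping sequence, so successive transmissions of the same link occur on different channels and are robust to narrow-band interference and multipath fading.

    # Illustrative TSCH-style channel hopping: the frequency used in a timeslot
    # is a function of the Absolute Slot Number (ASN) and the cell's channel offset.
    HOPPING_SEQUENCE = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]

    def channel(asn, channel_offset):
        return HOPPING_SEQUENCE[(asn + channel_offset) % len(HOPPING_SEQUENCE)]

    # The same cell (channel offset 3) lands on a different channel every slotframe:
    for asn in range(0, 505, 101):      # assuming a 101-slot slotframe
        print(asn, channel(asn, 3))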

Bio:

Thomas Watteyne is a Senior Networking Design Engineer at Linear Technology, in the Dust Networks product group, the leader in supplying low power wireless mesh networks for demanding industrial process automation applications. He serves as the co-chair of the new IETF 6TiSCH working group, which standardizes the use of IEEE802.15.4e TSCH in IPv6-enabled mesh networks. At UC Berkeley, Thomas coordinates OpenWSN, an open-source project of the Pister lab which promotes the use of fully standards-based protocol stacks in M2M applications. In 2009 and 2010, he was a post-doctoral research lead in Prof. Kristofer Pister's laboratory at UC Berkeley. Between 2005 and 2008, he was a research engineer at France Telecom, Orange Labs. He obtained his PhD in Computer Science (2008), and MSc and MEng in Telecommunications (both 2005) from INSA Lyon, France.


Antescofo, a dynamic language for real-time musician-computer interaction

Apr 15, 2014, 4.10-5pm, José Echeveste, IRCAM, Paris, France.

Slides

Abstract

This talk focuses on the programming of time and interaction in Antescofo, a real-time system for performance coordination between musicians and computer processes during live music performance. To this end, Antescofo relies on artificial machine listening and a domain-specific real-time programming language. It extends each paradigm through strong coupling of the two and a strong emphasis on the temporal semantics and behavior of the system.

The challenge in bringing human actions into the loop of computing is strongly related to the temporal semantics of the language, and to the timeliness of live execution despite the heterogeneous nature of time in the two mediums. Interaction scenarios are expressed at a symbolic level through the management of musical time (i.e. events like notes or beats in relative tempi) and of ‘physical’ time (with relationships like succession, delay, duration, speed).

Antescofo language features will be presented through a series of real-world music examples which illustrate how to manage execution of different musical processes and their interactions with an external environment. The Antescofo approach has been validated through numerous uses of the system in live electronic performances in contemporary music repertoire by various international music ensembles.

Bio:

José Echeveste has been a PhD student since September 2011 at the University of Paris 6 and IRCAM, working on the Antescofo coordination language. Prior to this, he completed a master's on “Synchronous Strategies for Real-Time Music Accompaniment Systems” within the ATIAM Masters programme at IRCAM. His research interests include interactive computer music, computer-aided composition, domain-specific languages for music, synchronous languages and timed automata.


Technical and Practical Aspects for Locating and Tracking Mobile Users within a Wireless Local Area Network

Apr 22, 2014, 4.10-5pm, Joseph Kee-Yin Ng, Department of Computer Science, Hong Kong Baptist University, Hong Kong, China.

Slides

Abstract

Indoor positioning technology, or the technique for locating a mobile user, is the base technology for indoor location-aware computing. In general, there is a large variety of location-based applications and services that rely on an accurate and stable location estimation system, such as warehouse management, point-of-interest services, infotainment, patient monitoring, mobile healthcare, and customer/consumer behavior analysis within an enclosed area like a shopping mall or exhibition center. Such location-based services play a crucial part in enabling e-commerce and m-commerce, and eventually a critical part in bringing the community to the era of ubiquitous/pervasive computing.

In view of the heavy usage and progressively growing coverage of WLAN within indoor environments, and the research studies on ways to do WLAN positioning and tracking, there is a need to investigate the feasibility of using the WLAN infrastructure to locate a mobile user, as well as to find an efficient and effective way to track users’ behavior and provide location-based services in libraries, museums, shopping malls, and exhibition centers, and to ensure personal safety in hospitals and elderly centers.

This talk will focus on the technical as well as the practical issues in Wi-Fi localization and compare our proposed method with some of the classical approaches in the literature. We will also discuss how the proposed method can be applied in an exhibition center to track users’ behavior.

We are entering a new era of computing – the era of ubiquitous/pervasive computing, where we can retrieve any data from any device using any network at any time and any place. This is the technology that will bring us a step closer to a more secure and more convenient ubiquitous computing society.

Bio:

Prof. Joseph Kee-Yin NG received his B.Sc., M.Sc., and Ph.D. degrees in Computer Science, all from the University of Illinois at Urbana-Champaign. Prof. Ng joined Hong Kong Baptist University in 1993, and is a Professor and the Director of the Research Centre for Ubiquitous Computing in the Department of Computer Science. He is also the programme coordinator of the Computer Science degree programme and introduced Health Information Technology and Health Informatics into the undergraduate as well as the graduate programmes at HKBU. Prof. Ng’s current research interests include Real-Time & Embedded Systems, Multimedia Communications, and Ubiquitous/Pervasive Computing. Prof. Ng has obtained 2 patents and published over 140 technical papers in journals and conferences. He has also served as steering chair, program chair, and general chair for numerous international conferences, and as associate editor and editorial board member of international journals.

Prof. Ng has also served as the Region 10 Coordinator for the Chapter Activities Board of the IEEE Computer Society, and was the Coordinator of the IEEE Computer Society Distinguished Visitors Program (Asia/Pacific). He is a senior member of the IEEE and has been a member of the IEEE Computer Society since 1991. Prof. Ng has been an Exco member, General Secretary, Vice-Chair, and Chair, and is now a past Chair and Exco member, of the IEEE Hong Kong Section Computer Chapter. He has received numerous awards and certificates of appreciation from the IEEE, IEEE Region 10, the IEEE Computer Society, and the IEEE Hong Kong Section for his leadership and services to the ICT industry. He is also a member of the IEEE Communication Society, ACM, and the Hong Kong Computer Society, and a founding member, Exco member and Treasurer of the Internet Society (ISOC) Hong Kong Chapter. Prof. Ng is also a Director of the Hong Kong Internet Registration Corporation Limited (HKIRC).


Fall 2014, Spring 2015 Seminars

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Spring 2015

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays from 4.10-5.00 p.m. in 531 Cory Hall (Wang Room).

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact Armin Wasicek. If you want to subscribe to our mailing list, please drop me a line.

Seminars from previous semesters can be found here.

Schedule

Michel Maharbiz September 09, 2014
Sanjit Seshia September 30, 2014
Arne Hamann October 07, 2014
Ashutosh Saxena October 07, 2014
Yi-Chin Wu October 21, 2014
Dave Parker October 28, 2014
Martin Torngren November 04, 2014
Philip Koopman November 14, 2014
Angela Schoellig December 18, 2014
Klaus Doppler January 13, 2015
Mickael Dardaillon February 03, 2015
Andrea Censi March 30, 2015
Arquimedes Martinez Canedo April 16, 2015
Mahesh Vishwanathan April 28, 2015

Neural Dust and Neural Interfaces

Sep 09, 2014, 4.10-5pm, Michel Maharbiz, UC Berkeley.

Slides

Abstract

A major technological hurdle in neuroprosthetics is the lack of an implantable neural interface system that remains viable for a lifetime. I will discuss the basics of extracellular neural recording, discuss the state of the art in cortical neural recording and introduce Neural Dust, a concept developed with Elad Alon, Jose Carmena and Jan Rabaey, which aims to develop a tetherless method to remotely record action potentials from the mammalian cortex.

Bio:

Michel M. Maharbiz is an Associate Professor with the Department of Electrical Engineering and Computer Science at the University of California, Berkeley.

He received his Ph.D. from the University of California at Berkeley under Professor Roger T. Howe (EECS) and Professor Jay D. Keasling (ChemE); his work led to the foundation of Microreactor Technologies, Inc., which was acquired in 2009 by Pall Corporation. From 2003 to 2007, Michel Maharbiz was an Assistant Professor at the University of Michigan, Ann Arbor. He is the co-founder of Tweedle Technologies and Cortera Neurotech, and served as vice-president for product development at Quswami, Inc. from July 2010 to June 2011.

Prof. Maharbiz is a Bakar Fellow and was the recipient of a 2009 NSF CAREER Award for research into developing microfabricated interfaces for synthetic biology. His group is also known for developing the world's first remotely radio-controlled cyborg beetles. This was named one of the top ten emerging technologies of 2009 by MIT's Technology Review (TR10) and was in Time Magazine's Top 50 Inventions of 2009. Dr. Maharbiz has been a GE Scholar and an Intel IMAP Fellow. Professor Maharbiz's current research interests include building micro/nano interfaces to cells and organisms and exploring bio-derived fabrication methods. His long-term goal is understanding developmental mechanisms as a way to engineer and fabricate machines.


Formal Methods for Lab-Based MOOCs: Cyber-Physical Systems and Beyond

Sep 30, 2014, 4.10-5pm, Sanjit Seshia, UC Berkeley.

Slides

Abstract

The advent of massive open online courses (MOOCs) has placed a renewed focus on the development and use of computational aids for teaching and learning.
Lab-based courses, such as those in embedded systems and robotics, are particularly challenging. In this talk, I will describe our experience in designing a MOOC version of the introductory undergraduate course on embedded systems at UC Berkeley. A key component of this MOOC was the "virtual laboratory", where students perform lab exercises using custom designed virtual lab software with built-in automatic grading and feedback based on formal methods. I will describe the design and evaluation of CPSGrader, the auto-grading system, summarize results from the MOOC offering on edX, and sketch directions for future work.

This talk describes joint work with several collaborators including Alexandre Donze, Jeff Jensen, Garvit Juniwal, and Edward Lee.

Bio:

Sanjit A. Seshia is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He received an M.S. and Ph.D. in Computer Science from Carnegie Mellon University, and a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Bombay. His research interests are in dependable computing and computational logic, with a current focus on applying automated formal methods to problems in embedded systems, electronic design automation, computer security, and program analysis. His Ph.D. thesis work on the UCLID verifier and decision procedure helped pioneer the area of satisfiability modulo theories (SMT) and SMT-based verification. He is co-author of a widely-used textbook on embedded systems. His awards and honors include a Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House, an Alfred P. Sloan Research Fellowship, and the School of Computer Science Distinguished Dissertation Award at Carnegie Mellon University.


Formal analysis of timing effects on closed-loop properties of cyber physical systems

Oct 07, 2014, 3.10-4pm, Arne Hamann, Robert Bosch GmbH, Stuttgart, Germany.

Slides

ROOM: 400 Hughes room

Abstract

The underlying theories of both control engineering and real-time systems engineering assume idealized system abstractions that mutually neglect central aspects of the other discipline. Control engineering theory usually assumes jitter-free sampling and insignificant (constant) input-output latencies, disregarding complex real-world timing effects. Real-time systems theory, on the other hand, uses abstract performance models that neglect the functional behavior, and derives worst-case situations that have little expressiveness for control functionalities in physically dominated automotive systems. However, there is a lot to gain from a systematic co-engineering between both disciplines, increasing design efficiency and confidence.

In this talk an approach is presented that integrates state-of-the-art timing models into functional analysis. A general system model is presented and integrated into closed-loop reachability analysis using the hybrid-system state-space explorer SpaceEx, enabling a systematic co-engineering between the two disciplines. The approach is demonstrated on an industrial example, the control software of an electro-mechanical braking system.
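
As a toy illustration of the timing effects at issue (a plain simulation with made-up numbers, not the SpaceEx reachability analysis from the talk), the sketch below runs the same discretized plant and proportional controller once with ideal sampling and once with an input-output latency of several samples:

    # Minimal sketch: a first-order plant under proportional control, simulated
    # with ideal sampling and with a multi-sample input-output latency, to expose
    # how timing changes the closed-loop response.
    def simulate(delay_steps, steps=100, dt=0.01, a=-1.0, b=1.0, k=8.0, x0=1.0):
        x = x0
        u_queue = [0.0] * delay_steps        # commands still "in flight"
        trace = []
        for _ in range(steps):
            u_queue.append(-k * x)           # controller samples the current state
            u = u_queue.pop(0)               # plant applies it delay_steps samples later
            x = x + dt * (a * x + b * u)     # Euler step of dx/dt = a*x + b*u
            trace.append(x)
        return trace

    ideal = simulate(delay_steps=0)
    delayed = simulate(delay_steps=10)       # 100 ms of input-output latency
    print(f"|x| after 1 s: ideal={abs(ideal[-1]):.2e}, with latency={abs(delayed[-1]):.2e}")

Even such a crude experiment shows how latency degrades the closed-loop behavior; the approach presented in the talk makes these effects part of a formal reachability analysis rather than a single simulation run.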

Bio:

Arne Hamann obtained his PhD in Computer Science in 2008 from the Technical University of Braunschweig, Germany. His PhD thesis was awarded the EDAA Outstanding Dissertation Award 2009 in the category "New directions in embedded system design and embedded software". Currently, Arne Hamann is working for Bosch Corporate Research in the division of "Software-intensive Systems". There, he acts as an expert for real-time system design principles for physically dominated embedded systems. Additionally, he is in charge of an internal research project introducing novel model-centric system design methods into various business units within the Bosch Group. Arne Hamann is part of the AUTOSAR Timing Extensions Subgroup as well as the AUTOSAR Timing User Group. Additionally, he represents Bosch in the European ROS Industrial Consortium (RIC-EU). In the academic context, he currently serves as an industrial advisory board member of the European COST action TACLe (Timing Analysis on Code Level) and as a program committee member of the EMSOFT and ECRTS conferences.


The Robo Brain Project: Learning Models for Perception, Planning and Language

Oct 07, 2014, 4.10-5pm, Ashutosh Saxena, Cornell University.

Slides

ROOM Change: 540AB DOP Center room!

Abstract

In this talk, I will present learning algorithms that start learning from large-scale unsupervised datasets, and through different forms of human feedback learn about concepts such as object affordances and basic physics. These representations and concepts are stored as a large graph in the robot brain. Through a few examples, I will show that such models are very effective in performing a variety of tasks including 3D scene labeling, human activity detection and anticipation, grasping, path planning, robot language, and so on.

Bio:

Ashutosh Saxena is an assistant professor in the Computer Science department at Cornell University. His research interests include machine learning, robotics and computer vision. He received his MS in 2006 and Ph.D. in 2009 from Stanford University, and his B.Tech. in 2004 from the Indian Institute of Technology (IIT) Kanpur. He has won best paper awards at 3DRR, RSS and IEEE ACE, and was named co-chair of the IEEE technical committee on robot learning. He was a recipient of the National Talent Scholar award in India and a Google Faculty award in 2011. He was named an Alfred P. Sloan Fellow in 2011 and a Microsoft Faculty Fellow in 2012, received an NSF CAREER award in 2013, and received an Early Career Spotlight Award at RSS 2014.

He has also developed algorithms that enable robots (such as STAIR, POLAR and Kodiak) to perform household chores such as unloading items from a dishwasher, placing items in a fridge, and arranging a disorganized house. He has developed learning algorithms for perceiving environments from RGB-D sensors and inferring semantic labels and object affordances for tasks such as activity detection and anticipation. Previously, Ashutosh developed Make3D (http://make3d.cs.cornell.edu), an algorithm that converts a single photograph into a 3D model; tens of thousands of users have used this technology to convert their pictures to 3D. His work has received a substantial amount of attention in the popular press, including the front page of the New York Times, BBC, ABC, New Scientist, Discovery Science, and Wired Magazine.


Enforcement of Opacity Security Properties Using Event Insertion

Oct 21, 2014, 4.10-5pm, Yi-Chin Wu, University of Michigan.

Slides

Abstract

Opacity is a confidentiality property that arises in the analysis of security properties in networked systems. It characterizes whether a "secret" of the system can be inferred by an intruder who knows the system model and partially observes the system behavior. The secret could, for instance, be a set of system states that should not be revealed to the intruder. The secret of the system is opaque if, whenever the secret has occurred (i.e., the system has reached a secret state), there exists another "non-secret" behavior that is observationally equivalent from the intruder's point of view.

In this talk, I will first introduce the notions of opacity in the framework of Discrete Event Systems modeled as partially-observable and/or nondeterministic finite state automata. Then, I will focus on the control problem where we enforce opacity when the system is not opaque. Specifically, I will introduce the insertion mechanism, which places an "insertion function" at the output of the system that changes the system's output behavior by inserting additional observable events. Such an enforcement mechanism is applicable to systems whose dynamics cannot be modified, such as users querying servers, and can provably enforce opacity. I will show the necessary and sufficient conditions for the existence of a valid insertion function that renders the system opaque. I will also show how we synthesize a valid insertion function when one exists. Finally, the talk will end with a case study that applies our results to the problem of enforcing location privacy in location-based services.
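
A toy Python sketch of the insertion idea (the three-transition automaton below is invented for illustration; this is not the synthesis algorithm from the talk): the intruder's state estimate is computed from the observation, and when every consistent state is secret, the insertion function prepends an observable event so the modified output is also consistent with non-secret behavior.

    # Toy opacity enforcement by event insertion on a small automaton.
    TRANS = {(0, "a"): 1, (0, "b"): 2, (2, "a"): 3}   # observable transitions
    SECRET = {1}

    def estimate(obs, init=0):
        """States the intruder considers possible after seeing `obs`."""
        states = {init}
        for e in obs:
            states = {TRANS[(s, e)] for s in states if (s, e) in TRANS}
        return states

    def insertion_function(obs):
        """Insert an event when the raw observation would reveal the secret."""
        if estimate(obs) and estimate(obs) <= SECRET:  # estimate is entirely secret
            return ("b",) + obs                        # inserted event + real output
        return obs

    raw = ("a",)                                       # the system actually reached the secret
    print(estimate(raw))                      # {1}: without insertion, the secret is revealed
    print(estimate(insertion_function(raw)))  # {3}: the modified output looks non-secret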

Bio:

Yi-Chin Wu is currently a research fellow at the University of Michigan and a visiting scholar at UC Berkeley. She received her M.S. degree in 2011 and Ph.D. degree in 2014, both in EE: Systems at the University of Michigan. Her research interest lies at the intersection of network security, formal methods, and control engineering. Her Ph.D. thesis develops techniques for analyzing and enforcing opacity security properties in Discrete Event Systems.


Probabilistic Model Checking and Strategy Synthesis

Oct 28, 2014, 4.10-5pm, Dave Parker, University of Birmingham, UK.

Slides

Abstract

Probabilistic model checking is an automated technique to verify whether a probabilistic system, e.g., a robot operating in an unknown environment, satisfies a temporal logic property, e.g., "the minimum probability of completing the task within 15 minutes is above 0.98". Dually, we can also synthesise, from a model and a property specification, a strategy for controlling the system in order to satisfy or optimise the property.

This talk will give an overview of automated verification and strategy synthesis for probabilistic systems, and present some recent advances in this area, including: multi-objective model checking (to investigate trade-offs between several conflicting properties), extensions to stochastic game models (to model competitive or collaborative behaviour) and permissive strategy synthesis (to generate flexible and robust controllers). I will describe the tool support for these techniques, in the form of the probabilistic model checker PRISM and its extensions, and present results from the application of these techniques to examples such as robotic motion planning, dynamic power management controllers and task scheduling.
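
The core computation behind a query such as "maximum probability of reaching the goal" can be sketched in a few lines (the three-state MDP below is hypothetical and unrelated to PRISM's input language): value iteration computes the optimal reachability probabilities, and the maximizing action in each state is read off as the synthesized strategy.

    # Value iteration for max reachability probability on a tiny hypothetical MDP,
    # with the maximizing action per state read off as the synthesized strategy.
    MDP = {   # state -> action -> list of (probability, successor)
        "s0": {"fast": [(0.8, "goal"), (0.2, "fail")],
               "slow": [(0.5, "goal"), (0.5, "s0")]},
        "s1": {"go":   [(1.0, "s0")]},
    }
    GOAL, FAIL = "goal", "fail"

    value = {s: 0.0 for s in list(MDP) + [GOAL, FAIL]}
    value[GOAL] = 1.0
    for _ in range(100):                                  # iterate to (near) convergence
        for s, actions in MDP.items():
            value[s] = max(sum(p * value[t] for p, t in succ) for succ in actions.values())

    strategy = {s: max(acts, key=lambda name: sum(p * value[t] for p, t in acts[name]))
                for s, acts in MDP.items()}
    # "slow" wins in s0: retrying eventually reaches the goal with probability ~1.
    print(round(value["s0"], 4), strategy)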

Bio:

Dave Parker is a Lecturer in Computer Science at the University of Birmingham and a Visiting Lecturer at the University of Oxford. His main research interests are in the area of formal verification, with a particular focus on the analysis of quantitative aspects such as probabilistic and real-time behaviour, and he has published over 90 papers in this area. Recent work includes efficient techniques for scalable verification (e.g. abstraction, compositionality), game-theoretic verification methods, and applications of these approaches to areas such as robotics, computer security and DNA computing. He leads development of the widely used probabilistic verification tool PRISM, regularly serves on the programme committees of international conferences (e.g. TACAS, CAV, SEFM, CONCUR, QEST) and frequently gives invited tutorials at summer-schools and workshops.


Roadmapping efforts for research, education and innovation in Cyber-Physical Systems (CPS)

Nov 04, 2014, 4.10-5pm, Martin Torngren, KTH, Stockholm, Sweden.

Slides

Abstract

The talk focuses on roadmapping efforts in the area of Cyber-Physical Systems (CPS). A characterization of CPS is first provided as a baseline, in which CPS is related to concepts such as IoT, SoS and Big Data. Two strands of roadmapping are then covered.
1. Roadmapping at the European level. CyPhERS ("Cyber-Physical European Roadmap and Strategy"), a European FP7-project, aims at combining and expanding Europe's competence in Cyber-Physical Systems (CPS).
The project is developing a European strategic research and innovation agenda for CPS to ensure Europe's competitiveness. CyPhERS systematically surveyed, analyzed, and evaluated the significance of CPS for Europe. Comprehensive recommendations for action are currently being developed that identify research priorities, support measures for cooperation, research funding policies, and training and standardization. The talk presents a characterization of CPS as well as the preliminary recommendations.
2. Education roadmapping. Education and training are closely related to the investigation and recommendations by CyPhERS. Dealing with the challenges of Cyber-Physical Systems also requires renewed education. An overview is provided of needs, efforts and guidelines for educating the CPS engineer of tomorrow. Implications for curriculum design are presented and discussed.

Bio:

Martin Törngren has been a Professor in Embedded Control Systems at the Mechatronics division of the KTH Department of Machine Design since 2002. He has a particular interest in Cyber-Physical Systems, model-based engineering, architectural design, systems integration, and co-design of control applications and embedded systems. He has authored/co-authored more than 100 peer-reviewed publications, and has been in charge of developing and leading graduate and continuing education courses. He spent time as a post-doc at the EU-JRC, and recently returned from a 10-month sabbatical (2011/12) at the University of California, Berkeley. In 1996 he co-founded the company Fengco Real-time Control AB, specializing in advanced tools for developers of embedded control systems and related consultancy. In 1994 he received the SAAB-Scania award for qualified contributions in distributed control systems, and in 2004 the ITEA achievement award for contributions in the EAST-EEA project. He served as the technical coordinator of the international iFEST ARTEMIS project with 21 partners (2010-2013).


A Case Study of Toyota Unintended Acceleration and Software Safety

Nov 14, 2014, 3.10-4pm, Philip Koopman, Carnegie Mellon University.

Slides

ROOM: 540 DOP Center room

Video of talk is here.

Abstract

Investigations into potential causes of Unintended Acceleration (UA) for Toyota vehicles have made news several times in the past few years. Some blame has been placed on floor mats and sticky throttle pedals. But, a jury trial verdict found that defects in Toyota's Electronic Throttle Control System (ETCS) software and safety architecture caused a fatal mishap. This verdict was based in part on pervasive computer hardware and software issues. This talk will outline key events in the still-ongoing Toyota UA story, and pull together the technical issues that have been discovered by NASA and other experts. The results paint a picture that should inform not only future designers of safety critical software for automobiles, but also all computer-based system designers.

Bio:

Prof. Philip Koopman has served as a testifying expert witness for automotive unintended acceleration cases, including the 2013 Bookout/Schwarz v. Toyota trial. Currently he is in the Electrical and Computer Engineering Department at Carnegie Mellon University. Previously, he was a submarine officer in the US Navy, an embedded CPU architect for Harris Semiconductor, and an embedded system researcher at United Technologies. His research interests focus on software robustness testing, autonomous vehicle software safety, and Cyber-Physical System education. He has conducted more than 140 design reviews of industry software projects including: automotive systems, rail systems, chemical process controls, electrical power components, elevators, lighting systems, heating & cooling equipment, power tools, and safety critical embedded networks. His book Better Embedded System Software describes how to deal with the types of problems that tend to occur in such systems.


Improving the Performance and Autonomy of Mobile Robots by Enabling Them to Learn from Experience

Dec 18, 2014, 3.10-4pm, Angela Schoellig, University of Toronto, Canada.

Slides

Abstract

Traditionally, motion planning and control algorithms for robots have been designed based on a priori knowledge about the system and its environment (including models of the robot's dynamics and maps of the environment). This approach has enabled successful robot operations in predictable environments. However, to achieve reliable and efficient robot operations in complex, unknown and changing environments, we must enable robots to acquire knowledge during operation and adapt their behavior accordingly.

In my talk, I will give an overview of my group's research activities on learning-based control for high-performance robot motions in unknown environments. Our learning schemes combine ideas from control theory and machine learning, and are motivated by real-world robotics applications.

I will present experimental results that show successful learning on different robot platforms: (i) a stereo-camera-equipped rover learns to traverse unknown rough terrain and (ii) flying robots learn to race through a slalom course, perform aerobatics, and dance to music.

Bio:

Angela Schoellig is an Assistant Professor at the University of Toronto Institute for Aerospace Studies (UTIAS). She conducts research at the interface of robotics, controls and learning. Her goal is to enhance the performance and autonomy of robots by enabling them to learn from past experiments and from each other. Angela has been working with aerial vehicles for the past six years and, more recently, applied her motion planning, control and learning algorithms to large, outdoor ground vehicles. You can watch her vehicles perform slalom races and flight dances at www.youtube.com/user/angelaschoe.

Angela received her Ph.D. from ETH Zurich (with Prof. Raffaello D'Andrea), and holds both an M.Sc. in Engineering Science and Mechanics from the Georgia Institute of Technology and a Master's degree in Engineering Cybernetics from the University of Stuttgart, Germany. Her Ph.D. was awarded the ETH Medal and the 2013 Dimitris N. Chorafas Foundation Award (as one of 35 worldwide). She was selected as the youngest member of the 2014 Science Leadership Program, which promotes outstanding scientists in Canada. In 2013 she was named one of the "25 women in robotics you need to know about" by Robohub.org, a leading professional robotics online platform. She was a finalist for the 2008 IEEE Fellowship in Robotics and Automation, which supports prospective leaders in this field. Her past research has been published in journals such as Autonomous Robots and the IEEE Robotics & Automation Magazine, and has received coverage worldwide in mainstream TV, print and online media. More information about her research is available at: www.schoellig.name.


5G, the Next Major Wireless Standard

Jan 13, 2015, 4.10-5pm, Klaus Doppler, Nokia Research.

Slides

Abstract

5G will be the next major wireless standard, extending the success story of 4G/LTE. It will power 50bn+ connected devices, enable a tactile internet experience, and support mission-critical communication in autonomous vehicles. Key performance targets include peak data rates beyond 10 Gbps and latency below 1 ms.

In my talk I will introduce the motivation for 5G, example use cases and the main design parameters. I will present key technology enablers, recent research results, the 5G ecosystem and its expected timeline.

Bio:

Klaus Doppler is heading the Radio Communications research in Nokia LABS, part of Nokia Technologies. His team is responsible for the 3GPP LTE, WLAN and 5G research and standardization of Nokia Technologies and explores new opportunities in radio implementation. In the past he led the Wireless Systems team at Nokia Research Center in Berkeley, CA, which contributed to IEEE 802.11ah standardization and to the establishment of a new business line in Nokia Technologies. He has led and contributed to several research activities on the design and integration of novel radio concepts into wireless systems, including device-to-device communication, (cooperative) relaying and multiband operation, and has received several inventor awards at Nokia. Klaus received his PhD from Helsinki University of Technology, Finland, in 2010 and his MSc in Electrical Engineering from Graz University of Technology, Austria, in 2003. He has more than 75 pending and granted patent applications and has published 30 journal and conference publications and book chapters.


Compilation of Parametric Dataflow Applications for Software-Defined-Radio-Dedicated MPSoCs

Feb 03, 2015, 4.10-5pm, Mickael Dardaillon, Nokia Research.

Slides

Abstract

The emergence of software-defined radio follows the rapidly evolving telecommunication domain. The requirements in both performance and dynamicity have engendered software-defined-radio-dedicated MPSoCs. The specialization of these MPSoCs makes them difficult to program and verify. Dataflow models of computation have been suggested as a way to mitigate this complexity. Moreover, the need for flexible yet verifiable models has led to the development of new parametric dataflow models.

In this talk, I present some results on the compilation of parametric dataflow applications targeting software-defined-radio platforms.

Bio:

Mickaël Dardaillon is currently with Nokia Technologies in Berkeley, California, working on the implementation of telecommunication protocols on FPGAs and manycores. His current research efforts concentrate on software for heterogeneous Multi-Processor Systems on Chip (MPSoC), in the context of software-defined radio. He is interested in embedded systems programming, compilation and hardware architecture.

He received the Engineering degree in Electronics, Signal and Image from Polytech'Orléans, France, in 2011, the Master's degree in Electronics, Signal and Microsystems from Université d'Orléans, France, in 2011, and the Ph.D. degree in Computer Science from INSA Lyon, France, in 2014, with the Inria Socrate team of the CITI Laboratory, in collaboration with CEA LETI.


Neuromorphic Perception-and-Control for Fast and Light Autonomous Aerial Robots

Mar 30, 2015, 2.10-3pm, Andrea Censi, Laboratory for Information and Decision Systems, MIT.

Slides

ROOM: 540 DOP Center room

Abstract

The agility of aerial robots is currently limited by the speed (latency and frequency) of the feedback control loop, which includes data acquisition, inference, and control. "Neuromorphic" image sensors are a new technology that will enable much faster feedback loops. The output of a neuromorphic image sensor is a stream of asynchronous "events" rather than periodic frames. Each pixel produces an event when the log-brightness changes significantly. These sensors have latency in the order of microseconds, low power consumption, high dynamic range, and illumination-invariant output. I will discuss my work in integrating event-based neuromorphic sensors into robot perception-and-control pipelines to obtain low-latency and low-power feedback loops for resource-constrained autonomous aerial robots.
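
The sensor model described above ("an event when the log-brightness changes significantly") can be sketched as follows; the frames, threshold, and noise are synthetic stand-ins, not data from an actual neuromorphic sensor.

    # Sketch of an event-camera model: a pixel emits an event whenever its
    # log-brightness has changed by more than a contrast threshold C since that
    # pixel's last event (frames and threshold here are invented).
    import numpy as np

    C = 0.15                                       # contrast threshold (log units)
    rng = np.random.default_rng(0)
    frames = np.cumsum(rng.normal(0, 0.05, size=(20, 4, 4)), axis=0) + 1.0  # fake brightness

    ref = np.log(np.clip(frames[0], 1e-3, None))   # per-pixel reference at last event
    events = []                                    # (time, x, y, polarity)
    for t, frame in enumerate(frames[1:], start=1):
        logI = np.log(np.clip(frame, 1e-3, None))
        diff = logI - ref
        fired = np.abs(diff) >= C
        for x, y in zip(*np.nonzero(fired)):
            events.append((t, int(x), int(y), int(np.sign(diff[x, y]))))
        ref[fired] = logI[fired]                   # reset reference only where events fired
    print(len(events), "events from", frames.shape[0], "frames")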

References:

Brandli et al 2014

Censi et al 2013

Censi et al 2014

Censi et al 2015

Bio:

Andrea Censi is a Research Scientist and Principal Investigator with the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology. Previously, he was a Visiting Scientist at the AI lab at the University of Zurich. He obtained a Ph.D. in Control & Dynamical Systems from the California Institute of Technology in 2012.


Functional Modeling Compiler for System-Level Design of Industrial Cyber-Physical Systems

Apr 16, 2015, 11am-12pm, Arquimedes Martinez Canedo, Siemens Corporation, Princeton, NJ.

Slides

ROOM: 540 DOP Center room

Abstract

This talk presents a functional modeling-based design methodology, associated algorithms, and tools for the design of complex industrial cyber-physical systems. Rather than supporting the critical path, where most resources are spent, we preemptively target the concept design phase that determines 75% of a system's cost. In our methodology, the marriage of systems engineering principles with high-level synthesis techniques results in a Functional Modeling Compiler capable of generating high-fidelity simulation models for the design space exploration and validation of multiple cyber-physical architectures. Using real-world automotive use-cases, we demonstrate how functional models capturing cyber-physical aspects are synthesized into high-fidelity simulation models.

Bio:

Arquimedes Canedo is a Principal Key Expert Scientist at Siemens Corporation, Corporate Technology, in Princeton, NJ.


Safety and Stability of Cyberphysical Systems

Apr 28, 2015, 4-5pm, 540 A/B Cory, Mahesh Viswanathan, University of Illinois at Urbana-Champaign.

Slides

Abstract

The widespread deployment of computing devices that manage and control physical processes in safety-critical environments has made their analysis and verification a very important problem. Since formal models that disregard the physical processes tend to be conservative and suboptimal, the most popular way to model and analyze such systems is using hybrid systems, which have finitely many control states to model discrete behavior and finitely many real-valued variables that evolve continuously with time to model the interaction with the physical world. Despite considerable progress in the last couple of decades, the automated verification of cyberphysical systems remains stubbornly challenging. Safety and stability are two important classes of properties for cyberphysical systems. In this talk we will address key foundational questions arising in the verification of such properties and outline our approach based on analyzing system simulations.

Bio:

Mahesh Viswanathan obtained his bachelor's degree in computer science from the Indian Institute of Technology at Kanpur in 1995, and his doctorate from the University of Pennsylvania in 2000. He was a post-doctoral fellow at DIMACS with a joint appointment at Telcordia Technologies in 2000-01. Since Fall 2001, he has been on the faculty at the University of Illinois at Urbana-Champaign. His research interests are in the core areas of logic, automata theory, and algorithm design, with applications to the algorithmic verification of systems.


Fall 2015 Seminars

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Fall 2015

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays or Mondays from 4.10-5.00 p.m. in 540A/B Cory Hall.

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact Dorsa Sadigh. If you want to subscribe to our mailing list, please drop me a line.

Seminars from previous semesters can be found here.

Schedule

Songhwai Oh September 08, 2015
Anca Dragan September 15, 2015
Markus Rabe September 22, 2015
Calin Belta September 28, 2015
Jyotirmoy Deshmukh October 05, 2015
Shuo Han October 13, 2015
Osbert Bastani October 19, 2015
Jeronimo Castrillon October 22, 2015
Albert Cheng October 23, 2015
Dejan Milutinovic October 26, 2015
Lee Pike November 02, 2015
Siddharth Garg November 10, 2015
Anil Aswani November 23, 2015
Ricardo Sanfelice November 30, 2015 UPCOMING

Robust Navigation and Tracking in Dynamic Environments

Sep 08, 2015, 4-5pm, 540 A/B Cory, Songhwai Oh, Seoul National University.

Slides

Abstract

With the recent development in robotics, we can expect that more service robots will be assisting us in the near future in places such as offices, malls, and homes. But for a robot to coexist with humans and operate successfully in crowded and dynamic environments, it must be able to act safely and harmoniously with the human participants in the environment. In this talk, I will describe our recent work on robust navigation using Gaussian process regression and robust target tracking using chance-constrained optimization.

An autoregressive Gaussian process (AR-GP) motion model is developed to predict the future trajectories of pedestrians using measurements from the partially observable egocentric view of a robot. A robot is controlled in real-time based on predicted pedestrian trajectories using the proposed AR-GP motion controller. In order to make the AR-GP motion model robust against outliers and noises, a structured low-rank matrix approximation method using nuclear-norm regularized l1-norm minimization is developed to approximate kernel matrices. A leveraged non-stationary kernel function is proposed to incorporate both positive and negative training samples, in order to speed up the learning process. Using the AR-GP motion model, we have developed a robust target tracking algorithm based on chance-constrained optimization for a mobile sensor with bounded fan-shaped sensing regions, such that the tracking success probability is guaranteed and the travel distance is minimized. I will also describe experimental results from the proposed algorithms.
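
As a hedged sketch of the autoregressive idea (synthetic trajectory, an ordinary scikit-learn GP, and none of the robustness machinery from the talk such as the low-rank kernel approximation or the leveraged kernel): the next pedestrian position is regressed on a short window of past positions, and the GP also returns the predictive uncertainty that a downstream chance-constrained controller could use.

    # Toy autoregressive GP predictor for a pedestrian trajectory (synthetic data).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    traj = np.cumsum(rng.normal([0.1, 0.05], 0.02, size=(200, 2)), axis=0)  # fake path

    H = 3                                            # autoregressive window length
    X = np.array([traj[i:i + H].ravel() for i in range(len(traj) - H)])
    Y = traj[H:]                                     # next position given the window

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
                                  normalize_y=True).fit(X[:-1], Y[:-1])
    mean, std = gp.predict(X[-1:], return_std=True)  # prediction and its uncertainty
    print("predicted next position:", mean[0], "+/-", std[0])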

This talk is based on joint work with Sungjoon Choi, Eunwoo Kim, and Yoonseon Oh.

Bio:

Songhwai Oh received the B.S. (with highest honors), M.S., and Ph.D. degrees in electrical engineering and computer sciences from the University of California, Berkeley, in 1995, 2003, and 2006, respectively. He is currently an Associate Professor in the Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea. Before his Ph.D. studies, he was a Senior Software Engineer at Synopsys, Inc. and a Microprocessor Design Engineer at Intel Corporation. In 2007, he was a Postdoctoral Researcher in the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. From 2007 to 2009, he was an Assistant Professor of electrical engineering and computer science in the School of Engineering, University of California, Merced. His research interests include cyber-physical systems, robotics, computer vision, and machine learning.


Optimal Control with Physical and Cognitive State

Sep 15, 2015, 4-5pm, 540 A/B Cory, Anca Dragan, University of California, Berkeley.

Slides

Abstract:

The goal of my research is to enable robots to work with, around, and in support of people, autonomously producing behavior that reasons about both their function and their interaction with humans. I aim to develop a formal understanding of interaction that leads to algorithms which are informed by mathematical models of how humans interact with robots, enabling generalization across robot morphologies and interaction modalities.

In this talk, I will focus on one specific instance of this agenda: autonomously generating motion for coordination during human-robot collaborative manipulation. Most motion in robotics is solely functional: industrial robots move to package parts, vacuuming robots move to suck dust, and personal robots move to clean up a dirty table. This type of motion is ideal when the robot is performing a task in isolation. Collaboration, however, does not happen in isolation, and demands that we move beyond solely functional motion. In collaboration, the robot's motion has an observer, watching and interpreting the motion: inferring the robot's intent from the motion, and anticipating the robot's motion based on its intent.

My work integrates a mathematical model of these inferences into motion planning, so that the robot can generate motion that matches people's expectations and clearly conveys its intent. In doing so, I draw on action interpretation theory, Bayesian inference, constrained trajectory optimization, and interactive learning. The resulting motion not only leads to more efficient collaboration, but also increases the fluency of the interaction as defined through both objective and subjective measures.
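
A minimal sketch of the kind of inference such an observer model performs (the goals, cost model, and rationality parameter beta below are all hypothetical): a partial trajectory is scored by how efficiently it progresses toward each candidate goal, and the scores are normalized into a belief over goals.

    # Toy Bayesian intent inference from partial motion (all numbers invented).
    import numpy as np

    goals = {"cup": np.array([1.0, 0.0]), "bowl": np.array([0.0, 1.0])}
    trajectory = np.array([[0.0, 0.0], [0.2, 0.05], [0.45, 0.05], [0.7, 0.1]])  # partial motion

    def cost(path):                     # simple cost: path length
        return np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))

    beta, start, current = 5.0, trajectory[0], trajectory[-1]
    scores = {}
    for name, g in goals.items():
        # exp(-beta * (cost so far + cost to go - optimal cost)): efficient progress scores high
        slack = cost(trajectory) + np.linalg.norm(g - current) - np.linalg.norm(g - start)
        scores[name] = np.exp(-beta * slack)

    total = sum(scores.values())
    belief = {name: s / total for name, s in scores.items()}
    print(belief)     # the nearly straight motion toward (1, 0) makes "cup" most likely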

Bio:

Anca Dragan is a new Assistant Professor in UC Berkeley's EECS Department. She completed her PhD in Robotics at Carnegie Mellon. She was born in Romania and received her B.Sc. in Computer Science from Jacobs University in Germany in 2009. Her research lies at the intersection of robotics, machine learning, and human-computer interaction: she works on algorithms that enable robots to seamlessly work with, around, and in support of people. Anca's research and her outreach activities with children have been recognized by the Intel Fellowship and by scholarships from Siebel, the Dan David Prize, and Google Anita Borg.


A Temporal Logic Approach to Information-Flow Control

Sep 22, 2015, 4-5pm, 540 A/B Cory, Markus Rabe, University of California, Berkeley.

Slides

Abstract

Information-flow control is a principled approach to preventing vulnerabilities in programs and other technical systems. In information-flow control we define information-flow properties, which are sufficient conditions for when the system is secure in a particular attack scenario, and then enforce that the given system adheres to its properties. Information-flow properties describe how information flows through the system, and standard verification approaches have long seemed inapplicable to them.

In this talk I present recent work on a temporal logic approach to information-flow control. We discuss how temporal logics can be used to provide a simple formal basis for the specification and enforcement (in our case model checking) of information-flow properties. The main challenge of this approach is that the standard temporal logics are unable to express these properties. They lack the ability to relate multiple executions of a system, which is the essential feature of information-flow properties. We thus extend temporal logics by the ability to quantify over multiple executions and to relate them using boolean and temporal operators.

We show that the approach supports a wide range of previously known and also new information-flow properties with a single model checking algorithm. We demonstrate the effectiveness of the approach along case studies in hardware security.
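
As an example of the kind of property such trace quantification can state (the notation is the usual one for these logics and may differ from the talk), observational determinism relates every pair of executions:

    \forall \pi.\ \forall \pi'.\ \mathbf{G}\,(i_{low}(\pi) = i_{low}(\pi')) \rightarrow \mathbf{G}\,(o_{low}(\pi) = o_{low}(\pi'))

that is, any two executions that always agree on low-security inputs must always agree on low-security outputs, so secret inputs cannot influence what a low-security observer sees. No single-trace temporal formula can express this, which is exactly why the quantification over multiple executions is needed.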

Bio:

Markus Rabe is a new postdoc in Sanjit Seshia's group in UC Berkeley's EECS Department. He spent his PhD studies at Saarland University under the supervision of Bernd Finkbeiner. His research interests include security, formal verification, and synthesis.


Formal Methods for Dynamical Systems

Sep 28, 2015, 4-5pm, 540 A/B Cory, Calin Belta, Boston University.

Slides

Abstract

In control theory, complex models of physical processes, such as systems of differential equations, are usually checked against simple specifications, such as stability and set invariance. In formal methods, rich specifications, such as languages and formulae of temporal logics, are checked against simple models of software programs and digital circuits, such as finite transition graphs. With the development and integration of cyber physical and safety critical systems, there is an increasing need for computational tools for verification and control of complex systems from rich, temporal logic specifications. The formal verification and synthesis problems have been shown to be undecidable even for very simple classes of infinite-space continuous and hybrid systems. However, provably correct but conservative approaches, in which the satisfaction of a property by a dynamical system is implied by the satisfaction of the property by a finite over-approximation (abstraction) of the system, have received a lot of attention in recent years. The focus of this talk is on discrete-time linear systems, for which it is shown that finite abstractions can be constructed through polyhedral operations only. By using techniques from model checking and automata games, this allows for verification and control from specifications given as Linear Temporal Logic (LTL) formulae over linear predicates in the state variables. The usefulness of these computational tools is illustrated with various examples.

Bio:

Calin Belta is a Professor in the Department of Mechanical Engineering, Department of Electrical and Computer Engineering, and the Division of Systems Engineering at Boston University, where he is also affiliated with the Center for Information and Systems Engineering (CISE) and the Bioinformatics Program. His research focuses on dynamics and control theory, with particular emphasis on hybrid and cyber-physical systems, formal synthesis and verification, and applications in robotics and systems biology. Calin Belta is a Senior Member of the IEEE and an Associate Editor for the SIAM Journal on Control and Optimization (SICON) and the IEEE Transactions on Automatic Control. He received the Air Force Office of Scientific Research Young Investigator Award and the National Science Foundation CAREER Award.


Bridging the Gap between Research and Practice: Formal Methods in the Automotive Domain

Oct 05, 2015, 4-5pm, 540 A/B Cory, Jyotirmoy Deshmukh, Toyota Technical Center.

Slides

Abstract

At the heart of an automobile are its engine and powertrain, and the operation of these components is controlled by embedded software on electronic control units (ECUs). The paradigm of model-based development (MBD) has become the de facto standard for designing such control software. MBD designs of control software range from feature-level models to application-level and even entire system-level models. On the other hand, models of the plant (e.g., the engine) can range from simple physics-based models to high-fidelity models incorporating test data. The advantage of MBD is in its ability to design, validate, and analyze the closed-loop model of the plant and the controller, often well before the actual hardware components become available. Unfortunately, even the simplest closed-loop model of an automotive powertrain subsystem is a complex cyber-physical system with highly nonlinear and hybrid dynamics, and reasoning about the correctness of such closed-loop models is a formidable task. In this talk, we introduce two challenges in reasoning about industrial-scale closed-loop control models: (1) scaling verification or bug-finding techniques to engine control software, and (2) formalisms to express correctness and performance requirements for such models. We survey some of the existing work done to address such questions, and present some promising directions for future work.
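The talk does not commit to one formalism, but requirements of the kind described are often written in Signal Temporal Logic (STL); a hypothetical settling-time requirement on an engine-speed signal omega could look like

    \[ \Box_{[0,T]}\Big( \mathit{step}(\omega_{\mathrm{ref}}) \;\rightarrow\; \Diamond_{[0,t_s]}\, \Box_{[0,t_h]}\, \big(|\omega - \omega_{\mathrm{ref}}| < \varepsilon\big) \Big) \]

stating that within t_s seconds of every set-point change the tracking error enters and stays inside an epsilon-band for at least t_h seconds; T, t_s, t_h, and epsilon are placeholders, not values from the talk.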

Bio:

Jyotirmoy V. Deshmukh is a senior engineer at the Toyota Technical Center in Gardena, CA, where he does research on verification and validation techniques for powertrain control software. Before joining Toyota, Jyotirmoy was a post-doctoral researcher at the University of Pennsylvania. He received his Ph.D. from the University of Texas at Austin, and his dissertation focused on verification of concurrent and sequential software libraries. Jyotirmoy's research interests broadly include: verification and correct-by-construction design of cyber-physical systems, hybrid systems theory, automata theory, concurrency, temporal logics and formal software verification techniques.


A Theory of Privacy for Cyber-Physical Systems with Applications in Energy Systems

Oct 13, 2015, 4-5pm, 289 Cory, Shuo Han, University of Pennsylvania.

Slides

Abstract

As many cyber-physical systems start to rely on collecting user data for more efficient operation, privacy has emerged as a concern among participating users. In this talk, I will discuss two frameworks that formalize the notion of privacy (differential privacy and information-theoretic privacy) from a unified point of view based on detection theory. I will demonstrate the applications of the two frameworks in energy systems through two case studies. (i) Private distributed charging of electric vehicles: It has been shown that the (non-private) distributed charging problem can be solved using distributed gradient descent. However, the messages exchanged between the central mediator and the users may be exploited to breach the privacy of users. We show that differential privacy can be preserved by introducing additive noise to the gradients. We also quantify the trade-off between the level of privacy and the loss of utility using tools from optimization theory. (ii) Private smart metering with internal energy storage: We propose a new information-theoretic metric of privacy in order to handle the privacy of events (e.g., energy usage within any given time slot). The new metric is used to analyze the privacy of a smart metering system that uses internal energy storage as a buffer to hide distinctive energy usage patterns. The results quantify how the amount of energy storage helps improve the level of privacy.
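A minimal sketch of the mechanism described for case (i), assuming a Laplace mechanism calibrated to the gradient sensitivity (the function names, sensitivity, and privacy budget below are illustrative, not taken from the paper):

    import numpy as np

    def private_gradient_step(x, grad_fn, step_size, sensitivity, epsilon, rng):
        """One gradient-descent update in which the exchanged gradient is privatized.

        grad_fn(x)  -- aggregate gradient that would be reported to the mediator
        sensitivity -- bound on how much a single user can change that gradient
        epsilon     -- privacy budget spent on this exchange
        """
        g = grad_fn(x)
        # Laplace mechanism: noise scale = sensitivity / epsilon (illustrative calibration)
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=g.shape)
        return x - step_size * (g + noise)

    # toy usage: minimize ||x - 1||^2 while masking individual contributions
    rng = np.random.default_rng(0)
    x = np.zeros(3)
    for _ in range(200):
        x = private_gradient_step(x, lambda z: 2.0 * (z - 1.0), 0.05, 0.1, 1.0, rng)
    print(x)  # close to [1, 1, 1]; accuracy degrades as epsilon shrinks

The privacy-utility trade-off mentioned in the abstract shows up directly here: a smaller epsilon means larger noise and a less accurate final iterate.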

Bio:

Shuo Han is a postdoctoral researcher in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received his B.E. and M.E. in Electronic Engineering from Tsinghua University in 2003 and 2006, and his Ph.D. in Electrical Engineering from the California Institute of Technology in 2013. His current research focuses on developing rigorous frameworks for data-driven decision making that enable reliable and efficient operations of networked cyber-physical systems, including many smart city applications such as power and transportation networks. He was a finalist for the Best Student Paper Award at the 2013 American Control Conference.


An Interactive Approach to Mobile App Verification

Oct 19, 2015, 4-5pm, 540 A/B Cory, Osbert Bastani, Stanford University.

Slides

Abstract

Static information flow analysis has the potential to greatly aid human auditors in finding privacy violations and malicious software. However, challenges in scaling sound, precise static analyses mean that an auditor must settle for either imprecise results (generating many false positives, which can be time-consuming to discharge) or unsound results (which makes it easy for adversaries to hide malicious behaviors).

We propose a verification approach that relies on interaction with a human auditor to eliminate false positives. In our approach, the static analysis queries the auditor about program properties, and uses abductive inference to minimize the number of queries needed. We apply this approach to solve two problems we have encountered in practice: (i) synthesizing models for relevant library functions, and (ii) removing dead code that causes imprecision in the analysis.

Bio:

Osbert is a fourth-year Ph.D. student working with Prof. Alex Aiken. He is interested in applying inference techniques to improve program analysis. He currently works on STAMP, which uses program analysis to help a human auditor identify Android malware. He has also worked on applying ideas from program analysis to improve the quality of deep neural nets and decision forests, on Google's static analysis infrastructure, and on interactive recommender systems.


Analysis and software synthesis of KPN applications

Oct 22, 2015, 4-5pm, 400 Cory, Jeronimo Castrillon, TU Dresden.

Slides

Abstract

Programming models based on dataflow or process networks are a good match for streaming applications, common in the signal processing, multimedia and automotive domains. In such models, parallelism is expressed explicitly, which makes them well suited for programming parallel machines. Since today's applications are no longer static, expressive programming models are needed, such as those based on Kahn Process Networks (KPNs). In these models, tasks cannot be handled as black boxes, but have to be analyzed, profiled and traced to characterize their behavior. This is especially important in the case of heterogeneous platforms with many processors of multiple different types. This presentation describes a tool flow to handle KPN applications and gives insights into mapping algorithms for heterogeneous platforms.
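As a rough illustration of the programming model itself (not of the Dresden tool flow), a Kahn process network can be mimicked in Python with processes that block on FIFO channels; the channel topology and process bodies below are invented for the example:

    import threading, queue

    def producer(out_ch, n):
        # emits a finite stream of tokens, then a termination sentinel
        for i in range(n):
            out_ch.put(i)
        out_ch.put(None)

    def scale(in_ch, out_ch, k):
        # KPN-style process: blocking reads, deterministic transformation, blocking writes
        while True:
            tok = in_ch.get()
            if tok is None:
                out_ch.put(None)
                return
            out_ch.put(k * tok)

    def consumer(in_ch, result):
        while True:
            tok = in_ch.get()
            if tok is None:
                return
            result.append(tok)

    a, b = queue.Queue(), queue.Queue()   # FIFO channels
    result = []
    procs = [threading.Thread(target=producer, args=(a, 5)),
             threading.Thread(target=scale, args=(a, b, 10)),
             threading.Thread(target=consumer, args=(b, result))]
    for p in procs: p.start()
    for p in procs: p.join()
    print(result)  # [0, 10, 20, 30, 40] regardless of scheduling (Kahn determinism)

Because each process only blocks on reads from its input channels, the network's output does not depend on the interleaving, which is the property that makes KPN applications analyzable and mappable to heterogeneous platforms.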

Bio:

Jeronimo Castrillon is a professor in the Department of Computer Science at TU Dresden, where he is also affiliated with the Center for Advancing Electronics Dresden (CfAED). He received his degree in Electronics Engineering with honors from the Pontificia Bolivariana University in Colombia in 2004, his master's degree from the ALaRI Institute in Switzerland in 2006, and his Ph.D. degree (Dr.-Ing.) with honors from RWTH Aachen University in Germany in 2013. His research interests lie in methodologies, languages, tools and algorithms for programming complex computing systems.


Functional Reactive Programming for Real-Time and Cyber-Physical Systems

Oct 23, 2015, 3-4pm, 540 A/B Cory, Albert Cheng, University of Houston, Texas.

Slides

Abstract

The use of sophisticated digital systems to control complex physical components in real-time has grown at a rapid pace. These applications range from traditional stand-alone systems to highly-networked cyber-physical systems (CPS's), spanning a diverse array of software architectures and control models. Examples include automobile adaptive braking, industrial robotic assembly, medical pacemakers, autonomous (ground, air, and sea) vehicular travel, remote surgery, physical manipulation of nano-structures, search-and-rescue, and space exploration. Since all these applications interact directly with the physical world and often have humans in the loop, we must ensure their physical safety.

Obviously, the correctness of these embedded systems and CPS's depends not only on the effects or results they produce, but also on the time at which these results are produced. For example, when the driver of a car applies the brake, the anti-lock braking controller analyzes the environment in which the controller is embedded (car speed, road surface, direction of travel) and activates the brake with the appropriate frequency within fractions of a second. Both the result (brake activation) and the time at which the result is produced are important in ensuring the safety of the car, its driver and passengers. In a CPS consisting of a multitude of vehicles and communication components with the goal of avoiding collisions and reducing traffic congestion, formal safety verification and response time analysis are essential to the certification and use of such systems.

Functional reactive programming (FRP) offers several benefits over the imperative programming style found in languages such as C/C++ and Java for implementing embedded and real-time software. The functional programming paradigm allows the programmer to intuitively describe safety-critical behaviors of the system, thus lowering the chance of introducing bugs in the design phase. Its stateless nature of execution does not require the use of synchronization primitives like mutexes and semaphores, thus reducing the complexity of programming. However, accurate response time analysis of FRP-based controllers remains a largely unexplored problem. This talk will introduce a framework for accurate response time analysis, scheduling, and verification of embedded controllers implemented as FRP programs.
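A minimal sketch of the FRP style of description, with signals modeled as plain functions of time (the anti-lock-braking signals and threshold below are hypothetical and far simpler than a real FRP runtime):

    # Signals are functions from time (seconds) to values; combinators build new signals.
    def lift(f, *signals):
        return lambda t: f(*(s(t) for s in signals))

    wheel_speed   = lambda t: 30.0 - 4.0 * t      # hypothetical sensor signal (m/s)
    vehicle_speed = lambda t: 30.0 - 3.0 * t      # hypothetical reference signal (m/s)

    # slip ratio derived declaratively from the two input signals
    slip = lift(lambda w, v: (v - w) / max(v, 0.1), wheel_speed, vehicle_speed)

    # control signal: release brake pressure whenever slip exceeds a threshold
    brake_cmd = lift(lambda s: "release" if s > 0.2 else "hold", slip)

    for t in (0.0, 2.0, 4.0, 6.0):
        print(t, round(slip(t), 3), brake_cmd(t))

The controller is written as a pure composition of signals, with no mutable state to synchronize; the open question the talk addresses is how to bound the time such compositions take to evaluate on real hardware.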

*Supported in part by the US National Science Foundation Awards No. 1219082 and No. 0720856.

Bio:

Albert M. K. Cheng is Professor and former interim Associate Chair of the Computer Science Department at the University of Houston (UH), Texas. His research interests center on the design, specification, modeling, scheduling, and formal verification of real-time, embedded, and cyber-physical systems, green/power/thermal-aware computing, software engineering, knowledge-based systems, and networking. He is the founding Director of the UH Real-Time Systems Laboratory. He received the B.A. with Highest Honors in Computer Science, graduating Phi Beta Kappa at age 19, the M.S. in Computer Science with a minor in Electrical Engineering at age 21, and the Ph.D. in Computer Science at age 25, all from The University of Texas at Austin, where he held a GTE Foundation Doctoral Fellowship. He has served as a technical consultant for a number of organizations, including IBM and Shell, and was also a Visiting Professor in the Departments of Computer Science at Rice University and at the City University of Hong Kong. He is a co-founder of ZapThru.com, where he is currently the Chief Strategy and Technology Director.

Dr. Cheng is the author/co-author of over 200 refereed publications in leading journals (including IEEE Transactions on Computers, IEEE Transactions on Software Engineering, and IEEE Transactions on Knowledge and Data Engineering) and top-tier conferences (such as RTSS, RTAS, ICPADS, and ISLPED), and has received numerous awards, including the U.S. National Science Foundation Research Initiation Award and the Texas Advanced Research Program Grant. He has been invited to present seminars, tutorials, panel positions, and keynotes at nearly 100 conferences, universities, and organizations. He is and has been on the technical program committees (including many program chair positions) of over 240 conferences, symposia, workshops, and editorial boards (including the IEEE Transactions on Software Engineering 1998-2003 and the IEEE Transactions on Computers 2011-present).

He has been the Guest Co-Editor of a 2013 Special Issue on Rigorous Modeling and Analysis of Cyber-Physical Systems of the IEEE Embedded Systems Letters, and is the Guest Editor of a 2014 Special Issue on Cyber-Physical Systems of SENSORS. Recently, he has been the Program Co-Chair of the 2013 IEEE International Conference on Service Oriented Computing and Applications (SOCA) and the Program Co-Chair of the System, Models and Algorithms Track of the 2014 IEEE International Conference on Embedded Software and Systems (ICESS) where he delivered an award-winning Keynote on Next-Generation Embedded Systems. Currently, Dr. Cheng is the Program Chair of the First Workshop on Declarative Programming for Real-Time and Cyber-Physical Systems (DPRTCPS) at the IEEE Real-Time Systems Symposium (RTSS), San Antonio, Texas, USA, December 1, 2015; and the Program Chair of the International Symposium on Software Engineering and Applications (SEA), Marina del Rey, California, USA, October 26-28, 2015.

Dr. Cheng is the author of the popular senior/graduate-level textbook entitled Real-Time Systems: Scheduling, Analysis, and Verification (John Wiley & Sons), 2nd printing with updates, 2005. He is a Senior Member of the IEEE; an Honorary Member of the Institute for Systems and Technologies of Information, Control and Communication (INSTICC); and a Fellow of the Institute of Physics (IOP). He is the recipient of the 2015 UH Lifetime Faculty Award for Mentoring Undergraduate Research.


Stochastic Robotics

Oct 26, 2015, 4-5pm, 540 A/B, Dejan Milutinovic, University of California, Santa Cruz.

Slides

Abstract

While stochastic processes are useful models for drifts influencing the motion of nonholonomic vehicles, they can also be used to model a family of possible trajectories in single- or multi-vehicle systems. This results in nonlinear stochastic dynamical models of these systems. Since nonlinear control problems can be difficult enough in themselves, it seems that this approach adds yet another layer of complexity. Interestingly enough, although stochastic dynamical systems are complex, they have properties that allow us to solve control problems computationally more easily than in the case of their deterministic counterparts. These solutions are attractive because the computed controls anticipate stochastic process uncertainties and balance the intensity of control actions against the intensity of the stochastic processes. Moreover, they show the potential to solve control problems with many degrees of freedom. This talk reviews my research with several examples, including single and multiple vehicles, micro-robots and cell biology applications, scaled-down robot experiments for airport ramp area aircraft maneuvers, and an analysis of human arm motions.

Bio:

Dejan Milutinovic earned Dipl.-Ing (1995) and Magister's (1999) degrees in electrical engineering from the University of Belgrade, Serbia and a doctoral degree in electrical and computer engineering (2004) from Instituto Superior Tecnico, Lisbon, Portugal.

From 1995 to 2000 he worked as a research engineer in the Automation and Control Division of the Mihajlo Pupin Institute, Belgrade, Serbia. His doctoral thesis was first runner-up for the best PhD thesis award of European Robotics (EURON) in 2004. He won the NRC award of the US Academies in 2008 and a Hellman Fellowship in 2012.

Dr. Milutinovic is currently an associate professor in the Department of Computer Engineering, UC Santa Cruz. His research interests are in the area of modeling and control of stochastic dynamical systems applied to robotics. He is an associate editor of the ASME Journal of Dynamic Systems, Measurement, and Control.


Programming Languages for High-Assurance Air Vehicles

Nov 02, 2015, 4-5pm, 540 A/B Cory, Lee Pike, Galois, Inc.

Slides

Abstract

We describe the use of embedded domain-specific languages to improve programmer productivity and increase software assurance in the context of building a fully-featured autopilot for unpiloted aircraft. This work is performed for DARPA under the High-Assurance Cyber Military Systems (HACMS) program.

Bio:

Lee Pike manages the Cyber-Physical Systems program at Galois, Inc., a company specializing in software-intensive critical systems. He has been the PI on approximately $10 million of R&D projects funded by NASA, DARPA, AFRL, and other federal agencies. His research focuses on applying techniques from functional programming, run-time verification, and formal verification to the areas of operating systems, compilers, cryptographic systems, avionics, and control systems. Previously, he was a research scientist in the NASA Langley Formal Methods Group and has a Ph.D. in Computer Science from Indiana University.


Hardware Security: An Emerging Threat Landscape and Possible Solutions

Nov 10, 2015, 4-5pm, 400 Cory, Siddharth Garg, NYU Polytechnic School of Engineering.

Slides

Abstract

For economic reasons, the design and fabrication of semiconductor ICs is increasingly outsourced. This comes at the expense of trust. An untrusted entity, for instance a malicious designer or semiconductor foundry could pirate the design, or worse, maliciously modify it to leak secret information from the chip or sabotage its functionality.

In this talk, I will present my recent work on two defense mechanisms to secure computer hardware against such attacks. The first is split manufacturing, which enables a designer to partition a design across multiple chips, fabricate each separately, and "glue" them together after fabrication. Since each foundry only sees a part of the design, its ability to infer the design intent is hindered. I will propose a quantitative notion of security for split manufacturing and explore the resulting cost-security trade-offs.

In the second part of the talk, I will discuss another defense mechanism: logic obfuscation. Previous work has proposed logic obfuscation with seemingly strong security guarantees. I will present a new and effective attack on all existing logic obfuscation techniques, and the security implications of this new attack.

Bio:

Siddharth Garg received his Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University in 2009, and a B.Tech. degree in Electrical Engineering from the Indian Institute of Technology Madras. He joined NYU in Fall 2014 as an Assistant Professor; prior to that, he was an Assistant Professor at the University of Waterloo from 2010 to 2014. His general research interests are in computer engineering, and more particularly in secure, reliable and energy-efficient computing. For his research, Siddharth has received Best Paper Awards at the USENIX Security Symposium 2013, the Semiconductor Research Corporation (SRC) TECHCON in 2010, and the International Symposium on Quality Electronic Design (ISQED) in 2009. Siddharth also received the Angel G. Jordan Award from the ECE Department of Carnegie Mellon University for outstanding thesis contributions and service to the community.


Inverse Optimization with Noisy Data

Nov 23, 2015, 4-5pm, 540 A/B Cory, Anil Aswani, University of California, Berkeley.

Slides

Abstract

Motivated by predictive modeling problems in personalized chronic disease management, we present a framework for solving inverse optimization problems with noisy data. These are problems in which noisy measurements of minimizers to a parametric optimization problem are observed and then used to estimate the unknown parameters of the optimization problem. Though a number of approaches have been developed, they assume that the measured data are noise-free and that there is no modeling mismatch. In this talk, we present a framework that deals with the case where the parametric optimization problem is convex, and show that this framework is risk consistent (asymptotically provides the best possible predictions) or estimation consistent (asymptotically estimates the true parameters) under appropriate conditions. Numerically, we provide three optimization formulations to solve inverse problems using our framework, including a new approach to solving bilevel optimization problems, a new approximation algorithm, and an approach based on integer programming. We conclude with a case study on applying these approaches to real data involving personalized chronic disease management.
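In broad strokes (the exact estimator in the talk may differ), inverse optimization with noisy data fits the unknown parameter theta of a parametric convex program to observed, noise-corrupted minimizers y_i:

    \[ \hat{\theta} \;\in\; \arg\min_{\theta \in \Theta} \; \sum_{i=1}^{n} \big\| y_i - x^\star(u_i,\theta) \big\|^2 \qquad \text{where} \qquad x^\star(u_i,\theta) \;\in\; \arg\min_{x \in \mathcal{X}(u_i)} f(x; u_i, \theta), \]

a bilevel problem: the lower level is the agent's convex decision problem in context u_i, and the upper level matches its predicted minimizer to the noisy observation y_i. Risk consistency then says that, as n grows, the fitted model's predictions approach the best achievable ones.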

Bio:

Anil Aswani is an assistant professor in Industrial Engineering and Operations Research (IEOR) at the University of California, Berkeley. He received a BS in Electrical Engineering in 2005 from the University of Michigan, Ann Arbor, and a PhD in Electrical Engineering and Computer Sciences with a Designated Emphasis in Computational and Genomic Biology in 2010 from the University of California, Berkeley. His research focuses on designing new statistical and optimization methods that utilize big data in order to generate empirical models of human behavior within complex systems, which can then be used for better understanding, optimization, configuration, and design of these systems.


Constructive Feedback Control Design for Hybrid Systems

Nov 30, 2015, 4-5pm, 540 A/B Cory, Ricardo Sanfelice, University of California, Santa Cruz.

Slides

Abstract

A mathematical framework to model hybrid dynamical systems with inputs and outputs will be presented. Motivated by control design challenges in robotics and power system applications, tools for the design of feedback controllers will be introduced. The focus will be on tools guaranteeing asymptotic stability of the hybrid closed-loop system with robustness to a general class of perturbations. In particular, relaxed Lyapunov sufficient conditions for stability and attractivity, as well as control Lyapunov functions (CLFs) for the design of feedback laws will be presented. It will be shown that the proposed CLF-based feedback law provides an estimate of the upper value function of the zero-sum hybrid dynamical game, where one player assigns the control and another the disturbance. The tools will be exercised in robotic and power systems applications.
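In the standard notation of this framework, a hybrid system with inputs combines a flow (continuous evolution) and jumps (discrete transitions):

    \[ \mathcal{H}: \qquad \dot{x} = f(x,u), \;\; (x,u) \in C, \qquad\qquad x^{+} = g(x,u), \;\; (x,u) \in D, \]

where C is the flow set, f the flow map, D the jump set, and g the jump map; a control Lyapunov function for such a system is then required to decrease along solutions of f on C and across the resets g on D. (This is the generic form; the talk's specific constructions may add further structure.)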

Bio:

Ricardo G. Sanfelice is an Associate Professor at the Department of Computer Engineering at University of California, Santa Cruz. He received his M.S. and Ph.D. degrees from UCSB in 2004 and 2007, respectively. He held a Postdoctoral Associate position at MIT during 2007 and 2008. Prof. Sanfelice is the recipient of the 2013 SIAM Control and Systems Theory Prize, the NSF CAREER award, the Air Force Young Investigator Research Award (YIP), the 2010 IEEE Control Systems Magazine Outstanding Paper Award, and the 2012 STAR Higher Education Award for his contributions to STEM education. He is Associate Editor for Automatica. His research interests are in modeling, stability, robust control, observer design, and simulation of nonlinear and hybrid systems with applications to robotics, power systems, aerospace, and biology.


Spring 2016 Seminars

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Spring 2016

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Mondays from 4.10-5.00 p.m. in 250 Sutardja Dai Hall.

The DREAMS topics are announced to the DREAMS mailing list, which includes the chessworkshop workgroup, and the chesslocal workgroup.

In Spring 2016, DREAMS joined forces with the Control Theory Seminar and the CITRIS People and Robots Seminar (CPAR).

The videos of the talks are posted on CITRIS CPAR Spring YouTube channel.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact Dorsa Sadigh. If you want to subscribe to our mailing list, please drop me a line.

Seminars from previous semesters can be found here.

Schedule

Ashish Tiwari January 11, 2016
Antonio Bicchi January 25, 2016
Stefano Ermon January 25, 2016
Matthew Mason February 01, 2016
Lotfi Zadeh February 08, 2016
Werner Damm February 22, 2016
Paul Bogdan February 29, 2016
Jonathan Sprinkle March 14, 2016
Oussama Khatib March 17, 2016
Gregory Hager March 28, 2016
Yiannis Aloimonos April 18, 2016
Mireille Broucke April 25, 2016
Majid Zamani May 02, 2016
Peter Trautman May 04, 2016 UPCOMING
Eric Horvitz May 10, 2016 UPCOMING

Verifying Hybrid Systems

Jan 11, 2016, 4-5pm, 250 SDH, Ashish Tiwari, SRI International.

Slides

Abstract

Most of the existing approaches for verification of hybrid systems are lifted versions of verification approaches that have worked well for discrete systems.

In this talk, we shall explore a few new verification techniques that are designed specifically for verifying hybrid systems. Specifically, we present abstraction techniques that abstract the dynamics, and not the state space, of the system. We also consider the option of abstracting the initial states and property, but not the state space and dynamics of the system. We will conclude by briefly discussing the larger landscape of techniques that are currently used for verifying hybrid systems.

Bio:

Ashish Tiwari is a Senior Computer Scientist in the formal methods group of the Computer Science Laboratory at SRI International. He received his B.Tech and Ph.D. degrees in Computer Science from the Indian Institute of Technology, Kanpur and the State University of New York at Stony Brook in 1995 and 2000, respectively. His research interests lie in the areas of static program analysis, formal methods, automated deduction, automated synthesis, and symbolic computation. He is especially interested in using symbolic techniques and machine learning techniques in analyzing models of cyber-physical systems and models coming from various other domains, including systems biology.


Body Languages for physical Human-Robot Interaction

Jan 25, 2016, 3-4pm, 250 SDH, Antonio Bicchi, University of Pisa.

Slides

Abstract

Modern approaches to the design of robots with increasing amounts of embodied intelligence affect human-robot interaction paradigms. The physical structure of robots is evolving from traditional rigid, heavy industrial machines into soft bodies exhibiting new levels of versatility, adaptability, safety, elasticity, dynamism and energy efficiency. New challenges and opportunities arise for the control of soft robots: for instance, carefully planning for collision avoidance may no longer be a dominating concern; on the contrary, physical interaction with the environment is not only allowed but even desirable for solving complex tasks. To address these challenges, it is often useful to look at how humans use their own bodies in similar tasks, and in some cases even to establish a direct dialogue between the natural and artificial counterparts.

Bio:

Antonio Bicchi is Professor of Robotics at the University of Pisa, and Senior Scientist at the Italian Institute of Technology in Genoa. He graduated from the University of Bologna in 1988 and was a postdoctoral scholar at the M.I.T. Artificial Intelligence Lab from 1988 to 1990. He teaches Control Systems and Robotics in the Department of Information Engineering (DII) of the University of Pisa, and has led the Robotics group at the Research Center "E. Piaggio" of the University of Pisa since 1990, where he was Director from 2003 to 2012. He has been an Adjunct Professor at the School of Biological and Health Systems Engineering of Arizona State University since 2013.

His main research interests are in Robotics, Haptics, and Control Systems in general. He has published more than 400 papers in international journals, books, and refereed conferences. He is Editor-in-Chief of the IEEE Robotics and Automation Letters, which he started in 2015. He is Program Chair of the IEEE Int. Conf. on Robotics and Automation (ICRA'16), and has co-organized and chaired the first WorldHaptics Conference (2005) and Hybrid Systems: Computation and Control (2007). He served as President of the Italian Association of Researchers in Automatic Control (2012-2013), as Editor-in-Chief of the Conference Editorial Board for the IEEE Robotics and Automation Society (RAS), as Vice President for Publications (2013-2014) and for Membership (2006-2007), and as Distinguished Lecturer (2004-2006) of IEEE RAS. He is the recipient of several awards and honors. In 2012, he was awarded an individual Advanced Grant from the European Research Council for his research on human and robot hands. Antonio Bicchi has been a Fellow of the IEEE since 2005.


Probabilistic Inference by Hashing and Optimization

Jan 25, 2016, 4-5pm, 250 SDH, Stefano Ermon, Stanford University.

Slides

Abstract

Statistical inference in high-dimensional probabilistic models (i.e., with many variables) is one of the central problems of statistical machine learning and stochastic decision making. To date, only a handful of distinct methods have been developed, most notably Markov chain Monte Carlo (MCMC) sampling, decomposition, and variational methods. In this talk, I will introduce a fundamentally new approach based on random projections and combinatorial optimization. Our approach provides provable guarantees on accuracy, and outperforms traditional methods in a range of domains, in particular those involving combinations of probabilistic and causal dependencies (such as those coming from physical laws) among the variables. This allows for a tighter integration between inductive and deductive reasoning, and offers a range of new modeling opportunities.
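A toy version of the underlying idea, estimating the number of models of a Boolean constraint by adding random XOR (parity) projections and asking an oracle whether any solution survives; here the oracle is brute force and the constraint is invented for illustration (real implementations use optimization or SAT solvers as the oracle):

    import itertools, random

    def log2_count_by_hashing(constraint, n_vars, trials=20, seed=0):
        """Roughly estimate log2 of the number of satisfying assignments."""
        rng = random.Random(seed)

        def survives(num_xors):
            # add num_xors random parity constraints; each should roughly halve the set
            xors = [([rng.randrange(2) for _ in range(n_vars)], rng.randrange(2))
                    for _ in range(num_xors)]
            for assign in itertools.product((0, 1), repeat=n_vars):
                if constraint(assign) and all(
                        sum(c * v for c, v in zip(coeffs, assign)) % 2 == parity
                        for coeffs, parity in xors):
                    return True
            return False

        estimates = []
        for _ in range(trials):
            m = 0
            while m <= n_vars and survives(m):
                m += 1
            estimates.append(m - 1)          # deepest layer that still had a survivor
        return sorted(estimates)[len(estimates) // 2]   # median over trials

    # toy constraint over 6 variables: exactly two of the first four bits are set
    c = lambda a: sum(a[:4]) == 2
    print(log2_count_by_hashing(c, 6))   # true count is 24, so the estimate should be near log2(24)

The random parity constraints are the "random projections" of the abstract: each one roughly halves the solution set, so the number of projections a problem can absorb before becoming unsatisfiable reveals the order of magnitude of its solution count.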

Bio:

Stefano Ermon is currently an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory. He completed his PhD in computer science at Cornell in 2015. His research interests include techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty; his work is motivated by a range of applications, in particular ones in the emerging field of computational sustainability. Stefano has won several awards, including two Best Student Paper Awards, one Runner-Up Prize, and a McMullen Fellowship.


Haptic perception, grasping and manipulation with simple robotic grippers

Feb 01, 2016, 4-5pm, 250 SDH, Matthew Mason, Carnegie Mellon University.

Slides

Abstract

For about fifty years robotics researchers have been designing and testing robot grippers and hands. The designs vary dramatically in complexity, from a simple pair of tongs to hands with complexity approaching the human hand in some respects. The "Simple Hands" project at Carnegie Mellon seeks to demonstrate advanced manipulation capabilities with very simple hands, for example a gripper with a single motor and just a few sensors. With this system we have demonstrated grasping of objects, haptic recognition, pose estimation, change of grasp pose, and placing. Our approach uses physics models based on Newtonian mechanics and Coulomb friction, combined with machine learning techniques.

Bio:

Matthew T. Mason earned the BS, MS, and PhD degrees in Computer Science and Artificial Intelligence at MIT, finishing his PhD in 1982. Since that time he has been on the faculty at Carnegie Mellon University. He was Director of the Robotics Institute from 2004 to 2014, and is presently Professor of Robotics and Computer Science. His prior work includes force control, automated assembly planning, mechanics of pushing and grasping, automated parts orienting and feeding, and mobile robotics. He is co-author of "Robot Hands and the Mechanics of Manipulation" (MIT Press 1985), co-editor of "Robot Motion: Planning and Control" (MIT Press 1982), and author of "Mechanics of Robotic Manipulation" (MIT Press 2001). He is a Fellow of the AAAI, and a Fellow of the IEEE. He is a winner of the System Development Foundation Prize and the IEEE Robotics and Automation Society's Pioneer Award.


Stratification, target set reachability and incremental enlargement principle

Feb 08, 2016, 4-5pm, 250 SDH, Lotfi A. Zadeh, University of California, Berkeley.

Slides

Abstract

This paper presents a brief exposition of a version of the concept of stratification, called CST for short. In our approach to stratification, CST is a computational system in which the objects of computation are strata of data. Usually, the strata are nested or stacked, with nested strata centering on a target set, T. CST has a potential for significant applications in planning, robotics, optimal control, pursuit, multiobjective optimization, exploration, search and other fields. Very simple, familiar examples of stratification are dictionaries, directories and catalogues. A multi-layer perceptron may be viewed as a system with a stratified structure. In spirit, CST is similar to dynamic programming (DP), but it is much easier to understand and much easier to implement. An interesting question which relates to neuroscience is: does the human brain employ stratification to store information? It would be natural to represent a concept such as chair as a collection of strata, with one or more strata representing a type of chair.

Underlying our approach is a model, call it FSM. FSM is a discrete-time, discrete-state dynamical system which has a finite number of states. The importance of FSM as a model derives from the fact that, through the use of granulation and/or quantization, almost any kind of system can be approximated by a finite-state system. A concept which plays an important role in our approach is that of target set reachability. Reachability involves moving (transitioning) FSM from a state w to a state in the target set, T, in a minimum number of steps. To this end, the state space, W, is stratified through the use of what is referred to as the incremental enlargement principle. It should also be noted that the concept of reachability is related to the concept of accessibility in modal logic.
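A small sketch of how such strata could be computed for a finite-state model by the incremental enlargement principle (backward breadth-first layers growing out from the target set T; the example FSM is made up):

    from collections import deque

    def stratify(states, successors, target):
        """Assign each state the minimum number of transitions needed to reach `target`.

        successors: dict mapping each state to the set of states reachable in one step.
        Returns a dict state -> stratum index (0 for states already in T);
        states that cannot reach T are left out.
        """
        # invert the transition relation so strata can be grown backwards from T
        predecessors = {s: set() for s in states}
        for s, succs in successors.items():
            for t in succs:
                predecessors[t].add(s)

        stratum = {s: 0 for s in target}
        frontier = deque(target)
        while frontier:
            s = frontier.popleft()
            for p in predecessors[s]:
                if p not in stratum:              # incremental enlargement: next layer
                    stratum[p] = stratum[s] + 1
                    frontier.append(p)
        return stratum

    # toy FSM with states a..e and target set {e}
    succ = {"a": {"b"}, "b": {"c", "e"}, "c": {"d"}, "d": {"e"}, "e": set()}
    print(stratify(succ.keys(), succ, {"e"}))   # e:0, b:1, d:1, a:2, c:2

Moving toward T then amounts to always stepping from the current state to a neighbor in a strictly smaller stratum.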

Bio:

Lotfi A. Zadeh is a Professor in the Graduate School, Computer Science Division, Department of EECS, University of California, Berkeley. In addition, he is serving as the Director of BISC (Berkeley Initiative in Soft Computing).

Lotfi Zadeh is an alumnus of the University of Tehran, MIT and Columbia University. From 1950 to 1959, Lotfi Zadeh was a member of the Department of Electrical Engineering, Columbia University. He joined the Department of Electrical Engineering at UC Berkeley in 1959 and served as its Chair from 1963 to 1968. During his tenure as Chair, he played a key role in changing the name of the Department from EE to EECS.

Lotfi Zadeh held visiting appointments at the Institute for Advanced Study, Princeton, NJ; MIT, Cambridge, MA; IBM Research Laboratory, San Jose, CA; AI Center, SRI International, Menlo Park, CA; and the Center for the Study of Language and Information, Stanford University.

Lotfi Zadeh is a Fellow of the IEEE, AAAS, ACM, AAAI, and IFSA. He is a member of the National Academy of Engineering and a Foreign Member of the Finnish Academy of Sciences, the Polish Academy of Sciences, Korean Academy of Science & Technology, Bulgarian Academy of Sciences, the International Academy of Systems Studies, Moscow, and the Azerbaijan National Academy of Sciences. He is a recipient of the IEEE Education Medal, the IEEE Richard W. Hamming Medal, the IEEE Medal of Honor, the ASME Rufus Oldenburger Medal, the B. Bolzano Medal of the Czech Academy of Sciences, the Kampe de Feriet Medal, the AACC Richard E. Bellman Control Heritage Award, the Grigore Moisil Prize, the Honda Prize, the Okawa Prize, the AIM Information Science Award, the IEEE-SMC J. P. Wohl Career Achievement Award, the SOFT Scientific Contribution Memorial Award of the Japan Society for Fuzzy Theory, the IEEE Millennium Medal, the ACM 2001 Allen Newell Award, the Norbert Wiener Award of the IEEE Systems, Man and Cybernetics Society, Civitate Honoris Causa by Budapest Tech (BT) Polytechnical Institution, Budapest, Hungary, the V. Kaufmann Prize, International Association for Fuzzy-Set Management and Economy (SIGEF), the Nicolaus Copernicus Medal of the Polish Academy of Sciences, the J. Keith Brimacombe IPMM Award, the Silicon Valley Engineering Hall of Fame, the Heinz Nixdorf MuseumsForum Wall of Fame, the Egleston Medal, the Franklin Institute Medal, the Medal of the Foundation by the Trust of the Foundation for the Advancement of Soft Computing, the High State Award "Friendship Order" from the President of the Republic of Azerbaijan, the Transdisciplinary Award and Medal of the Society for Design and Process Sciences, other awards and twenty-four honorary doctorates. He has published extensively (over 200 single-authored papers) on a wide variety of subjects relating to the conception, design and analysis of information/intelligent systems, and is serving on the editorial boards of over seventy journals.

Prior to the publication of his first paper on fuzzy sets in 1965, Lotfi Zadeh's work was concerned in the main with systems analysis, decision analysis and information systems. His current research is focused on fuzzy logic, semantics of natural languages, computational theory of perceptions, computing with words, extended fuzzy logic and Z-numbers.


Formal Methods for Highly Automated Driving

Feb 22, 2016, 4-5pm, 250 SDH, Werner Damm, Carl von Ossietzky Universität Oldenburg.

Slides

Abstract

We discuss three complementary approaches jointly contributing for building safety cases for highly automated driving.

(1) Traffic Sequence Charts (TSCs), derived from Live Sequence Charts, offer a formal specification method for requirement capturing and scenario description for highly automated driving. They form the basis for addressing the following industrial needs:
1. A formally defined compositional approach to generate all possible traffic environment situations from a parameterized set of atomic scenarios.
2. A requirement analysis method for cooperative highly automated driving, supported by simulation-based methods for completeness analysis and methods for proving consistency of requirements.
3. The ability to specify conditions on the health state of the vehicle, or on the environmental conditions around the vehicle, under which the specified service for highly automated driving is available.
4. Support for virtual model-in-the-loop testing, hardware-in-the-loop testing, and vehicle testing of highly automated driving functions.
5. Support for automatic generation of monitors for on-line supervision of assumptions and services for highly automated driving.

(2) Remorse-free strategies and optimality of world models. We introduce the concept of remorse-free dominant strategies, which allow us to compare strategies for applications where winning strategies do not exist (such as highly autonomous driving), using the intuitive concept of remorse. We call strategies "remorse-free dominant" if no other strategy could have done better in comparable environment situations, even if we add more observations about real-world artefacts to the world model. We can effectively test whether a model allows for remorse-free dominant strategies after using methods such as predicate abstraction to reduce models to finite-state models; such world models are optimal, in that adding more observations does not improve the strategic capabilities. We can effectively synthesize remorse-free dominant strategies in optimal world models. We can effectively compute assumptions on the environment under which such strategies are in fact winning strategies.

(3) We present a virtual testbed for human-in-the-loop analysis of advanced driver assistance systems (ADAS), developed jointly with the DLR Institute of Transportation and supporting TSCs, which allows for co-simulation and statistical model checking of executable driver models based on empirically validated cognitive driver models, models of ADAS, models of vehicle dynamics, and models of the traffic environment.

Bio:

Werner Damm holds a Diploma in Computer Science and Mathematics from the University of Bonn (1976), and a PhD in Computer Science from RWTH Aachen (1981, Best Dissertation Award) in formal semantics. In 1986 he received the Venia Legendi from RWTH Aachen for his research in Computer Architecture. Since 1987 he has been a full professor at the Carl von Ossietzky Universität Oldenburg, holding first the Chair for Computer Architecture and, since 2002, the Chair for Safety Critical Embedded Systems. He was a visiting professor at the Weizmann Institute of Science (1997, cooperating with A. Pnueli and D. Harel) and Uppsala University (2001, cooperating with B. Jonsson). He is the Scientific Director of the Transregional Collaborative Research Center AVACS (SFB/TR 14 Automatic Verification and Analysis of Complex Systems, www.avacs.org), and the Director of the Interdisciplinary Research Center on Critical Systems Engineering for Socio-Technical Systems (www.uni-oldenburg.de/en/cse/). He is a member of acatech, the German National Academy of Science and Engineering.

He is a member of the Editorial Board of the Journal of Formal Methods in System Design, a member of the Steering Board of the Cyber-Physical Systems Week, and the Chairman of the Steering Board of the conference series on Hybrid Systems (HSCC).

He is driving applied research in his roles as a member of the Board of Directors of the applied research institute OFFIS (www.offis.de), a not-for-profit association providing IT solutions for energy, health, and transportation, where he is responsible for its cross-sectorial research strategy and chairing the OFFIS R&D Division on Transportation.

He is the Chairman of the SafeTRANS association (www.safetrans-de.org), a not-for profit association with institutional membership, integrating leading companies and research institutes in the transportation domain in joint strategic projects on enhancing safety in transportation through IT-based solutions. He has been chairing the committee for the German National Roadmap for Embedded Systems and contributed in other roadmapping activities such as the acatech agenda CPS, the roadmap for industry 4.0 coordinated by the Forschungsunion, the roadmap Automotive Embedded Systems 2030 published jointly by SafeTRANS and VDA, and is currently chairing a roadmap process for Safety, Testing and Development Processes for Highly Automated Systems.

He is a co-founder and Chairman of the steering board of the European Institute for Complex Safety Critical Systems Engineering (www.eicose.eu), integrating large enterprises, SMEs and research organizations developing critical systems for aerospace, automotive, automation, rail and health applications. EICOSE has been driving the R&D strategy of the Joint Undertaking Artemis (www.artemis.org) in the areas of safety-critical systems, human-centered design, and hardware architectures through roadmapping and incubation of large projects implementing its roadmap. EICOSE has created a Europe-wide innovation ecosystem around the Artemis Tool Platform for Critical Systems Engineering, and has been recognized as a center of innovation excellence by the Artemis Industrial Association, which represents the private sector in the Joint Undertaking Artemis.

Werner Damm is a co-founder and member of the Board of BTC Embedded Systems AG (www.btc-es.org), the supplier for IBM Rational's Testing Solutions, and the provider of ISO 26262 compliant testing solutions for embedded automotive applications.


Embracing Complexity: A Fractal Calculus Approach to the Modeling and Optimization of Cyber-Physical Systems

Feb 29, 2016, 3-4pm, 250 SDH, Paul Bogdan, University of Southern California.

Slides

Abstract

Cyber-physical systems (CPS) constitute a new generation of networked embedded systems that interweave computation, communication and control to facilitate our interaction with the physical world. They will stand, among other application domains, at the foundation of novel smart healthcare systems, which monitor individual physiological processes across time and enable accurate disease prediction and health assessment. However, existing approaches to their modeling and optimization ignore important mathematical characteristics (e.g., non-stationarity, fractality). To face these challenges, we embrace the complexity of biological systems instead of skirting around their non-linear variability. We propose a statistical-physics-inspired approach to CPS by encapsulating the observed mathematical characteristics of cyber and physical processes via a dynamical master equation. The first part of the talk is dedicated to explaining the benefits of this new approach, which facilitates a more accurate state-space modeling of Networks-on-Chip workloads, contributes to power savings and opens new possibilities for the dynamic optimization of large-scale systems. The second part focuses on a concrete example of a mathematical model based on fractional calculus concepts, which takes into account the dynamics of blood glucose characteristics (e.g., time-dependent fractal behavior) and can be used to design an artificial pancreas that regulates insulin injection. Finally, the benefits of this mathematical framework will also be demonstrated in the context of interdependent networks by elucidating the brain-muscle interdependency, with applications to brain-machine-body interfaces and brain functional connectivity.
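For reference, the (Caputo) fractional derivative of order alpha in (0,1) that underlies such fractional-calculus models weights the entire history of a signal with a power-law kernel,

    \[ D^{\alpha} x(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{\dot{x}(\tau)}{(t-\tau)^{\alpha}} \, d\tau , \]

which is what lets a model of the form D^alpha x(t) = f(x(t), u(t)) capture long-range memory and the time-dependent fractal behavior mentioned above; as alpha approaches 1 it reduces to an ordinary differential equation. (The specific glucose-insulin model in the talk is of course richer than this generic form.)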

Bio:

Paul Bogdan is an Assistant Professor in the Ming Hsieh Department of Electrical Engineering at the University of Southern California. He received his Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University. His work has been recognized with a number of distinctions, including the 2015 NSF CAREER Award, the 2012 A.G. Jordan Award from the Electrical and Computer Engineering Department, Carnegie Mellon University, for outstanding Ph.D. thesis and service, the 2012 Best Paper Award from the Networks-on-Chip Symposium (NOCS), the 2012 D.O. Pederson Best Paper Award from IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, the 2012 Best Paper Award from the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), the 2013 Best Paper Award from the 18th Asia and South Pacific Design Automation Conference, and the 2009 Roberto Rocca Ph.D. Fellowship. His research interests include the theoretical foundations of cyber-physical systems, modeling and analysis of biological systems and swarms, understanding of neural and cognitive systems via new mathematical models, development of new control algorithms for dynamical systems exhibiting multi-fractal characteristics, modeling biological/molecular communication, development of fractal mean field games to model and analyze biological, social and technological system-of-systems, and performance analysis and design methodologies for many-core systems.


Computationally-Aware Cyber-Physical Systems

Mar 14, 2016, 4-5pm, 250 SDH, Jonathan Sprinkle, University of Arizona.

Slides

Abstract

In this talk we describe hybrid model predictive controllers that switch between two predictor functions based on the uncontrollable divergence metric. The uncontrollable divergence metric relates the computational capabilities of the model predictive controller to the error of the system due to model mismatch of the predictor function during computation of the model predictive control solution. The contribution of the work is its ability to trade off the accuracy of a predictive solution against the time at which solutions will arrive. The results demonstrate the approach for control of ground vehicles as well as a vertical takeoff and landing aerial vehicle. The results are from joint work with Dr. Kun Zhang and Prof. Ricardo Sanfelice.
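A schematic of the switching idea as described (the metric, the candidate models, and the selection rule below are placeholders; the uncontrollable divergence metric itself is defined precisely in the cited work with Zhang and Sanfelice):

    def choose_predictor(state, predictors, divergence):
        """Pick, at the current state, the predictor with the least uncontrollable divergence.

        predictors       -- candidate prediction models, e.g. a fast low-fidelity model
                            and a slower high-fidelity model
        divergence(x, m) -- how far the plant may drift from model m's prediction during
                            the solver time that using m requires (mismatch vs. compute time)
        """
        return min(predictors, key=lambda m: divergence(state, m))

The point of the metric is that a more accurate model is only worth using when the extra solve time it demands does not let the real system drift too far from the prediction made at the start of the solve.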

Bio:

Jonathan Sprinkle is the Litton Industries John M. Leonis Distinguished Associate Professor of Electrical and Computer Engineering at the University of Arizona. In 2013 he received the NSF CAREER award, and in 2009, he received the UA's Ed and Joan Biggers Faculty Support Grant for work in autonomous systems. His work has an emphasis for industry impact, and he was recognized with the UA "Catapult Award" by Tech Launch Arizona in 2014, and in 2012 his team won the NSF I-Corps Best Team award. From 2003-2007 he was at the University of California, Berkeley, as a postdoctoral scholar and the Executive Director of CHESS. He graduated with the PhD and MS degrees from Vanderbilt University, and the BSEE degree from Tennessee Tech University.


The New Robotics Age: Meeting the Physical Interactivity Challenge

Mar 17, 2016, 4-5pm, 250 SDH, Oussama Khatib, Stanford University.

Slides

Abstract

The generations of robots now being developed will increasingly touch people and their lives. They will explore, work, and interact with humans in their homes, workplaces, in new production systems, and in challenging field domains. The emerging robots will provide increased operational support in mining, underwater, and in hostile and dangerous environments. While full autonomy for the performance of advanced tasks in complex environments remains challenging, strategic intervention of a human will tremendously facilitate reliable real-time robot operations. Human-robot synergy benefits from combining the experience and cognitive abilities of the human with the strength, dependability, competence, reach, and endurance of robots. Moving beyond conventional teleoperation, the new paradigm, which places the human at the highest level of task abstraction, relies on robots with the requisite physical skills for advanced task behavior capabilities. Such connecting of humans to increasingly competent robots will fuel a wide range of new robotic applications in places where they have never gone before. This discussion focuses on robot design concepts, robot control architectures, and advanced task primitives and control strategies that bring human modeling and skill understanding to the development of safe, easy-to-use, and competent robotic systems. The presentation will highlight these developments in the context of a novel underwater robot, Ocean One (O2), developed at Stanford in collaboration with Meka-Google Robotics and KAUST.

Bio:

Oussama Khatib received his PhD from Sup'Aero, Toulouse, France, in 1980. He is Professor of Computer Science at Stanford University. His research focuses on methodologies and technologies in human-centered robotics including humanoid control architectures, human motion synthesis, interactive dynamic simulation, haptics, and human-friendly robot design. He is a Fellow of the IEEE. He is Co-Editor of the Springer Tracts in Advanced Robotics (STAR) series and the Springer Handbook of Robotics, which received the PROSE Award for Excellence in Physical Sciences & Mathematics. Professor Khatib is the President of the International Foundation of Robotics Research (IFRR). He has been the recipient of numerous awards, including the IEEE RAS Pioneer Award in Robotics and Automation, the IEEE RAS George Saridis Leadership Award in Robotics and Automation, the IEEE RAS Distinguished Service Award, and the Japan Robot Association (JARA) Award in Research and Development.


From Mimicry to Mastery: Creating Machines that Augment Human Skill

Mar 28, 2016, 4-5pm, 250 SDH, Gregory Hager, Johns Hopkins University.

Slides

Abstract

We are entering an era where people will interact with smart machines to enhance the physical aspects of their lives, just as smart mobile devices have revolutionized how we access and use information. Robots already provide surgeons with physical enhancements that improve their ability to cure disease, we are seeing the first generation of robots that collaborate with humans to enhance productivity in manufacturing, and a new generation of startups are looking at ways to enhance our day to day existence through automated driving and delivery.

In this talk, I will use examples from surgery and manufacturing to frame some of the broad science, technology, and commercial trends that are converging to fuel progress on human-machine collaborative systems. I will describe how surgical robots can be used to observe surgeons "at work" and to define a "language of manipulation" from data, mirroring the statistical revolution in speech processing. With these models, it is possible to recognize, assess, and intelligently augment surgeons' capabilities. Beyond surgery, new advances in perception, coupled with steadily declining costs and increasing capabilities of manipulation systems, have opened up new science and commercialization opportunities around manufacturing assistants that can be instructed "in situ." Finally, I will close with some thoughts on the broader challenges still to be surmounted before we are able to create true collaborative partners.

Bio:

Gregory D. Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University. His research interests include collaborative and vision-based robotics, time-series analysis of image data, and medical applications of image analysis and robotics. He has published over 300 articles and books in these areas. Professor Hager is also Chair of the Computing Community Consortium, a board member of the Computing Research Association, and is currently a member of the governing board of the International Federation of Robotics Research. In 2014, he was awarded a Hans Fischer Fellowship in the Institute of Advanced Study of the Technical University of Munich where he also holds an appointment in Computer Science. He is a fellow of the IEEE for his contributions to Vision-Based Robotics, and has served on the editorial boards of IEEE TRO, IEEE PAMI, and IJCV. Professor Hager received his BA in Mathematics and Computer Science Summa Cum Laude at Luther College (1983), and his MS (1986) and PhD (1988) from the University of Pennsylvania. He was a Fulbright Fellow at the University of Karlsruhe, and was on the faculty of Yale University prior to joining Johns Hopkins. He is founding CEO of Clear Guide Medical.


The Syntax of action: A key to intelligent robots

Apr 18, 2016, 4-5pm, 250 SDH, Yiannis Aloimonos, University of Maryland, College Park.

Slides

Abstract

Humanoid robots will need to learn the actions that humans perform. They will need to recognize these actions when they see them, and they will need to perform these actions themselves. In this presentation, it is proposed that this learning task can be achieved using the manipulation grammar. Context-free grammars have been in fashion in linguistics because they provide a simple and precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks. Similarly, for manipulation actions, every complex activity is built from smaller blocks involving hands and their movements, as well as objects, tools and the monitoring of their state. Thus, interpreting a "seen" action is like understanding language, and executing an action from knowledge in memory is like producing language. Several experiments will be shown interpreting human actions in the arts and crafts or assembly domain, through a parsing of the visual input, on the basis of the manipulation grammar. This parsing, in order to be realized, requires a network of visual processes that attend to objects and tools, segment them and recognize them, track the moving objects and hands, and monitor the state of objects to calculate goal completion. These processes will also be explained in a new cognitive architecture called the "Cognitive Dialogue", and we will conclude with demonstrations of robots learning how to perform tasks by watching videos of relevant human activities.
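A toy rendering of the idea that manipulation actions decompose like phrases (the production rules below are invented for illustration, not the grammar used in the talk):

    import random

    # hypothetical context-free productions for a simple pick-and-place action
    grammar = {
        "ACTION":   [["REACH", "GRASP", "MOVE", "RELEASE"]],
        "REACH":    [["hand_moves_to", "OBJECT"]],
        "GRASP":    [["hand_closes_on", "OBJECT"]],
        "MOVE":     [["hand_transports", "OBJECT", "to", "LOCATION"]],
        "RELEASE":  [["hand_opens"]],
        "OBJECT":   [["cup"], ["knife"], ["bowl"]],
        "LOCATION": [["table"], ["shelf"]],
    }

    def derive(symbol, rng):
        # expand a non-terminal into a flat sequence of observable primitives
        if symbol not in grammar:
            return [symbol]                      # terminal: an observable primitive
        production = rng.choice(grammar[symbol])
        return [tok for part in production for tok in derive(part, rng)]

    print(" ".join(derive("ACTION", random.Random(0))))
    # prints one derivation, e.g. a reach-grasp-move-release sequence over concrete objects

Recognizing a "seen" action then corresponds to parsing an observed primitive sequence with such a grammar, while executing an action corresponds to generating a derivation from it.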

Bio:

Yiannis Aloimonos is Professor of Computational Vision and Intelligence at the Department of Computer Science, University of Maryland, College Park, and the Director of the Computer Vision Laboratory at the Institute for Advanced Computer Studies (UMIACS). He is also affiliated with the Institute for Systems Research and the Neural and Cognitive Science Program. He was born in Sparta, Greece and studied Mathematics in Athens and Computer Science at the University of Rochester, NY (PhD 1990). He is interested in Active Perception and the modeling of vision as an active, dynamic process for real time robotic systems. For the past five years he has been working on bridging signals and symbols, specifically on the relationship of vision to reasoning, action and language.


Reach Control Problem

Apr 25, 2016, 4-5pm, 250 SDH, Mireille Broucke, University of Toronto.

Slides

Abstract

We discuss a class of control problems for continuous time dynamical systems featuring synthesis of controllers to meet certain logic specifications. Such problems fall in the area of hybrid systems. Hybrid systems have been studied for some time; unfortunately the area has not delivered all that it promised: a comprehensive theory of control synthesis has remained elusive. Some work has been done at the high level on synthesis of controllers for logic specifications inspired by discrete event system theory. These approaches do not confront where the deeper challenge lies: a (hopefully) structural characterization of the intrinsic limits of a continuous time control system to achieve a non-equilibrium specification.

We study affine systems and logic specifications encoded as inequality constraints. Mathematically, the model is an affine system defined on a polytopic state space, and control synthesis typically yields piecewise affine controllers. By studying this special model, synthesis tools become recoverable. The core synthesis problem has been distilled into the so-called Reach Control Problem (RCP). Roughly speaking, the problem is for an affine system xdot = A x + B u + a defined on a simplex to reach a pre-specified facet (boundary) of the simplex in finite time without first exiting the simplex. The significance of the problem stems from its capturing two essential requirements embedded in logic specifications: state constraints and trajectories reaching a goal set of states in finite time.
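
For concreteness, the core problem can be restated in LaTeX form (the simplex vertices and exit-facet notation are assumed here, not given in the abstract):

\dot{x} = A x + B u + a, \qquad x \in \mathcal{S} = \mathrm{conv}\{v_0, v_1, \dots, v_n\},

\text{find a feedback } u(x) \text{ and } T \ge 0 \text{ such that } x(T) \in \mathcal{F}_0 \subset \partial\mathcal{S}
\quad \text{and} \quad x(t) \in \mathcal{S} \ \text{for all } t \in [0, T].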

In this talk I will give highlights of nearly 10 years of research on the RCP: solvability by affine feedback, continuous state feedback, time-varying affine feedback, and piecewise affine feedback; an associated Lyapunov theory; a geometric structure theory; and emerging applications.

Bio:

Mireille Broucke obtained the BSEE degree in Electrical Engineering from the University of Texas at Austin in 1984 and the MSEE and PhD degrees from the University of California, Berkeley in 1987 and 2000, respectively. She was a postdoc in Mechanical Engineering at University of California, Berkeley during 2000-2001. She has six years of industry experience in control design for the automotive and aerospace industries. During 1993-1996 she was a program manager and researcher at Partners for Advanced Transportation and Highways (PATH) at University of California, Berkeley. Since 2001 she has been at the University of Toronto where she is a professor in Electrical and Computer Engineering. Her research interests are in hybrid systems, piecewise affine control, geometric control theory, and patterned linear systems.


Automated synthesis of control systems: A double abstraction scheme

May 02, 2016, 4-5pm, 250 SDH, Majid Zamani, Technical University of Munich.

Slides

Abstract

Embedded control software plays a crucial role in many safety-critical applications: modern vehicles, for instance, use software to control steering, fuel injection, and airbag deployment. These applications are examples of cyber-physical systems (CPS), where software components interact tightly with physical systems. Although CPS have become ubiquitous in modern technology due to advances in computational devices, the development of core control software running on these systems is still ad hoc and error-prone. In this talk, I will propose a transformative design process, in which the controller code is automatically synthesized from higher-level correctness requirements. First, a compositional construction of abstractions of interconnected continuous systems is proposed. Those abstractions, themselves continuous systems, act as substitutes in the controller design process due to having possibly lower dimensions and simple interconnection topologies. Second, an automatic controller synthesis scheme is proposed by constructively deriving finite abstractions of infinite approximations of original continuous systems. The proposed automated synthesis of embedded control software holds the potential to develop complex yet reliable large-scale CPS while considerably reducing verification and validation costs.

Bio:

Majid Zamani is an assistant professor in the Department of Electrical and Computer Engineering at Technical University of Munich where he leads the Hybrid Control Systems Group. He received a Ph.D. degree in Electrical Engineering and an MA degree in Mathematics both from University of California, Los Angeles in 2012, and an M.Sc. degree in Electrical Engineering from Sharif University of Technology in 2007. From September 2012 to December 2013, he was a postdoctoral researcher in the Delft Centre for Systems and Control at Delft University of Technology. Between December 2013 and May 2014, he was an assistant professor at Delft University of Technology.


Human machine teaming architectures: probabilistic shared control and the lower bounding property

May 04, 2016, 2-3pm, 240 Bechtel, Peter Trautman, Galois INC.

Slides

Abstract

Shared control fuses operator inputs and autonomy inputs into a single command. However, if the environment or the operator exhibits Gaussian multimodality (common to search and rescue robots, teleoperated robots facing communication degradation, robotic prosthetics, and assistive driving technologies), many state of the art approaches (e.g., linear blending) are suboptimal with respect to safety, efficiency, and operator-autonomy agreement (see our recent paper at http://arxiv.org/pdf/1506.06784.pdf). To prove and rectify the above sub-optimalities, we introduced probabilistic shared control (PSC), which simultaneously optimizes autonomy objectives and operator-autonomy agreement under multimodal conditions. Importantly, because PSC optimizes operator-autonomy agreement, it exhibits stronger "centaur"-like properties than the state of the art: PSC naturally leverages the complementary abilities of the human and the machine.
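
As a toy illustration of why linear blending can fail under multimodality while an agreement-maximizing fusion does not, consider the following one-dimensional sketch (all numbers are invented for illustration; see the linked paper for the actual PSC formulation):

import numpy as np

def gaussian(u, mean, var=0.05):
    return np.exp(-0.5 * (u - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

u = np.linspace(-2.0, 2.0, 401)                                   # candidate steering commands
p_operator = gaussian(u, -1.0)                                    # operator clearly wants the left pass
p_autonomy = 0.5 * gaussian(u, -1.0) + 0.5 * gaussian(u, +1.0)    # autonomy: either pass is safe

u_blend = 0.5 * (-1.0) + 0.5 * (+1.0)                 # linear blend of operator (-1) and planner (+1)
u_fused = u[np.argmax(p_operator * p_autonomy)]       # command maximizing operator-autonomy agreement

print("linear blend:", u_blend)                                   # 0.0, i.e. straight at the obstacle
print("agreement-maximizing command:", round(float(u_fused), 2))  # close to -1.0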

More broadly, a disquieting paradox exists: human machine teams (in a general sense, to include both HCI and HRI) often perform worse than either the human or machine would perform alone. We thus argue for a strong but essential condition for any human machine team: team performance should never be worse than team member performance (the lower bounding property). Further, this property should be invariant to change in human capability, human modeling, autonomy capability, and environment. We consider case studies in which the lower bounding property is violated (linear blending under multimodal conditions, fully autonomous reactive path planners in human crowds) and where it is preserved (PSC using human models of varying fidelity, fully autonomous joint cooperative collision avoidance planners in human crowds).

Bio:

Pete Trautman received his B.S. in Physics and Applied Mathematics from Baylor University in 2000. He then entered the United States Air Force, serving first as an analyst at the National Air and Space Intelligence Center, and then as a program manager/researcher at the Sensors Directorate (RYAT). In 2005, he returned to graduate school at Caltech, completing his Ph.D. in Control and Dynamical Systems in 2012. His thesis research focused on robot navigation in dense human crowds, the result of which was a probabilistic model of human robot cooperation and a 6 month case study in Caltech's student cafeteria. From 2012 until early 2014, Pete was a Senior Engineer at the Boeing Company, where he developed navigation and localization technology for commercial aircraft assembly robots. Additionally, he served as the sensing team lead for Caltech's DARPA Grand Challenge entry in 2006, has consulted for Toyon Research and Applied Minds Inc., and has conducted research in physics at the Los Alamos National Laboratory, the University of California at San Diego, and Worcester Polytechnic Institute. He was a Best Paper Finalist at ICRA 2013 for his work on autonomous crowd navigation. He now works at Galois, Inc. in Portland, Oregon.


The One Hundred Year Study on Artificial Intelligence: An Enduring Study on AI and its Influence on People and Society

May 10, 2016, 4-5pm, 250 SDH, Eric Horvitz, Microsoft Research, Redmond.

Slides

Abstract

I will take the opportunity of a DREAM seminar to provide an update on the One Hundred Year Study on AI. I will describe the background and status of the project, including the roots of the effort in earlier experiences with the 2008-09 AAAI Panel on Long-Term AI Futures that culminated in the AAAI Asilomar meeting. I will reflect about several directions for investigation, highlighting opportunities for reflection and investment in proactive research, monitoring, and guidance. I look forward to comments and feedback from seminar attendees.

Bio:

Eric Horvitz is a technical fellow and director of the Microsoft Research lab at Redmond. He has been elected fellow of AAAI, ACM, and the National Academy of Engineering (NAE). He served as president of the AAAI and has served on advisory boards for the Allen Institute for Artificial Intelligence, NSF, NIH, DARPA, and the Computing Community Consortium (CCC). More information can be found at http://research.microsoft.com/~horvitz.


Fall 2016 Seminars

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Fall 2016

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Mondays from 4.10-5.00 p.m. in 250 Sutardja Dai Hall.

Similar to last year, DREAMS has joined forces with the Control Theory Seminar and the CITRIS People and Robots (CPAR) Seminar.

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which in turn includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact Dorsa Sadigh. If you want to subscribe to our mailing list, please drop me a line.

Seminars from previous semesters can be found here.

Schedule

Alessandro Abate August 22, 2016
Mykel Kochenderfer August 29, 2016
Mark Mueller September 12, 2016
Joao Hespanha September 19, 2016
Tom Henzinger September 22, 2016
Meeko Oishi September 26, 2016
Arthur Krener September 29, 2016
Marilena Vendittelli October 04, 2016
Aws Albarghouthi October 10, 2016
Tom Griffiths October 17, 2016
Nancy Amato October 24, 2016
Subramanian Ramamoorthy October 26, 2016
Ian Mitchell November 07, 2016
Marta Kwiatkowska November 08, 2016
Marco Pavone November 14, 2016
Hamsa Balakrishnan November 21, 2016
Karl Johansson November 28, 2016
Yon Visell December 05, 2016

Data-driven and model-based quantitative verification and correct-by-design synthesis of CPS

Aug 22, 2016, 4-5pm, 250 SDH, Alessandro Abate, University of Oxford.

Slides

Abstract

I discuss a new, formal, measurement-driven and model-based automated verification and synthesis technique to be applied to quantitative properties over systems with partly unknown dynamics. I focus on physical systems (with spatially continuous variables, possibly noisy) driven by external inputs and accessed under noisy measurements, and suggest that the approach can be generalised to CPS as well. I formulate this new setup as a data-driven Bayesian model inference problem, formally embedded within a quantitative, model-based verification procedure.

While emphasising the generality of the approach over a number of diverse model classes, this talk zooms in on systems represented via stochastic hybrid models (SHS), which are probabilistic models with heterogeneous dynamics (continuous/discrete, i.e. hybrid, as well as nonlinear); as such, SHS are quite a natural framework for CPS. With a focus on model-based verification procedures, I provide a characterisation of general temporal specifications based on Bellman's dynamic programming. The computation of such properties, and the synthesis of related control architectures optimising properties of interest, are attained via the development of abstraction techniques based on quantitative approximations. The theory is complemented by algorithms, all packaged in a software tool (FAUST^2) that is freely available to users.
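
The flavor of the dynamic-programming characterisation over a finite abstraction can be seen in a small bounded-reachability computation (the Markov chain below is invented for illustration and is far simpler than the abstractions FAUST^2 produces):

import numpy as np

P = np.array([[0.8, 0.2, 0.0],    # 3-state abstraction; state 2 is the (absorbing) goal
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])
goal = np.array([0.0, 0.0, 1.0])

V = goal.copy()
for _ in range(10):               # probability of reaching the goal within 10 steps
    V = np.maximum(goal, P @ V)   # Bellman-style backward recursion
print(np.round(V, 3))             # per-initial-state reach probability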

Bio:

Alessandro Abate is an Associate Professor in the Department of Computer Science at the University of Oxford, and a Fellow of the Alan Turing Institute in London (UK). He received a Laurea in Electrical Engineering in October 2002 from the University of Padova (IT), an MS in May 2004 and a PhD in December 2007, both in Electrical Engineering and Computer Sciences, at UC Berkeley (USA). He has been an International Fellow in the CS Lab at SRI International in Menlo Park (USA), and a PostDoctoral Researcher at Stanford University (USA), in the Department of Aeronautics and Astronautics. From June 2009 to mid 2013 he has been an Assistant Professor at the Delft Center for Systems and Control, TU Delft (NL).

His research interests are in the formal verification and control of complex (probabilistic and hybrid) models, in the integration of data-based learning aspects within these deductive techniques, and in the application of complex models over a number of domains, particularly in energy and in systems biology.


Building Trust in Decision Support Systems for Aerospace

Aug 29, 2016, 4-5pm, 250 SDH, Mykel Kochenderfer, Stanford University.

Slides

Abstract

Starting in the 1970s, decades of effort went into building human-designed rules for providing automatic maneuver guidance to pilots to avoid mid-air collisions. The resulting system was later mandated worldwide on all large aircraft and significantly improved the safety of the airspace. Recent work has investigated the feasibility of using partially observable Markov decision processes (POMDPs) and advanced computational techniques to derive optimized decision logic that better handles various sources of uncertainty and balances competing system objectives. This approach has resulted in a system called Airborne Collision Avoidance System (ACAS) X that significantly reduces the risk of mid-air collision while also reducing the alert rate, and it is in the process of becoming the next international standard. Using ACAS X as a case study, this talk will discuss lessons learned about building trust in advanced decision support systems. This talk will also outline research challenges in facilitating greater levels of automation and integrating unmanned aircraft into the airspace.
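
As a caricature of decision logic under uncertainty (the belief, advisory set, and cost table below are all invented; ACAS X itself uses an optimized lookup table computed offline from a POMDP formulation):

import numpy as np

belief = np.array([0.7, 0.3])                  # P(intruder levels off), P(intruder climbs)
cost = np.array([[0.0, 50.0],                  # row 0: issue no alert
                 [5.0,  1.0]])                 # row 1: advise a descent (notional cost units)
expected = cost @ belief                       # expected cost of each advisory under the belief
advisory = ["no alert", "descend"][int(np.argmin(expected))]
print(expected, "->", advisory)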

Bio:

Mykel Kochenderfer is Assistant Professor of Aeronautics and Astronautics at Stanford University. Prior to joining the faculty, he was at MIT Lincoln Laboratory where he worked on airspace modeling and aircraft collision avoidance, with his early work leading to the establishment of the ACAS X program. He received a Ph.D. from the University of Edinburgh and B.S. and M.S. degrees in computer science from Stanford University. Prof. Kochenderfer is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. He is the author of "Decision Making under Uncertainty: Theory and Application" from MIT Press. He is a third generation pilot.


Multicopter dynamics and control: surviving the complete loss of multiple actuators and quickly generating trajectories

Sep 12, 2016, 4-5pm, 250 SDH, Mark Mueller, University of California, Berkeley.

Slides

Abstract

Multicopters are increasingly becoming part of our everyday lives, with current and future applications including delivery services, entertainment, and aerial sensing. These systems are expected to be safe and to have a high degree of autonomy. This talk will look at the dynamics of a multicopter, and then discuss multicopter failsafe strategies and fast trajectory generation.

The first part of the talk presents an actuator redundancy scheme for multicopters, allowing, e.g., a quadcopter to maintain controlled flight despite the complete failure of half its actuators. This redundancy is a crucial part of the safety concept of a NYC Broadway musical drone performance, where eight quadcopters perform eight shows per week in front of an audience of 2,000 people, without any protective nets. The related 'monospinner' will also be presented: with only one moving part, it is potentially the world's mechanically simplest, controllable flying vehicle.
In the second part, a computationally lightweight strategy for generating quadrocopter motion primitives is presented. This trajectory generation can evaluate and compare on the order of one million motion primitives per second on a standard laptop computer. These motion primitives are designed to be fast to compute and verify (at the expense of optimality), while remaining flexible with respect to initial and final states. This makes it possible to encode highly dynamic tasks with complicated end goals, such as catching a thrown ball.
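
One simple way to build such a point-to-point primitive is sketched below, assuming a quintic polynomial per axis (this is illustrative only and not necessarily the speaker's exact formulation): match initial and final position, velocity, and acceleration over a chosen duration, then check an actuator-related quantity along the result.

import numpy as np

def quintic_primitive(p0, v0, a0, pf, vf, af, T):
    # Coefficients c[0..5] of p(t) = sum_i c[i] t^i matching the boundary conditions.
    A = np.array([
        [1, 0,  0,     0,      0,       0],
        [0, 1,  0,     0,      0,       0],
        [0, 0,  2,     0,      0,       0],
        [1, T,  T**2,  T**3,   T**4,    T**5],
        [0, 1,  2*T,   3*T**2, 4*T**3,  5*T**4],
        [0, 0,  2,     6*T,    12*T**2, 20*T**3],
    ])
    b = np.array([p0, v0, a0, pf, vf, af])
    return np.linalg.solve(A, b)

c = quintic_primitive(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, T=1.5)
t = np.linspace(0.0, 1.5, 151)
acc = 2*c[2] + 6*c[3]*t + 12*c[4]*t**2 + 20*c[5]*t**3
print("peak |acceleration| along the primitive:", float(np.max(np.abs(acc))))  # cheap feasibility check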

Bio:

Mark W. Mueller completed his PhD studies, advised by Prof. Raffaello D'Andrea, at the Institute for Dynamic Systems and Control at ETH Zurich at the end of 2015. He received a bachelor's degree from the University of Pretoria and a master's degree from ETH Zurich in 2011, both in Mechanical Engineering. After working for a brief period at a startup company, he joined the Mechanical Engineering Department at Berkeley in August of this year.


Opportunities and Challenges in Control Systems arising from Ubiquitous Communication and Computation

Sep 19, 2016, 4-5pm, 250 SDH, Joao Hespanha, University of California, Santa Barbara.

Slides

Abstract

Advances in VLSI (Very Large Scale Integration) design and fabrication have resulted in the availability of low-cost, low-power, small-sized devices that have significant computational power and are able to communicate wirelessly. In addition, advances in MEMS (Micro Electric Mechanical Systems) technology have resulted in wide availability of solid-state sensors and actuators. The net result is ubiquitous sensing, communication, and computation that can be incorporated into small low-power devices.

In this talk, I will discuss how the above-mentioned technological advances present important opportunities and interesting challenges for control system designers. To this end, I will discuss how the introduction of digital communication in control loops gives rise to a need for new tools for the design and analysis of feedback control systems. I will also describe recent work demonstrating that feedback control based on on-line optimization is a viable approach to solving a wide range of control problems.

Bio:

João P. Hespanha received his Ph.D. degree in electrical engineering and applied science from Yale University, New Haven, Connecticut in 1998. From 1999 to 2001, he was Assistant Professor at the University of Southern California, Los Angeles. He moved to the University of California, Santa Barbara in 2002, where he currently holds a Professor position with the Department of Electrical and Computer Engineering. Prof. Hespanha is the Chair of the Department of Electrical and Computer Engineering and a member of the Executive Committee for the Institute for Collaborative Biotechnologies (ICB). Dr. Hespanha is a Fellow of the IEEE and was an IEEE distinguished lecturer from 2007 to 2013. His current research interests include hybrid and switched systems; multi-agent control systems; distributed control over communication networks (also known as networked control systems); the use of vision in feedback control; stochastic modeling in biology; and network security.


The Quest for Average Response Time

Sep 22, 2016, 4-5pm, 250 SDH, Tom Henzinger, IST Austria.

Slides

Abstract

Responsiveness, the requirement that every request to a system be eventually handled, is one of the fundamental liveness properties of a reactive system and lies at the heart of all methods for specifying and verifying liveness. Average response time is a quantitative measure of the responsiveness requirement commonly used in performance evaluation. The static computation of average response time has proved remarkably elusive even for finite-state models of reactive systems. We present, for the first time, a robust formalism that allows the specification and computation of quantitative temporal properties including average response time. The formalism is based on nested weighted automata, which can serve as monitors for measuring the response time of a reactive system. We show that the average response time can be computed in exponential space for nondeterministic finite-state models of reactive systems and in polynomial time for probabilistic finite-state models. This is joint work with Krishnendu Chatterjee and Jan Otop.
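
As a minimal illustration of what such a monitor measures, the sketch below computes the average response time of a finite request/grant trace (the trace semantics here are simplified and assumed; the paper's definitions are over behaviors of nested weighted automata):

def average_response_time(trace):
    # Average number of steps between each request and the grant that serves it
    # (requests are served first-come first-served in this toy semantics).
    pending, waits = [], []
    for step, event in enumerate(trace):
        if event == "req":
            pending.append(step)
        elif event == "grant" and pending:
            waits.append(step - pending.pop(0))
    return sum(waits) / len(waits) if waits else 0.0

print(average_response_time(["req", "idle", "grant", "req", "grant"]))  # (2 + 1) / 2 = 1.5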

Bio:

Thomas A. Henzinger is president of IST Austria (Institute of Science and Technology Austria). He holds a Dipl.-Ing. degree in Computer Science from Kepler University in Linz, Austria, an M.S. degree in Computer and Information Sciences from the University of Delaware, a Ph.D. degree in Computer Science from Stanford University (1991), and a Dr.h.c. from Fourier University in Grenoble, France (2012) and from Masaryk University in Brno, Czech Republic (2015). He was Assistant Professor of Computer Science at Cornell University (1992-95), Assistant Professor (1996-97), Associate Professor (1997-98), and Professor (1998-2004) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He was also Director at the Max-Planck Institute for Computer Science in Saarbruecken, Germany (1999) and Professor of Computer and Communication Sciences at EPFL in Lausanne, Switzerland (2004-09). His research focuses on modern systems theory, especially models, algorithms, and tools for the design and verification of reliable software, hardware, and embedded systems. His HyTech tool was the first model checker for mixed discrete-continuous systems. He is an ISI highly cited researcher, a member of Academia Europaea, a member of the German Academy of Sciences (Leopoldina), a member of the Austrian Academy of Sciences, a Fellow of the AAAS, a Fellow of the ACM, and a Fellow of the IEEE. He has received the Milner Award of the Royal Society, the Wittgenstein Award of the Austrian Science Fund, and an ERC Advanced Investigator Grant.


Collaborative control for human-in-the-loop systems: Optimal interface design and reachability-based collaborative navigation

Sep 26, 2016, 4-5pm, 250 SDH, Meeko Oishi, University of New Mexico.

Slides

Abstract

Methods for the analysis and design of human-in-the-loop systems must account for interactions between the automation, the human, and the environment. We consider two problems: 1) user-interface design, and 2) collaborative navigation in dynamic, uncertain environments. The user-interface, which provides information to the user about the underlying automation and allows the user to issue input commands to the system, is key for enabling situational awareness and trust of the automation, yet is often designed in an ad-hoc fashion. We use sensor placement techniques to determine the optimal elements for display in the user-interface, and exploit submodularity properties to facilitate solution of the resulting combinatorial optimization problem. We additionally consider the problem of collaborative navigation in dynamic, uncertain environments. While assurances of safety are computationally intractable, solutions that exploit the forward reachable set are real-time compatible. We describe a method to compute the forward stochastic reachable set and its probability measure efficiently, which enables robust performance in difficult planning problems.
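
A brute-force Monte Carlo caricature of a forward stochastic reach computation is sketched below (the dynamics, noise levels, and target box are invented; the talk describes an efficient method rather than plain sampling):

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1],          # simple discrete-time double-integrator-like map
              [0.0, 1.0]])
x0 = np.array([0.0, 1.0])
samples = x0 + rng.normal(0.0, 0.05, size=(10_000, 2))                   # initial-state uncertainty
next_states = samples @ A.T + rng.normal(0.0, 0.02, size=(10_000, 2))    # process noise
in_box = np.all((next_states > [0.05, 0.9]) & (next_states < [0.2, 1.1]), axis=1)
print("P(state in target box after one step):", in_box.mean())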

Bio:

Meeko Oishi is an Associate Professor of Electrical and Computer Engineering at the University of New Mexico. She received the Ph.D. (2004) and M.S. (2000) in Mechanical Engineering from Stanford University, and a B.S.E. in Mechanical Engineering from Princeton University (1998). Her research interests include nonlinear dynamical systems, hybrid control theory, control of human-in-the-loop systems, reachability analysis, and modeling of motor performance and control in Parkinson's disease. She previously held a faculty position at the University of British Columbia at Vancouver. She is the recipient of the NSF CAREER Award, the UNM Regents' Lecturer Award, the UNM Teaching Fellowship, the Peter Wall Institute Early Career Scholar Award, the Truman Postdoctoral Fellowship in National Security Science and Engineering, and the George Bienkowski Memorial Prize, Princeton University. She was a Summer Faculty Fellow at AFRL Space Vehicles Directorate, 2013-2015.


Adaptive Horizon Model Predictive Control

Sep 29, 2016, 4-5pm, 250 SDH, Arthur Krener, University of California, Davis.

Slides

Abstract

Adaptive Horizon Model Predictive Control (AHMPC) is a scheme for varying the horizon length of Model Predictive Control (MPC) as needed. Its goal is to achieve stabilization with horizons as short as possible so that it can be used on faster or more complicated dynamic processes. Besides the standard requirements of MPC, including a terminal cost that is a control Lyapunov function, AHMPC requires a terminal feedback that turns the control Lyapunov function into a standard Lyapunov function in some domain around the operating point. This domain need not be known explicitly. Just as MPC does not compute off-line the optimal cost and the optimal feedback over a large domain, but instead computes these quantities on-line when and where they are needed, AHMPC does not compute off-line the domain on which the terminal cost is a control Lyapunov function; instead, it determines on-line whether a state is in this domain.

AHMPC verifies in real time that stabilization is being achieved. This is particularly important when dealing with nonlinear systems because the nonlinear programs that MPC passes to the nonlinear program solver are typically nonconvex. The solver may therefore return local rather than global solutions, and these may fail to be stabilizing. AHMPC detects when the solutions are not stabilizing; if so, different initial guesses must be passed to the solver to find global solutions, or at least local solutions that are stabilizing.
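
A schematic of the adaptive-horizon loop is sketched below (solve_mpc, step, V, and terminal_feedback are hypothetical placeholders, and the control-theoretic details of the talk are elided):

def adaptive_horizon_step(x, N, N_max, solve_mpc, step, V, terminal_feedback):
    # Grow the horizon until the predicted terminal state appears stabilizable,
    # i.e. the terminal control Lyapunov function V decreases one step further
    # under the terminal feedback.
    while N <= N_max:
        u_seq, x_terminal = solve_mpc(x, N)            # finite-horizon optimal control
        x_next = step(x_terminal, terminal_feedback(x_terminal))
        if V(x_next) < V(x_terminal):                  # on-line check replaces an off-line domain
            return u_seq[0], N                         # apply the first input, keep this horizon
        N += 1                                         # otherwise lengthen the horizon and retry
    raise RuntimeError("no stabilizing horizon found up to N_max")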

Bio:

Arthur Krener is a mathematician whose research interests are in developing methods for the control and estimation of nonlinear dynamical systems and stochastic processes. In 1971 he received the PhD in Mathematics from the University of California, Berkeley and joined the faculty of the University of California, Davis. He retired from UCD in 2006 as a Distinguished Professor of Mathematics and he currently is a Distinguished Visiting Professor at the Naval Postgraduate School. He has also held visiting positions at Harvard University, Imperial College, NASA Ames Research Center, the University of California, Berkeley, the University of Paris, the University of Maryland, the University of Padua and North Carolina State University. He is a member of the American Mathematical Association, a Fellow of the Society for Industrial and Applied Mathematics and a Life Fellow of the Institute of Electrical and Electronics Engineers. Krener has held a variety of administrative posts, including Chair of the Department of Mathematics, UC Davis, member of the Committee on Academic Personnel, UC Davis and Chair of the SIAM Activity Group on Control and Systems Theory.


Interaction force reconstruction for humanoid robots

Oct 04, 2016, 3-4pm, 730 SDH, Marilena Vendittelli, Sapienza University of Rome.

Slides

Abstract

Humanoids are, by definition, robotic systems for which the control of interaction forces with the environment is essential to the accomplishment of any loco-manipulation task. Any feedback controller of the interaction forces invariably requires some form of measurement or estimation of the forces actually exchanged between the robot and the environment. In this talk we will discuss the peculiarities of the interaction force estimation problem in the case of humanoids, and we will propose methods that do not make use of force sensors and do not limit the interaction to specific points of the humanoid body. The illustration of the proposed approach is completed by experimental validation on the robot NAO. The talk is based on the work in [1] and [2], where two different approaches to interaction force estimation are considered. In [1], an accurate reconstruction of contact forces and identification of the contact point is obtained using joint position and motor current measurements under the assumption of static equilibrium. The method is designed for manipulation tasks or multi-contact locomotion, where accurate estimation of the contact forces is important for task accomplishment, including equilibrium preservation. Interaction forces are, instead, perceived in [2] through a measure of the equilibrium perturbation. The reconstructed force is used as a guiding force in collaborative tasks, and the humanoid locomotion results from the proposed equilibrium preservation strategy.

BIBLIO [1] T. Mattioli, M. Vendittelli, "Interaction force reconstruction for humanoid robots," IEEE Robotics and Automation Letters, 2016. DOI: 10.1109/LRA.2016.2601345

[2] M. Bellaccini, A. Paolillo, L. Lanari, M. Vendittelli, "Manual guidance of humanoid robots without force sensors: preliminary experiments with NAO," 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, May 2014. DOI: 10.1109/ICRA.2014.6907003
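
As a static-equilibrium caricature of the idea in [1] (the contact Jacobian and all numbers below are invented), an external contact force can be reconstructed from the residual between torques inferred from motor currents and the torques predicted by the gravity model:

import numpy as np

J = np.array([[0.5, 0.2],             # contact Jacobian (2 joints, planar force), assumed known
              [0.1, 0.4]])
tau_gravity = np.array([1.0, 0.5])    # gravity torques predicted by the model
tau_measured = np.array([1.3, 0.9])   # joint torques inferred from motor currents
tau_residual = tau_measured - tau_gravity
F_contact = np.linalg.solve(J.T, tau_residual)   # static equilibrium: J^T F = tau_residual
print(F_contact)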

Bio:

Marilena Vendittelli has been an Assistant Professor at Sapienza University of Rome, Department of Computer, Control and Management Engineering (DIAG), since 2001, and is a member of the Robotics Laboratory at DIAG. She worked at LAAS-CNRS in Toulouse, France, first as a visiting scholar in 1995-96 and then as a post-doc in 1997-98, funded by a Marie Curie Research Training Grant; that research activity focused on planning and control of general (i.e., not transformable into a canonical form) nonholonomic systems. She received the PhD degree in Systems Engineering from Sapienza University of Rome in 1997. She was a Visiting Scholar at the Robotics Institute, Carnegie Mellon University, in 2005 and at the Courant Institute, New York University, in 2012.

Her research interests are in robot motion planning and control, with emphasis on non-holonomic and redundant systems of which wheeled mobile robots and humanoids are respectively the most notable examples. In particular, over the years her research activity has dealt with visual control, localization, motion planning, physical interaction for humanoids; modeling and control of UAVs; sensor-based exploration of unknown environments with teams of mobile robots; task-constrained motion planning for kinematically redundant systems; probabilistic methods for robot motion planning and exploration; planning and control of wheeled mobile robots; steering and stabilization of general nonholonomic systems; nilpotent approximations of systems with singularities.

On these research themes she has published more than 60 papers in international conferences and journals. The total number of citations received is 2142 according to Google Scholar.

From January 2010 to December 2013 she served as Associate Editor for the IEEE Transactions on Robotics. From 2009 to 2012 she was an Associate Editor on the ICRA Conference Editorial Board, and from 2008 to 2011 for the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). She has contributed to the organization of several international scientific events, among which ICRA 2007 as Registration Chair.

She has participated in several EC-funded projects, including SAPHARI (FP7), COMANOID (H2020), and SIMPLEXITY (H2020).


Proving that Programs do not Discriminate

Oct 10, 2016, 4-5pm, 250 SDH, Aws Albarghouthi, University of Wisconsin-Madison.

Slides

Abstract

Programs have become powerful arbitrators of a range of significant decisions with far-reaching societal impact: hiring, welfare allocation, prison sentencing, and policing, among an ever-growing list. In such scenarios, the program is carrying out a sensitive task, and could potentially be discriminating illegally, advertently or inadvertently, against a protected group, e.g., African Americans in the United States.

With the range and sensitivity of algorithmic decisions expanding by the day, the question of whether an algorithm is fair (unbiased) has captured the attention of a broad spectrum of experts, from government regulators and law scholars to computer science theorists. Ultimately, algorithmic fairness is a question about programs and their properties: Does a program discriminate against a subset of the population? In this talk, I will view algorithmic fairness through the lens of program verification. Specifically, I will begin by formalizing the notion of fairness as a probabilistic property of programs. To enable automated verification of fairness, I will show how to reduce the probabilistic verification question to that of volume computation over first-order formulas, and describe a new symbolic volume computation algorithm. Finally, I will present results of applying FairSquare -- the first fairness verification tool -- to a variety of decision-making programs.
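
A toy sampling-based view of fairness as a probabilistic program property is sketched below (the population model and decision rule are invented; FairSquare reasons about such programs symbolically via volume computation rather than by sampling):

import random

def population():
    # Invented population model: group membership and a correlated feature.
    minority = random.random() < 0.3
    experience = random.gauss(8 if minority else 10, 3)
    return minority, experience

def hire(experience):
    return experience > 9              # the decision program under scrutiny

def demographic_ratio(n=100_000):
    counts = {True: [0, 0], False: [0, 0]}      # group -> [hired, total]
    for _ in range(n):
        minority, years = population()
        counts[minority][0] += hire(years)
        counts[minority][1] += 1
    p_minority = counts[True][0] / counts[True][1]
    p_majority = counts[False][0] / counts[False][1]
    return p_minority / p_majority      # a group-fairness ratio; e.g. flag values below 0.8

print(round(demographic_ratio(), 3))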

Bio:

Aws Albarghouthi is an Assistant Professor of Computer Science at the University of Wisconsin-Madison. He works on software verification, analysis, and synthesis.


Bounded optimality and human cognition

Oct 17, 2016, 4-5pm, 250 SDH, Tom Griffiths, University of California, Berkeley.

Slides

Abstract

Researchers seeking to include models of human cognition in systems that involve interaction between machines and people face an unsatisfying choice: modeling human behavior as completely rational (which is wrong), or trying to formalize the many heuristics identified by psychologists (which can be hard to generalize). I will talk about a different approach to solving this problem that combines the strengths of rational models with the ability to capture the nuances of human behavior. This framework, which we call "resource rationality", is based on the idea of bounded optimality developed in the AI literature: thinking about rationality not in terms of ideal performance obtainable with infinite computational resources, but in terms of the solution that makes the best use of our limited computational resources. I will show that this approach can account for behavior related to classic decision-making heuristics, which has traditionally been taken as evidence against human rationality, and discuss some of the implications of this approach in settings such as physical reasoning and reinforcement learning.

Bio:

Tom Griffiths is a Professor of Psychology and Cognitive Science at the University of California, Berkeley, where he is also the Director of the Institute of Cognitive and Brain Sciences. His research explores connections between human and machine learning, using ideas from statistics and artificial intelligence to try to understand how people solve the challenging computational problems they encounter in everyday life.


Sampling-Based Motion Planning: From Intelligent CAD to Crowd Simulation to Protein Folding

Oct 24, 2016, 4-5pm, 250 SDH, Nancy Amato, Texas A&M University.

Slides

Abstract

Motion planning has application in many domains such as robotics, animation, virtual prototyping and training, and even protein folding and drug design. Surprisingly, sampling-based planning methods have proven effective on problems from all these domains. In this talk, we describe sampling-based planning and give an overview of some variants developed in our group. We describe in more detail our work related to virtual prototyping, crowd simulation, and protein folding. For virtual prototyping, we show that in some cases a hybrid system incorporating both an automatic planner and haptic user input leads to superior results. For crowd simulation, we describe techniques for evacuation planning and for evaluating architectural designs. Finally, we describe our application of sampling-based motion planners to simulate molecular motions, such as protein and RNA folding.

Bio:

Nancy M. Amato is Regents Professor and Unocal Professor of Computer Science and Engineering at Texas A&M University where she co-directs the Parasol Lab. Her main areas of research focus are motion planning and robotics, computational biology and geometry, and parallel and distributed computing. Amato received undergraduate degrees in Mathematical Sciences and Economics from Stanford University, and M.S. and Ph.D. degrees in Computer Science from UC Berkeley and the University of Illinois, respectively. She was program chair for the 2015 IEEE International Conference on Robotics and Automation (ICRA) and for Robotics: Science and Systems (RSS) in 2016. She is an elected member of the CRA Board of Directors (2014-2017), is co-Chair of CRA-W (2014-2017), and was co-chair of the NCWIT Academic Alliance (2009-2011). She received the 2014 CRA Haberman Award, the inaugural NCWIT Harrold/Notkin Research and Graduate Mentoring Award in 2014, the 2013 IEEE HP/Harriet Rigas Award, and a Texas A&M AFS university-level teaching award in 2011. She received an NSF CAREER Award and is a AAAS Fellow, an ACM Fellow and an IEEE Fellow.


Understanding Others: Robot Learning in Interactions

Oct 26, 2016, 2-3pm, 250 SDH, Subramanian Ramamoorthy, University of Edinburgh.

Slides

Abstract

We are motivated by the problem of building autonomous robots that are able to work collaboratively with other agents, such as human co-workers. One key attribute of such an autonomous system is the ability to make predictions about the actions and intentions of other agents in a dynamic environment - both to interpret the activity context as it is being played out and to adapt actions in response to that contextual information.

I will present recent results addressing questions of how to efficiently represent the hierarchical nature of activities, how to rapidly make inferences about latent factors, such as hidden goals and intent, and how to make optimal decisions in interactions without explicit prior coordination with unknown partners.

Firstly, I will describe a procedure for topological trajectory classification, using the concept of persistent homology, which enables unsupervised extraction of certain kinds of relational concepts in motion data. One use of this representation is in devising a multi-scale version of Bayesian recursive estimation, which is a step towards reliably grounding human instructions in the realized activity.

I will then describe work with a human-robot interface based on the joint use of mobile 3D eye tracking and vision for intention inference. We achieve this through the use of a probabilistic generative model of fixations conditioned on the task that the person is executing. Using preliminary experimental results, I will discuss how this approach is useful in the grounding of plan symbols to their detailed appearance in the environment.

Finally, I will present a model and algorithm for optimal decision making in interactions with unknown partners, based on the use of "policy types" within an incomplete information game, combining the benefits of Harsanyi's notion of types and Bellman's notion of optimality in sequential decisions. Using results from human-machine experiments, I will show how this algorithm achieves a better rate of coordination than alternate multi-agent learning algorithms.

Bio:

Dr. Subramanian Ramamoorthy is a Reader (Associate Professor) in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is a Coordinator of the EPSRC Robotarium Research Facility and an Executive Committee Member of the Edinburgh Centre for Robotics. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh, and has been a Visiting Professor at Stanford University and the University of Rome "La Sapienza". His research focus is on robot learning and decision-making under uncertainty, addressed through a combination of machine learning techniques, with emphasis on issues of transfer, online and reinforcement learning, as well as new representations and analysis techniques based on geometric/topological abstractions.


Using Model Checking Verifications at Runtime: The Proof is only Half the Battle

Nov 07, 2016, 4-5pm, 250 SDH, Ian Mitchell, University of British Columbia.

Slides

Abstract

Recent advances in model checking algorithms for continuous state systems allow us to demonstrate the existence of safe control policies robust to model error for cyber-physical systems (CPS) of practical interest, such as shared control drones or wheelchairs, or automated delivery of anesthesia. However, these verification results are only relevant if we can implement those policies. In this talk I will discuss investigations into three challenges that arise at runtime: The sampled data nature of CPS feedback control, runtime state uncertainty, and human-in-the-loop shared control for older adults with cognitive impairment.

Bio:

Ian M. Mitchell completed his doctoral work in engineering at Stanford University in 2002, spent a year as a postdoctoral researcher at the University of California at Berkeley, and is now an Associate Professor of Computer Science at the University of British Columbia. He is the author of the Toolbox of Level Set Methods, the first publicly available high accuracy implementation of solvers for dynamic implicit surfaces and the time dependent Hamilton-Jacobi equation that works in arbitrary dimension. His research interests include development of algorithms and software for nonlinear differential equations, formal verification, control and planning in cyber-physical and robotic systems, assistive technology and reproducible research.


Safety Verification of Deep Neural Networks

Nov 08, 2016, 4-5pm, 540 Cory, Marta Kwiatkowska, University of Oxford.

Slides

Abstract

Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop the first SMT-based automated verification framework for feed-forward multi-layer neural networks that works directly with the code of the network, exploring it layer by layer. We define safety for a region around a data point in a given layer by requiring that all points in the region are assigned the same class label. Working with a notion of a manipulation, a mapping between points that intuitively corresponds to a modification of an image, we employ discretisation to enable exhaustive search of the region. Our method can guarantee that adversarial examples are found for the given region and set of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network, and otherwise the network is declared safe for the given parameters. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks.
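
A brute-force caricature of the region search is sketched below (the classifier stand-in and manipulation set are invented; the talk's method is SMT-based, works layer by layer, and is exhaustive by construction):

import itertools
import numpy as np

def classify(x):
    return int(x.sum() > 1.0)          # stand-in for a trained network's predicted label

def find_adversarial(x, pixels, deltas=(-0.2, 0.2)):
    # Exhaustively apply a small set of discrete manipulations to selected pixels.
    base = classify(x)
    for combo in itertools.product(deltas, repeat=len(pixels)):
        x_mod = x.copy()
        for idx, d in zip(pixels, combo):
            x_mod[idx] = np.clip(x_mod[idx] + d, 0.0, 1.0)
        if classify(x_mod) != base:
            return x_mod               # a label change inside the region: adversarial example
    return None                        # region safe for this manipulation set

x = np.array([0.4, 0.4, 0.3])
print(find_adversarial(x, pixels=[0, 2]))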

Bio:

Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. She led the development of the PRISM model checker (www.prismmodelchecker.org), the leading software tool in the area, winner of the HVC Award 2016, and widely used for research and teaching. Applications of probabilistic model checking have spanned communication and security protocols, nanotechnology designs, power management, game theory, planning and systems biology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska gave the Milner Lecture in 2012 in recognition of "excellent and original theoretical work which has a perceived significance for practical computing" and was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm in 2014. Her research is supported by the ERC Advanced Grant VERIWARE "From software verification to everyware verification" and the EPSRC Programme Grant in Mobile Autonomy.


Sampling-Based Techniques for Planning and Control of Autonomous Spacecraft and Space Robots

Nov 14, 2016, 4-5pm, 250 SDH, Marco Pavone, Stanford University.

Slides

Abstract

In this talk I will present a novel guidance framework for safely and efficiently maneuvering autonomous aerospace vehicles in dynamic and cluttered environments. I will first discuss some of the unique aspects of the "spacecraft motion planning problem." Then, I will present a sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*) and discuss its adaptation to spacecraft motion planning. In particular, this entails deterministic convergence guarantees, tractable inclusion of differential constraints, and planning under uncertainty via Monte Carlo sampling. I will conclude the talk with an overview of related projects in the field of autonomous aerospace systems currently under investigation in my research group.
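
For readers unfamiliar with this family of methods, a generic sampling-based planner in a 2-D unit square is sketched below (this is a plain RRT for illustration only; FMT* differs in how it connects samples and carries optimality guarantees):

import math
import random

def rrt(start, goal, in_collision, step=0.05, iters=5000, goal_tol=0.05):
    # Grow a tree of collision-free states from `start` toward random samples.
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d < 1e-9:
            continue
        scale = min(1.0, step / d)
        new = (nodes[i][0] + scale * (sample[0] - nodes[i][0]),
               nodes[i][1] + scale * (sample[1] - nodes[i][1]))
        if in_collision(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:         # walk back up the tree to recover the path
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0.1, 0.1), (0.9, 0.9),
           in_collision=lambda p: 0.4 < p[0] < 0.6 and p[1] < 0.7)   # one box obstacle
print("path length (nodes):", len(path) if path else "no path found")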

Bio:

Dr. Marco Pavone is an Assistant Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on autonomous aerospace vehicles and large-scale robotic networks. He is a recipient of an NSF CAREER Award, a NASA Early Career Faculty Award, a Hellman Faculty Scholar Award, and was named NASA NIAC Fellow in 2011. His work has been recognized with best paper nominations or awards at the Field and Service Robotics Conference (2015), at the Robotics: Science and Systems Conference (2014), and at NASA symposia (2015). He is currently serving as an Associate Editor for the IEEE Control Systems Magazine.


Dealing with Delays Or: How I Learned to Stop Worrying And Love Air Travel

Nov 21, 2016, 4-5pm, 250 SDH, Hamsa Balakrishnan, Massachusetts Institute of Technology.

Slides

Abstract

Flight delays result in significant costs to passengers, airlines, and society as a whole. This fact motivates the development of models that can help characterize air traffic delays, and optimization algorithms that can help us reduce them.

In the first part of the talk, I will describe the development of a new class of networked system models of delay dynamics. These models reflect both the spatial properties (i.e., inter-airport interactions) and temporal patterns of delay propagation. They also provide features that can help assess various aspects of system resilience, and even predict the future evolution of delays.

In the second part of the talk, I will present an integer programming approach to mitigate flight delays by solving large-scale air traffic flow management problems in the presence of capacity uncertainties. The proposed stochastic optimization approach uses column generation to determine the optimal departure times, routes, and cancellation decisions for all flights in the air traffic network, allowing for recourse as the scenarios develop. Using nation-scale examples from the United States, I will demonstrate that the approach is scalable and fast enough for real-time implementation.
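
As a much-simplified illustration of the flavor of traffic flow management (the talk's approach is a stochastic integer program solved by column generation; the sketch below is only a greedy single-airport ground-holding heuristic):

def assign_slots(scheduled_times, capacity_per_slot):
    # Delay each flight to the earliest time slot with spare arrival capacity.
    used, delays = {}, []
    for t in sorted(scheduled_times):
        slot = t
        while used.get(slot, 0) >= capacity_per_slot:
            slot += 1                  # push the flight one slot later
        used[slot] = used.get(slot, 0) + 1
        delays.append(slot - t)
    return delays

print(assign_slots([0, 0, 0, 1, 1, 2], capacity_per_slot=2))   # [0, 0, 1, 0, 1, 0]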

Bio:

Hamsa Balakrishnan is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT). Before joining MIT, she was at the NASA Ames Research Center. She received her M.S. and PhD degrees from Stanford University, and a B.Tech. from the Indian Institute of Technology Madras. Her research is in the design, analysis, and implementation of control and optimization algorithms for large-scale cyber-physical infrastructures, with an emphasis on air transportation systems. She is the co-founder and chief scientist of Resilient Ops, Inc.

Prof. Balakrishnan was a recipient of the NSF CAREER Award in 2008, the Kevin Corker Award for Best Paper of ATM-2011, the inaugural CNA Award for Operational Analysis in 2012, the AIAA Lawrence Sperry Award in 2012, and the American Automatic Control Council's Donald P. Eckman Award in 2014. She is an Associate Fellow of the American Institute of Aeronautics and Astronautics.


Optimizing cooperative driving for goods transportation

Nov 28, 2016, 4-5pm, 250 SDH, Karl Johansson, KTH Royal Institute of Technology.

Slides

Abstract

The freight transportation sector is of great importance to our society and the demand for transportation is strongly linked to economic development. Despite the influence transportation has on energy consumption and the environment, road goods transportation today is mainly done by individual long-haulage trucks with no real-time coordination or global optimization. In this talk, we will discuss how this might change in the future through a more integrated transport system with an open market for transport assignments and the collaboration between individual trucks and fleets of trucks. Networked control challenges and solutions at various levels of this transportation system will be presented. Extensive experiments done on European highways will illustrate the state-of-the-art. The presentation will be based on joint work over the last ten years with collaborators at KTH and at the truck manufacturer Scania.

Bio:

Karl H. Johansson is Director of the Stockholm Strategic Research Area ICT The Next Generation and Professor at the School of Electrical Engineering, KTH Royal Institute of Technology. He received MSc and PhD degrees in Electrical Engineering from Lund University. He has held visiting positions at UC Berkeley, Caltech, NTU, HKUST Institute of Advanced Studies, and NTNU. His research interests are in networked control systems, cyber-physical systems, and applications in transportation, energy, and automation. He is a member of the IEEE Control Systems Society Board of Governors and the European Control Association Council. He has received several best paper awards and other distinctions, including a ten-year Wallenberg Scholar Grant, a Senior Researcher Position with the Swedish Research Council, and a Future Research Leader Award from the Swedish Foundation for Strategic Research. He is a Fellow of the IEEE and an IEEE Control Systems Society Distinguished Lecturer.


Engineering Touch

Dec 05, 2016, 4-5pm, 250 SDH, Yon Visell, University of California, Santa Barbara.

Slides

Abstract

I will describe recent work in my lab on haptics, soft electronics, and interactive systems. A longstanding goal in engineering has been to design technologies that are able to reflect the amazing perceptual and motor capabilities of biological systems for touch, including the human hand. This objective turns out to be difficult to achieve, due, not least, to our limited understanding of the mechanics underlying touch sensation, i.e. of what it is that is felt when we touch objects in the world. Some of the challenges involved can be traced to the complexity of the mechanical interactions, the high dimensionality of the signals, the multiple length scales, time scales, and physical regimes involved, and the sensitive dependence of what we feel on what we do - the way that touch-elicited mechanical signals depend on the way we move and contact objects. I will describe research that has aimed at addressing these challenges, and will explain how the results are informing the development of new technologies for wearable computing, virtual reality, and robotics.

Bio:

Yon Visell is Assistant Professor in the Electrical and Computer Engineering Department and in Media Arts and Technology at the University of California, Santa Barbara. He was previously Assistant Professor (2013-2015) in the Electrical and Computer Engineering Department at Drexel University and a Post-Doctoral Fellow (2011-2012) at the Institute of Intelligent Systems and Robotics, Université Pierre et Marie Curie Paris 06. He received his PhD degree in Electrical and Computer Engineering at McGill University's Center for Intelligent Machines (2011), and MA and BA degrees in Physics (Univ. Texas-Austin and Wesleyan Univ.). He spent more than 5 years in industrial R&D at high technology firms including Ableton, where he contributed to music software that is now used by artists worldwide, from Pete Townshend to Vijay Iyer and Nine Inch Nails. His work has received several awards, including a Google Faculty Research Award, and Best Paper awards at the 2010 and 2016 IEEE Haptics Symposium. His electronic music and artworks have been exhibited at some of the most prominent cultural venues worldwide.

