
Spring 2013 Seminars
Armin Wasicek, 30 Jul 2013
Last updated: 4 Sep 2013


Hybrid System Verification: Progress & Simulations

Jan 22, 2013, 4.10-5pm, Sayan Mitra, University of Illinois at Urbana-Champaign, USA.

Abstract: Design defects in embedded systems can be elusive and expensive. Verification research aims to provide algorithms that automate the process of finding design bugs and, in special cases, even prove correctness. In this talk, I will present two recent developments in this area from our research: (1) a framework for verification of progress properties and (2) an approach for proving properties from simulations or dynamic data. The first builds on a connection between termination analysis of programs and control theory. It provides algorithms for automatic stability verification of rectangular hybrid models. Augmented with Lyapunov functions, these algorithms can handle nonlinear hybrid models as well. The second approach combines static analysis of the system model with dynamically generated data (e.g. simulations or logs) for bounded verification of nonlinear and distributed systems of medium dimensions.

Bio: Sayan Mitra is an Assistant Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research aims to develop mathematical, algorithmic and software tools for design and analysis of distributed and cyber-physical systems. Sayan graduated from MIT in 2007 and spent one year as a post-doctoral researcher at the Center for the Mathematics of Information at Caltech. His research group has received several awards, most recently the National Science Foundation's CAREER Award in 2011, the AFOSR Young Investigator Research Program Award in 2012, the Best Paper Award at FORTE/FMOODS 2012, and a Samsung GRO Award in 2012.


Networking Infrastructure and Data Management for Cyber-Physical Systems

Feb 5, 2013, 4.10-5pm, Song Han, University of Texas at Austin, USA.

[Slides]

Abstract: A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems that are formed by networked sensors and actuators and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. Organizing the sensors and actuators inside each subsystem, and interconnecting these physically separated subsystems to achieve secure, reliable, and real-time communication, is a major challenge.
In this talk, I will first present a TDMA-based low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems that require flexible topology control, secure and reliable communication, and adjustable real-time service support. I will describe network management techniques for ensuring reliable routing and real-time services inside the subsystems, and data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate the proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer. This approach makes the underlying connectivity technologies transparent to application developers and thus enables rapid application development and efficient migration among different CPS platforms.
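As a rough sketch of what a TDMA schedule for such a network might look like (a generic illustration only, not the protocol from the talk; the slot count, channel list, and flows below are invented):

# Toy TDMA superframe: give each (sender -> receiver) flow a dedicated
# (time slot, channel) pair so transmissions never collide.
SLOTS_PER_FRAME = 10          # hypothetical superframe length
CHANNELS = [11, 15, 20, 25]   # hypothetical channel list

def build_schedule(flows):
    """flows: list of (sender, receiver) pairs, one slot each."""
    schedule = {}  # (slot, channel) -> (sender, receiver)
    for i, flow in enumerate(flows):
        slot = i % SLOTS_PER_FRAME
        channel = CHANNELS[(i // SLOTS_PER_FRAME) % len(CHANNELS)]
        if (slot, channel) in schedule:
            raise ValueError("out of slots: add channels or lengthen frame")
        schedule[(slot, channel)] = flow
    return schedule

flows = [("sensor1", "gateway"), ("sensor2", "gateway"),
         ("gateway", "actuator1")]
for (slot, ch), (src, dst) in sorted(build_schedule(flows).items()):
    print(f"slot {slot}, channel {ch}: {src} -> {dst}")

A real protocol of this kind layers retransmission slots, channel hopping, and key management on top of such a slot table; the point here is only that dedicated slots are what make the real-time guarantees analyzable.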

Bio: Song Han is currently a Postdoctoral Research Fellow at the University of Texas at Austin. He holds the B.S. degree in computer science from Nanjing University, the M.Phil. degree in computer science from City University of Hong Kong, and the Ph.D. degree in computer science from the University of Texas at Austin. His research interests include Cyber-Physical Systems, Real-Time and Embedded Sensing and Control Systems, Wireless Networks and Mobile Computing.


Automated and semi-automated memory management in real-time

Feb 19, 2013, 4.10-5pm, Jan Vitek, Purdue University, USA.

[Slides]

Abstract: This talk looks back at techniques for rescuing real-time programmers from the tyranny of object pools and manual memory management. I will present a retrospective look at region allocation and real-time garbage collection for hard real-time systems. I will demonstrate the impact of scheduling techniques on predictability and present experimental results obtained on the Fiji Java virtual machine.

Bio: Jan Vitek is a Full Professor and Faculty Scholar in the Computer Science Department at Purdue University. He led the development of the Ovm hard real-time Java virtual machine. He is a founder of Fiji Systems. His research interests span from programming language design and implementation to real-time and embedded systems. He is the chair of ACM SIGPLAN.


DREAMS tutorial: The particle filter

Feb 20, 2013, 2.10-4pm, Thomas Schön, Linköping University, Sweden

[Slides]

Abstract: The particle filter provides a solution to the state inference problem in nonlinear dynamical systems. This problem is interesting in its own right, but it also shows up as a sub-problem in many relevant areas, such as sensor fusion and nonlinear system identification. The aim of this tutorial is to provide you with sufficient knowledge about the particle filter to allow you to start implementing particle filters on your own.

We will start out by providing a brief introduction to probabilistic modeling of dynamical systems in order to be able to clearly define the nonlinear state inference problem under consideration. The next step is to briefly introduce two basic sampling methods, rejection sampling and importance sampling. The latter is then exploited to derive a first working particle filter. The particle filter can be interpreted as a particular member of a general class of algorithms referred to as sequential Monte Carlo (SMC). This relationship is explored in some detail in order to provide additional understanding.

Particle filtering theory has developed at an increasing rate over the last two decades, and it is used more and more to solve applied problems. During this tutorial I focus on the method and, to some extent, on the underlying theory. Hence, I will not show any real-world examples; I will save those for my seminar on Thursday, where I will show how the particle filter has been instrumental in solving various nontrivial localization problems.
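For readers who want a concrete starting point before the tutorial, here is a minimal bootstrap particle filter in Python/NumPy (our sketch, not the tutorial's code; the model, noise levels, and particle count are invented for illustration):

import numpy as np

def bootstrap_particle_filter(y, f, h, q, r, x0, N=1000, seed=0):
    """Bootstrap filter for x[t+1] = f(x[t]) + v, y[t] = h(x[t]) + e,
    with v ~ N(0, q^2) and e ~ N(0, r^2)."""
    rng = np.random.default_rng(seed)
    particles = x0 + q * rng.standard_normal(N)
    estimates = []
    for yt in y:
        # Propagate particles through the dynamics (sampling the proposal).
        particles = f(particles) + q * rng.standard_normal(N)
        # Importance weights from the Gaussian measurement likelihood.
        w = np.exp(-0.5 * ((yt - h(particles)) / r) ** 2) + 1e-300
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Multinomial resampling to counteract weight degeneracy.
        particles = particles[rng.choice(N, size=N, p=w)]
    return np.array(estimates)

# Usage on a classic toy nonlinear benchmark:
rng = np.random.default_rng(1)
f = lambda x: 0.5 * x + 25 * x / (1 + x ** 2)
h = lambda x: x ** 2 / 20
T, q, r = 50, 1.0, 1.0
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = f(x[t - 1]) + q * rng.standard_normal()
    y[t] = h(x[t]) + r * rng.standard_normal()
xhat = bootstrap_particle_filter(y, f, h, q, r, x0=0.0)
print(np.mean((x - xhat) ** 2))  # rough tracking error

The three steps inside the loop (propagate, weight, resample) are exactly the importance-sampling derivation the tutorial builds up to; everything else is scaffolding.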

Bio: Thomas B. Schön is an Associate Professor with the Division of Automatic Control at Linköping University (Linköping, Sweden). He received the BSc degree in Business Administration and Economics in Jan. 2001, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001 and the PhD degree in Automatic Control in Feb. 2006, all from Linköping University. He has held visiting positions with the University of Cambridge (UK) and the University of Newcastle (Australia). He is a Senior Member of the IEEE. He received the best teacher award at the Institute of Technology, Linköping University in 2009. Schön's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods. He is active within the fields of machine learning, signal processing and automatic control. He pursues both basic research and applied research, where the latter is typically carried out in collaboration with industry.


Platform Design and Compositional Analysis of Real-Time Embedded Systems

Feb 26, 2013, 4.10-5pm, Linh Thi Xuan Phan, University of Pennsylvania, USA.

Abstract: Real-time embedded systems are becoming increasingly complex; a modern car now contains up to 100 microprocessors that run from 2000 to 3000 software functions, and the next generation of these systems is even more complex. Despite active research on real-time analysis over the past decades, ensuring timing guarantees of these systems in the presence of resource constraints remains a challenge.

In this talk, I will present two of our recent results in this area, which are inspired by techniques in systems and computer networks: 1) I will show how to use traffic shaping to improve the schedulability and the resource interfaces of real-time components with compositional scheduling; and 2) I will present a control-platform co-design method that uses buffer control mechanisms to optimize resources while guaranteeing control stability. If time permits, I will also discuss some ongoing projects, e.g., on multi-mode mixed-criticality real-time systems and cloud computing.

Bio: Linh Thi Xuan Phan is a Research Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania. She received her B.S. degree in 2003 and Ph.D. degree in 2009 in Computer Science from the National University of Singapore (NUS). Her research interests center on developing theoretical methods and tools for designing and analyzing real-time embedded systems, cyber-physical systems, and cloud computing systems. She was a recipient of the Graduate Research Excellence Award from NUS in 2009 for her dissertation.


Tomorrow is Yesterday (or nearly so): Historical Lessons that Foretell the Future of Mobile and (Connected) Embedded Systems

March 5, 2013, 4.10-5pm, Bob Iannucci, Carnegie Mellon University, Silicon Valley Campus, USA.

[Slides]

Abstract: The evolution of mobile computing strongly resembles the evolution of the three generations of computing that preceded it, but with a few crucial differences. As in past generations, value shifts between hardware, software, and services in fairly predictable ways. Unlike past generations, mobile computing is encumbered both by the complexity of distributed computing and by the hard limits imposed by physics and human physiology. Both mobile and connected embedded computing face issues related to power and scale.

This talk examines the evolution of mobile computing, probes some of the difficult problems at its heart, and identifies a handful of challenges that could drive new research; we will explore some of these in detail. We also consider connected embedded computing and its evolution toward a true platform.

Bio: Bob Iannucci is Director of the CyLab Mobility Research Center and is a Distinguished Service Professor at Carnegie Mellon University. He received his Ph.D. in Electrical Engineering and Computer Science from MIT in 1988. Most recently, he served as Chief Technology Officer of Nokia and Head of Nokia Research Center. Under his leadership, NRC made primary contributions to wireless networking standards; created and promulgated what is now the MIPI UniPro interface for high-speed, in-phone interconnectivity; created and commercialized Bluetooth Low Energy; and, together with UC Berkeley, created TrafficWorks (using mobile phones to crowd-source traffic patterns). He remains active as a hands-on systems builder. His most recent iPhone app for radio direction finding is in use in over 70 countries.


Pure Storage Flash Array: An Embedded System on Steroids

March 12, 2013, 4.10-5pm, Marco Sanvido, Pure Storage, USA.

Abstract: The storage industry is currently in the midst of a flash revolution. Today's smartphones, cameras, USB drives, and even laptop computers all use flash storage. However, the $30 billion a year enterprise storage market is still dominated by spinning disk. Flash has large advantages in speed and power consumption, but its disadvantages (price, limited overwrites, large erase block size) have prevented it from being a drop-in replacement for disk in a storage array. The question facing the industry is how to build a competitive hardware-software solution that turns flash into performant, reliable storage scaling to hundreds of TBs and beyond.

In this talk, we describe the design and technical challenges of the Pure FlashArray, an enterprise storage array built from the ground up around consumer flash storage. The array and its software, Purity, play to the advantages of flash while minimizing the downsides. Purity performs all writes to flash in multiples of the erase block size, and it keeps metadata in a key-value store that persists approximate answers, further reducing writes at the cost of extra (cheap) reads. Purity also reduces the data stored on flash through a range of techniques, including compression, deduplication, and thin provisioning. The net result is a flash array that delivers a sustained read-write workload of over 100,000 4 KB I/O requests per second while maintaining uniform sub-millisecond latency. With many customers seeing 4x or greater data reduction, the Pure FlashArray ends up being cheaper than disk as well.
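Two of the ideas above — writing flash only in erase-block-sized segments, and deduplicating by content fingerprint — can be sketched in a few lines. The toy below is our illustration of the general log-structured pattern, with invented sizes; it is not Purity's actual design:

import hashlib

ERASE_BLOCK = 4 * 1024 * 1024    # hypothetical erase-block size (4 MB)
IO_SIZE = 4 * 1024               # 4 KB logical writes

class ToyFlashLog:
    """Buffer incoming 4 KB writes and flush them to 'flash' only in
    erase-block-sized segments; deduplicate identical content by hash."""

    def __init__(self):
        self.store = {}    # fingerprint -> data (stands in for flash)
        self.map = {}      # logical address -> fingerprint (metadata)
        self.buffer = []   # fingerprints of unique data awaiting flush

    def write(self, address, data):
        assert len(data) == IO_SIZE
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.store:           # dedup: store unique content once
            self.store[fp] = data
            self.buffer.append(fp)
        self.map[address] = fp             # small metadata update either way
        if len(self.buffer) * IO_SIZE >= ERASE_BLOCK:
            self.flush()

    def flush(self):
        # A real array would write the buffered data as one erase-block-
        # aligned segment; here we just report it.
        print(f"flushing {len(self.buffer) * IO_SIZE} bytes as one segment")
        self.buffer.clear()

    def read(self, address):
        return self.store[self.map[address]]

log = ToyFlashLog()
for addr in range(1024):     # 4 MB of unique data triggers one flush
    log.write(addr, addr.to_bytes(4, "big") * (IO_SIZE // 4))
for addr in range(1024):     # identical data is deduped: no new flash writes
    log.write(addr + 1024, bytes(IO_SIZE))

The write path never issues a partial erase block, which is what sidesteps flash's limited-overwrite and large-erase-block disadvantages mentioned earlier.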

Bio: Dr. Marco Sanvido holds a Dipl.-Ing. degree (1996) and a Dr.techn. degree (2002) in Computer Science from the Swiss Federal Institute of Technology in Zürich, Switzerland (ETHZ). He was a co-founder of weControl, an ETHZ spin-off, where he developed low-power, real-time embedded systems for autonomous flying vehicles. He was a postdoctoral researcher in Computer Science at the University of California at Berkeley from 2002 to 2004, and thereafter he worked on virtualization at VMware. In 2005 he became a researcher at Hitachi Global Storage Technologies, where he worked on hard disk drive and solid state drive architectures. In 2010 Marco joined Pure Storage as a Principal Software Engineer.


Compositionality in system design: interfaces everywhere!

March 19, 2013, 4.10-5pm, Stavros Tripakis, UC Berkeley, USA.

[Slides]

Abstract: Compositional methods, which make it possible to assemble smaller components into larger systems both efficiently and correctly, are not simply a desirable feature in system design: they are a must for building large and complex systems. A key ingredient for compositionality is that of an "interface". An interface abstracts a component, exposing relevant information while hiding internal details. Motivated by cyber-physical system applications, in this talk I will first present methods for automatic bottom-up synthesis of interfaces from hierarchical models, which is needed for modular code generation from such models. I will then talk about interface theories, which can be seen as behavioral type theories, and are essential for incremental design (when can a component be replaced by another one without compromising the properties of the entire system?).

Bio: Stavros Tripakis obtained a PhD degree in Computer Science at the Verimag Laboratory in Grenoble, France, in 1998. He was a postdoc at UC Berkeley from 1999 to 2001, a CNRS Research Scientist at Verimag from 2001 to 2006, and a Research Scientist at Cadence Research Labs in Berkeley from 2006 to 2008. He has been an Associate Researcher at UC Berkeley since 2009, working on model-based and component-based design, verification, testing and synthesis. Dr. Tripakis was co-Chair of the 10th ACM & IEEE Conference on Embedded Software (EMSOFT 2010), Secretary/Treasurer of ACM SIGBED in 2009-2011, and is the current Vice-Chair of ACM SIGBED. His h-index, according to Google Scholar, is 35.


Components, Interfaces and Compositions: from SoCs to SOCs

March 26, 2013, 4.10-5pm, Partha Roop, University of Auckland, New Zealand.

[Slides]

Abstract: The widespread advancement of engineering may be attributed to the use of production lines that achieve "compositions of many components" to create complex systems. Examples range from the mass production of computer chips to that of cars, where "reuse of components" is fundamental. Reuse is possible due to the standardization of the "interface" of the different components. However, reuse is impossible when manufacturers create "incompatible" interfaces.

In computer science, components and interfaces have played a fundamental role. As in conventional engineering domains, in computer science we see both success stories and difficulties in reuse. Successes are almost always related to standardized interfaces and associated algorithms for reuse and composition. In this talk, I will focus on two domains where standardized interfaces may be used successfully: System-on-a-Chip (SoC) and Service-Oriented Computing (SOC). While these two domains are diverse and appear quite separate, I will present a unifying solution that fits many problems across both. In presenting this solution, I will elaborate on techniques for component reuse and component composition that are based on formal methods. In particular, I will rely on simulation relations, discrete controller synthesis, and model/module checking to solve classical issues in both domains. I will then present some interesting reuse and composition case studies.

Bio: Partha is a Senior Lecturer and is currently the Program Director of the Computer Systems Engineering program at the University of Auckland, New Zealand. His research interests are in the area of Embedded Systems, Real-Time Systems and static timing analysis.

He is particularly interested in static analysis techniques for validation, safety, and certification. Partha is an Associate Editor of the Elsevier journal Microprocessors and Microsystems: Embedded Hardware Design (MICPRO) and the Springer/EURASIP Journal on Embedded Systems. In 2009, he received the Alexander von Humboldt Fellowship for Experienced Researchers from the Humboldt Foundation, Germany. Partha has been a visiting professor at Iowa State University, INRIA (France), AIST (Japan), and Kiel University (Germany).


Models and Architectures for Heterogeneous System Design

April 16, 2013, 4.10-5pm, Andreas Gerstlauer, University of Texas at Austin, USA.

[Slides]

Abstract: With traditional boundaries between embedded and general-purpose computing blurring, both domains increasingly deal with complex, heterogeneous, software-intensive yet tightly constrained systems. This creates new challenges for modeling, synthesis and component design at the intersection of applications and architectures. For modeling of inherently dynamic behavior, simulations continue to play an important role. We first present an abstract, host-compiled modeling approach that provides a fast and accurate alternative to traditional solutions. By navigating tradeoffs in coarse-grain execution, it makes full-system simulations possible in close to real time with more than 90% accuracy. Such models then provide the basis for automated design space exploration as part of a system compiler that can automatically synthesize parallel application models onto optimized hardware/software platforms. Finally, we will discuss results on co-design of novel domain-specific components for future heterogeneous platforms. We specifically explore fundamental tradeoffs between flexibility and specialization, and between error acceptance and energy, in the design of high-performance yet low-power processors for linear algebra and signal processing applications.

Bio: Andreas Gerstlauer is an Assistant Professor in Electrical and Computer Engineering at The University of Texas at Austin. He received his Ph.D. in Information and Computer Science from the University of California, Irvine (UCI) in 2004. Prior to joining UT Austin in 2008, he was an Assistant Researcher in the Center for Embedded Computer Systems (CECS) at UC Irvine, leading a research group to develop electronic system-level (ESL) design tools. Dr. Gerstlauer is co-author on 3 books and more than 60 conference and journal publications, and his paper on OS modeling was reprinted as one of the most influential contributions in 10 years at DATE. His research interests include system-level design automation, system modeling, design languages and methodologies, and embedded hardware and software synthesis.


Pushback Rate Control: The Design and Field-Testing of an Airport Congestion Control Algorithm

April 23, 2013, 4.10-5pm, Hamsa Balakrishnan, Massachusetts Institute of Technology (MIT), USA.

Abstract: Growing congestion on the airport surface has increased taxi times, fuel consumption, and emissions. In this talk, I will describe how operational data can be used to develop and validate queuing models of airport operations. These models yield new insights on the effect of different factors such as weather, runway usage, and aircraft fleet mix on airport performance. They also help us predict the behavior of an airport under different operating conditions.

I will then show that these queuing models can be used to design Pushback Rate Control, a new airport congestion control technique that regulates the rate at which flights push back from their gates. The algorithm computes optimal pushback rates using approximate dynamic programming; equally important, it is a method that can be implemented in practice because it works in concert with human air traffic controllers. To evaluate the effectiveness and practicality of the algorithm, we conducted field tests with our implementation at Boston's Logan Airport. I will describe the results of these field tests and what we learned in the process.
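A toy discrete-time queue conveys the core intuition (our illustration with invented rates, not the models or policy from the talk): when the runway is the bottleneck, metering pushbacks keeps the engines-running taxiway queue short while losing essentially no throughput, since held flights simply wait at the gate.

import random

def simulate(pushback_rate, service_rate=0.7, T=1000, seed=0):
    """Toy departure queue: flights become ready to push at a mean rate of
    1 per step; the runway serves service_rate per step. Returns the
    average taxiway queue length and the total number of departures."""
    rng = random.Random(seed)
    gate_hold, queue, q_sum, departed = 0.0, 0.0, 0.0, 0.0
    for _ in range(T):
        gate_hold += 2 * rng.random()            # newly ready flights
        pushed = min(gate_hold, pushback_rate)   # metered pushbacks
        gate_hold -= pushed
        queue += pushed
        served = min(queue, service_rate)        # runway departures
        queue -= served
        departed += served
        q_sum += queue
    return q_sum / T, departed

q, d = simulate(pushback_rate=float("inf"))      # push back immediately
print(f"uncontrolled: avg taxiway queue {q:.1f}, departures {d:.0f}")
q, d = simulate(pushback_rate=0.7)               # meter at the service rate
print(f"metered:      avg taxiway queue {q:.1f}, departures {d:.0f}")

Both runs achieve essentially the runway-limited departure count, but the metered run keeps the taxiway queue near zero; the real algorithm chooses the rate adaptively rather than fixing it.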

Bio: Hamsa Balakrishnan is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT). She received a B.Tech in Aerospace Engineering from the Indian Institute of Technology, Madras and a PhD in Aeronautics and Astronautics from Stanford University. Her research interests address various aspects of air transportation systems, including algorithms for air traffic scheduling and routing, integrating weather forecasts into air traffic management and minimizing aviation-related emissions; air traffic surveillance algorithms; and mechanisms for the allocation of airport and airspace resources. She was a recipient of the NSF CAREER Award in 2008, the Kevin Corker Award for Best Paper of ATM-2011, the AIAA Lawrence Sperry Award in 2012, and the inaugural CNA Award for Operational Analysis in 2012.


Going beyond the FPGA with Spacetime

April 30, 2013, 4.10-5pm, Steve Teig, Tabula, USA.

Abstract: The idea of dynamically reconfiguring programmable devices fascinated Turing in the 1930s. In the early '90s, DeHon pioneered dynamic reconfiguration within FPGAs, but neither his nor numerous subsequent efforts, both academic and industrial, resulted in a useful and usable product. Over the last several years, we have significantly advanced the hardware, architecture, and software for rapidly reconfiguring programmable logic: going beyond the FPGA using a body of technology we call "Spacetime". Spacetime represents two spatial dimensions and one time dimension as a unified 3D framework: a powerful simplification that has enabled us to deliver in production a new category of programmable devices ("3PLDs") that are far denser, faster, and more capable than FPGAs yet still accompanied by software that automatically maps traditional RTL onto these exotic fabrics. In developing Spacetime, we encountered and resolved many complex technical challenges that any dense and high-performance reconfigurable device must face, many of which seem never even to have been identified, much less addressed, by any prior effort. In this talk, I will identify some key limitations of FPGAs, introduce Spacetime as a means of addressing them, enumerate some of the many challenges we faced, and present solutions to a couple of them.

Bio: Steve Teig is the President and Chief Technology Officer of Tabula and the inventor of Tabula's Spacetime 3-Dimensional Programmable Logic Architecture. In 2012, Tabula was recognized with an Edison Award, an American Business Gold Award, inclusion in MIT's "TR50" list of the 50 most innovative companies worldwide, and as #3 in the Wall Street Journal's "Next Big Thing" list of the most promising venture-backed startup companies. Prior to founding Tabula, Steve was CTO of Cadence Design Systems, having joined Cadence through its acquisition of Simplex Solutions, where he was also CTO. Prior to Simplex, he was CTO of two successful biotechnology companies: CombiChem and BioCAD. Earlier in his career, he developed foundational logic simulation and place-and-route technologies that continue to have far-reaching influence. Steve received a B.S.E. degree in Electrical Engineering and Computer Science from Princeton University and holds over 250 patents. In 2011, he was awarded the World Technology Award for IT hardware innovation.


Reachability of Hybrid Systems in Space-Time

May 7, 2013, 4.10-5pm, Goran Frehse, Verimag Labs, University of Grenoble, France.

[Slides]

Abstract: Computing the reachable states of a hybrid system is hard, in part because efficient algorithms are available mainly for convex sets, while the states reachable by letting time elapse, so-called flowpipes, are generally nonconvex. When the dynamics are piecewise affine, the nonconvexity takes a particular form: when drawn along a time axis, the flowpipe is convex at any point in time. We exploit this property to lift efficient approximation techniques from convex sets to flowpipes. The approximation consists of a set of piecewise linear scalar functions over continuous time, which define at each time point a polyhedral inner and outer approximation. Reachability operations that are difficult to carry out on nonconvex sets correspond to simple operations on these scalar functions. The resulting reachability algorithm is precise, scalable, and provides good error measurements. We present an implementation in the SpaceEx verification platform and showcase the performance on several examples.
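In symbols, the construction can be paraphrased with support functions (our notation, not the talk's): fix template directions \ell_1, \dots, \ell_m and maintain, for each direction, piecewise linear functions of time that bracket the support of the flowpipe \Omega(t),

    b_i^-(t) \;\le\; \rho_{\Omega(t)}(\ell_i) \;=\; \sup_{x \in \Omega(t)} \ell_i \cdot x \;\le\; b_i^+(t), \qquad i = 1, \dots, m.

At each time t, the polyhedron \{ x \mid \ell_i \cdot x \le b_i^+(t),\ i = 1, \dots, m \} is an outer approximation of \Omega(t), the lower bounds b_i^-(t) witness how tight it is, and operations such as detecting when a guard can be enabled reduce to comparisons on these scalar functions of t.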

Bio: Goran Frehse is an assistant professor of computer science at Verimag Labs, University of Grenoble, France. He holds a Diploma in Electrical Engineering from Karlsruhe Institute of Technology, Germany, and a PhD in Computer Science from Radboud University Nijmegen, The Netherlands. He was a PostDoc at Carnegie Mellon University from 2003 to 2005. Goran Frehse developed PHAVer, a tool for verifying Linear Hybrid Automata, and is leading the ongoing development of SpaceEx, a verification platform for hybrid systems.


Mining Temporal Requirements of an Industrial-Scale Control System

May 14, 2013, 4.10-5pm, Alexandre Donzé, EECS, UC Berkeley.

[Slides]

Abstract: Industrial-scale control systems are increasingly developed in the model-based design paradigm. This typically involves capturing a plant model that describes the dynamical characteristics of the physical processes within the system, and a controller model, which is a block-diagram-based representation of the software used to regulate the plant behavior. In practice, plant models and controller models are highly complex: they can contain nonlinear hybrid dynamics, look-up tables storing pre-computed values, several levels of design hierarchy, design blocks that operate at different frequencies, and so on. Moreover, the biggest challenge is that system requirements are often imprecise, non-modular, evolving, or even simply unknown. As a result, formal techniques have been unable to digest the scale and format of industrial-scale control systems.

On the other hand, the Simulink modeling language -- a popular language for describing such models -- is widely used as a high-fidelity simulation tool in the industry, and is routinely used by control designers to experimentally validate their controller designs. This raises the question: "What can we accomplish if all we have is a very complex Simulink model of a control system?"

In this talk, we give an example of a simulation-guided formal technique that can help characterize temporal properties of the system, or guide the search for design behaviors that do not conform to "good behavior". Specifically, we present a way to algorithmically mine temporal assertions from a Simulink model. The input to our algorithm is a requirement template expressed in Parametric Signal Temporal Logic -- a formalism to express temporal formulas in which concrete signal or time values are replaced by parameters. Our algorithm is an instance of counterexample-guided inductive synthesis: an intermediate candidate requirement is synthesized from simulation traces of the system and then refined using counterexamples to the candidate, obtained with the help of a falsification tool. The algorithm terminates when no counterexample is found. Mining has many usage scenarios: mined requirements can be used to validate future modifications of the model, to enhance understanding of legacy models, and to guide the process of bug-finding through simulations. (Joint work with X. Jin, J. Deshmukh and S. Seshia.)
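A heavily simplified sketch of this counterexample-guided loop follows (our illustration: the real tool mines Parametric STL templates over Simulink simulations with a falsifier such as Breach, whereas the system, the template G(|x| <= c), and the random-search falsifier below are all toys):

import random

rng = random.Random(0)

def simulate(u, T=100):
    """Toy system under test: damped response to a constant input u
    (a stand-in for simulating a complex Simulink model)."""
    x, trace = 0.0, []
    for _ in range(T):
        x = 0.9 * x + u
        trace.append(x)
    return trace

def synthesize(traces):
    """Tightest candidate c for the template G(|x| <= c) given the traces."""
    return max(abs(v) for tr in traces for v in tr)

def falsify(c, trials=200):
    """Toy falsifier: random search for an input whose trace violates
    the candidate requirement G(|x| <= c)."""
    for _ in range(trials):
        tr = simulate(rng.uniform(-1.0, 1.0))
        if max(abs(v) for v in tr) > c:
            return tr                 # counterexample found
    return None

traces = [simulate(0.5)]              # seed with an initial simulation
while True:
    c = synthesize(traces)            # candidate from traces so far
    cex = falsify(c)                  # try to refute the candidate
    if cex is None:
        break                         # no counterexample: requirement mined
    traces.append(cex)                # refine with the counterexample
print(f"mined requirement: G(|x| <= {c:.2f})")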

Bio: Alexandre Donzé is a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at UC Berkeley, working under the supervision of Sanjit Seshia. He holds a PhD in Mathematics and Computer Science from Joseph Fourier University in Grenoble. From 2008 to 2009, he was a postdoc at Carnegie Mellon University, working with Edmund Clarke and Bruce Krogh. The main goal of his research is to develop mathematical and computational tools for the analysis and design of dynamical systems arising from different domains, in particular embedded systems (software interacting with a physical environment), analog and mixed-signal circuits, and biological systems. He developed Breach, a toolbox that is today getting the attention of large groups in the automotive industry such as Toyota and Bosch, and (modestly) contributed to SpaceEx, the current leading model-checking tool for hybrid systems.


Logic of Hybrid Games

May 20, 2013, 4.10-5pm, Andre Platzer, Carnegie Mellon University, USA.

Abstract: Hybrid systems model cyber-physical systems as dynamical systems with interacting discrete transitions and continuous evolutions along differential equations. They arise frequently in many application domains, including aviation, automotive, railway, and robotics. We study hybrid games, i.e. games on hybrid systems combining discrete and continuous dynamics. Unlike hybrid systems, hybrid games allow choices in the system dynamics to be resolved adversarially by different players with different objectives.

This talk describes how logic and formal verification can be lifted to hybrid games. The talk describes a logic for hybrid games called differential game logic dGL. The logic dGL can be used to study the existence of winning strategies for hybrid games. We present a simple sound and complete axiomatization of dGL relative to the fixpoint logic of differential equations. We prove that hybrid games are determined and that their winning regions require higher closure ordinals, and we identify separating axioms, i.e. axioms that distinguish hybrid games from hybrid systems.
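For orientation, the syntax of hybrid games in dGL can be sketched as follows (our summary of the dGL literature, not material from the talk itself): hybrid games extend the regular program constructs of hybrid systems with a dual operator,

    \alpha, \beta \;::=\; x := e \;\mid\; x' = f(x) \,\&\, Q \;\mid\; ?Q \;\mid\; \alpha \cup \beta \;\mid\; \alpha ; \beta \;\mid\; \alpha^{*} \;\mid\; \alpha^{d}.

The formula \langle\alpha\rangle\phi expresses that player Angel has a winning strategy to reach \phi in game \alpha, while [\alpha]\phi expresses that Demon does; the dual \alpha^{d} swaps the players' roles. Determinacy then says that from every state one of the two players has a winning strategy, i.e. \langle\alpha\rangle\phi \lor [\alpha]\lnot\phi is valid.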

Bio: Andre Platzer is an Assistant Professor in the Computer Science Department at Carnegie Mellon University. He is working on logic and verification for cyber-physical systems and led the development of the KeYmaera prover for hybrid systems. Andre Platzer has a Ph.D. from the University of Oldenburg, Germany, received an ACM Doctoral Dissertation Honorable Mention and an NSF CAREER Award, and was named one of the Brilliant 10 young scientists by Popular Science magazine and one of "AI's 10 to Watch" by IEEE Intelligent Systems magazine.


Co-design of Control Algorithm and Embedded Platform for Building HVAC Systems

May 21, 2013, 4.10-5pm, Mehdi Maasoumy, EECS, UC Berkeley.

[Slides]

Abstract: The design of heating, ventilation and air conditioning (HVAC) systems is crucial for reducing energy consumption in buildings. As complex cyber-physical systems, HVAC systems involve three closely related subsystems: the control algorithm, the physical building and environment, and the embedded implementation platform. In the traditional top-down approach, the control algorithm and the embedded platform are in general designed separately, leading to sub-optimal systems. We propose a co-design approach that analyzes the interaction between the control algorithm and the embedded platform through a set of interface variables. We present six control algorithms that take into account the sensing error, and model the relation of control performance and cost versus sensing error. We also capture the relation of embedded platform cost versus sensing error by analyzing data collected from a testbed. Based on these models, we explore the co-design of the control algorithm and the temperature sensing subsystem of the embedded platform to optimize energy cost and monetary cost while satisfying the constraints on user comfort level.
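The shape of the tradeoff being modeled can be illustrated with a toy thermal simulation (our sketch with invented numbers and a simple on/off controller, not one of the six algorithms from the talk): noisier but cheaper temperature sensors force a more conservative switch-on threshold, which preserves comfort at the price of higher energy cost.

import random

def run(sigma, T=5000, setpoint=21.0, band=1.0, seed=0):
    """Toy zone model T' = -a*(T - T_out) + b*u with on/off heating driven
    by a noisy reading. Returns (heater-on time, comfort violations)."""
    rng = random.Random(seed)
    a, b, T_out, dt = 0.1, 2.0, 10.0, 0.1
    temp, energy, violations = 20.5, 0.0, 0
    # Guard band: a noisier sensor forces a higher switch-on threshold
    # so that true temperature stays inside the comfort band.
    threshold = setpoint - band / 2 + 2 * sigma
    for _ in range(T):
        reading = temp + rng.gauss(0.0, sigma)   # sensing error
        u = 1.0 if reading < threshold else 0.0  # on/off heater
        temp += dt * (-a * (temp - T_out) + b * u)
        energy += u * dt
        if abs(temp - setpoint) > band:
            violations += 1
    return energy, violations

for sigma in (0.0, 0.2, 0.4):
    print(f"sensing error {sigma}: energy, violations = {run(sigma)}")

Running this shows energy use growing with the sensing error while comfort violations stay near zero; sensing error is exactly the kind of interface variable through which the control and platform sides of the co-design talk to each other.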

Bio: Mehdi Maasoumy is a PhD candidate in Mechanical Engineering at the University of California, Berkeley. His research interests include modeling and optimal control of linear, nonlinear, and hybrid systems, with applications in energy-efficient buildings, the smart grid, and aircraft electric power systems. He has an M.Sc. degree from the University of California, Berkeley and a B.Sc. degree from Sharif University of Technology, both in Mechanical Engineering. He is the recipient of the Best Paper Award at the 4th ACM/IEEE International Conference on Cyber-Physical Systems and an Outstanding Preliminary Examination Award from UC Berkeley. More information on his work is available at: http://eecs.berkeley.edu/~maasoumy/


Schedulability, Deadlock Freedom, and Performance Analysis of Timed Actors

June 4, 2013, 4.10-5pm, Marjan Sirjani, Reykjavik University, Iceland.

[Slides]

Abstract: Asynchronous event-based systems can be modeled in Timed Rebeca with minimal effort. Network and computational delays, periodic events, and required deadlines can be expressed in the model. The language is actor-based and the syntax is Java-like. Model checking and simulation tools are built on the formal semantics of the language, given as SOS rules. For deadlock-freedom and schedulability analysis, a so-called floating-time transition system is introduced, which reduces the state space by exploiting the isolation of method execution in the model. For performance analysis, a simulation tool is developed that uses McErlang as the backend. Simulation traces are saved in an SQL database for further analysis. Case studies are used to show the applicability of the language and the tools.
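To convey the flavor of such models without reproducing Rebeca syntax, here is a tiny discrete-event sketch in Python of actors exchanging messages with network delays and deadlines (our illustration only; Timed Rebeca's actual constructs, e.g. message servers with after and deadline annotations, are Java-like, as the abstract notes):

import heapq

class Scheduler:
    """Toy event queue for actors with message delays and deadlines."""
    def __init__(self):
        self.now, self.queue, self.seq = 0, [], 0
    def send(self, actor, method, after=0, deadline=None):
        self.seq += 1   # tie-breaker so the heap never compares actors
        heapq.heappush(self.queue,
                       (self.now + after, self.seq, actor, method, deadline))
    def run(self, until=50):
        while self.queue and self.now <= until:
            self.now, _, actor, method, deadline = heapq.heappop(self.queue)
            if deadline is not None and self.now > deadline:
                print(f"t={self.now}: DEADLINE MISSED in {actor.name}")
                continue
            getattr(actor, method)()

class Sensor:
    def __init__(self, sched, ctrl):
        self.name, self.sched, self.ctrl = "sensor", sched, ctrl
    def sample(self):
        # Network delay 3; the sample must be handled within 5 time units.
        self.sched.send(self.ctrl, "handle", after=3,
                        deadline=self.sched.now + 5)
        self.sched.send(self, "sample", after=10)   # next period

class Controller:
    def __init__(self):
        self.name = "controller"
    def handle(self):
        print("controller handled sample")

sched = Scheduler()
sensor = Sensor(sched, Controller())
sched.send(sensor, "sample")
sched.run()

The analyses in the talk work on models in this style: delays and deadlines are part of the semantics, so schedulability and deadlock-freedom become questions about the (floating-time) transition system rather than about a particular implementation.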

Bio: Marjan Sirjani is an Associate Professor at Reykjavik University. Marjan and her research group are pioneers in building model checking tools, compositional verification theories, and state-space reduction techniques for actor-based models. She has been working on analyzing actors since 2001 using the modeling language Rebeca (www.rebeca-lang.org). She has also worked on coordination languages and is a co-author of the paper proposing Constraint Automata as the first compositional formal semantics for the coordination language Reo. Marjan has been a PC member and PC chair of several international conferences, including Coordination, FM, FMICS, ICFEM, and FSEN, and a guest editor for special issues of the journals Science of Computer Programming and Fundamenta Informaticae. Before joining academia as full-time faculty, she was the managing director of Behin System Company for more than ten years, developing software and providing system services. Marjan served as head of the Software Engineering Department of the School of Electrical and Computer Engineering at the University of Tehran before joining the School of Computer Science at Reykjavik University in 2008.


Past Seminars


Hybrid System Verification: Progress & Simulations

Jan 22, 2013, 4.10-5pm, Sayan Mitra, University of Illinois at Urbana-Champaign, USA.

Abstract: Design defects in embedded systems can be elusive and expensive. Verification research aims to provide algorithms that automate the process of finding design bugs and in special cases even prove correctness. In this talk, I will present two recent developments in this area from our research: (1) a framework for verification of progress properties and (2) an approach for proving properties from simulations or dynamic data. The first builds on a connection between termination analysis of programs and control theory. It provides algorithms for automatic stability verification for rectangular hybrid models. Augmented with Lyapunov functions these algorithms can handle nonlinear hybrid models as well. The second approach combines static analysis of the system model with dynamically generated data (e.g. simulations or logs) for bounded verification of nonlinear and distributed systems of medium dimensions.

Bio: Sayan Mitra is an Assistant Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research aims to develop mathematical, algorithmic and software tools for design and analysis of distributed and cyber-physical systems. Sayan graduated from MIT in 2007 and spent one year as a post-doctoral researcher at the Center for Mathematics of Information of CalTech. His research group has received several awards; most recently, the National Science Foundation's CAREER award in 2011, AFOSR Young Investigator Research Program Award in 2012, Best paper award at FORTE/FMOODS 2012, Samsung GRO Award 2012.


Networking Infrastructure and Data Management for Cyber-Physical Systems

Feb 5, 2013, 4.10-5pm, Song Han, University of Texas at Austin, USA.

[Slides]

Abstract: A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators, and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem and interconnect these physically separated subsystems together to achieve secure, reliable and real-time communication is a big challenge.
In this talk, I will first present a TDMA-based low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication and adjustable real-time service support. I will describe the network management techniques for ensuring the reliable routing and real-time services inside the subsystems and data management techniques for maintaining the quality of the sampled data from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. I will also present a light-weight and scalable solution for interconnecting heterogeneous CPS subsystems together through a slim IP adaptation layer. This approach makes the the underlying connectivity technologies transparent to the application developers and thus enables rapid application development and efficient migration among different CPS platforms.

Bio: Song Han is currently a Postdoctoral Research Fellow in the University of Texas at Austin. He holds the B.S. degree in computer science from Nanjing University, the M.Phil. degree in computer science from City University of Hong Kong, and the Ph.D. degree in computer science from the University of Texas at Austin. His research interests include Cyber-Physical Systems, Real-Time and Embedded Sensing and Control Systems, Wireless Networks and Mobile Computing.


Automated and semi-automated memory management in real-time

Feb 19, 2013, 4.10-5pm, Jan Vitek, Purdue University, USA.

[Slides]

Abstract: This talk looks back at techniques for rescuing real-time programmers from the tyranny of object pools and manual memory management. I will present a retrospective look at region allocation and real-time garbage collection for hard real-time systems. I will demonstrate the impact of scheduling techniques on predictability and present experimental results obtained on the Fiji Java virtual machine.

Bio: Vitek is Full Professor and Faculty Scholar in the Computer Science Department at Purdue University. He led the development of the Ovm hard real-time Java virtual machine. He is a founder of Fiji systems. His research interests span from programming language design and implementation to real-time and embedded systems. He is the chair of SIGPLAN.


DREAMS tutorial: The particle filter

Feb 20, 2013, 2.10-4pm, Thomas Schön, Linköping University, Sweden

[Slides]

Abstract: The particle filter provides a solution to the state inference problem in nonlinear dynamical systems. This problem is indeed interesting in its own right, but it also shows up as a sub-problem in many relevant areas, such as for example sensor fusion and nonlinear system identification. The aim of this tutorial is to provide you with sufficient knowledge about the particle filter to allow you to start implementing particle filters on your own.

We will start out by providing a brief introduction to probabilistic modeling of dynamical systems in order to be able to clearly define the nonlinear state inference problem under consideration. The next step is to briefly introduce two basic sampling methods, rejection sampling and importance sampling. The latter is then exploited to derive a first working particle filter. The particle filter can be interpreted as a particular member of a general class of algorithms referred to as sequential Monte Carlo (SMC). This relationship is explored in some detail in order to provide additional understanding.

The particle filtering theory has developed at an increasing rate over the last two decades and it is used more and more in solving various applied problems. During this tutorial I focus on the method and to some extent on the underlying theory. Hence, I will not show any real world examples, I save them for my seminar on Thursday, where I will show how the particle filter has been instrumental in solving various nontrivial localization problems.

Bio: Thomas B. Schön is an Associate Professor with the Division of Automatic Control at Linköping University (Linköping, Sweden). He received the BSc degree in Business Administration and Economics in Jan. 2001, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001 and the PhD degree in Automatic Control in Feb. 2006, all from Linköping University. He has held visiting positions with the University of Cambridge (UK) and the University of Newcastle (Australia). He is a Senior member of the IEEE. He received the best teacher award at the Institute of Technology, Linköping University in 2009. Schön's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods. He is active within the fields of machine learning, signal processing and automatic control. He pursue both basic research and applied research, where the latter is typically carried out in collaboration with industry.


Platform Design and Compositional Analysis of Real-Time Embedded Systems

Feb 26, 2013, 4.10-5pm, Linh Thi Xuan Phan, University of Pennsylvania, USA.

Abstract: Real-time embedded systems are becoming increasingly complex; a modern car now contains up to 100 microprocessors that run from 2000 to 3000 software functions, and the next generation of these systems is even more complex. Despite active research on real-time analysis over the past decades, ensuring timing guarantees of these systems in the presence of resource constraints remains a challenge.

In this talk, I will present two of our recent results in this area, which are inspired by techniques in systems and computer networks: 1) I will show how to use traffic shaping to improve the schedulability and the resource interfaces of real-time components with compositional scheduling; and 2) I will present a control-platform co-design method that uses buffer control mechanisms to optimize resources while guaranteeing control stability. If time permits, I will also discuss some ongoing projects, e.g., on multi-mode mixed-criticality real-time systems and cloud computing.

Bio: Linh Thi Xuan Phan is a Research Assistant Professor in the Department of Computer and Information Science at the University of Pennsylvania. She received her B.S. degree in 2003 and Ph.D. degree in 2009 in Computer Science from the National University of Singapore (NUS). Her research interests center on developing theoretical methods and tools for designing and analyzing real-time embedded systems, cyber-physical systems, and cloud computing systems. She was a recipient of the Graduate Research Excellence Award from NUS in 2009 for her dissertation.


Tomorrow is Yesterday (or nearly so): Historical Lessons that Foretell the Future of Mobile and (Connected) Embedded Systems

March 5, 2013, 4.10-5pm, Bob Iannucci, Carnegie Mellon University, Silicon Valley Campus, USA.

[Slides]

Abstract: The evolution of mobile computing strongly resembles the evolution of the three generations of computing that preceded it, but with a few crucial differences. Like the past generations, value shifts between hardware, software and services in fairly predictable ways. Unlike past generations, mobile computing is encumbered both by the complexity of distributed computing and by the hard limits imposed by physics and human physiology. Both mobile and connected embedded computing face issues related to power and scale.

This talk examines the evolution of mobile computing, probes some of the difficult problems at its heart, and identifies a handful of challenges that could drive new research. We will examine some of these in detail. We also examine connected embedded computing and its evolution toward a true platform.

Bio: Bob Iannucci is Director of the CyLab Mobility Research Center and is a Distinguished Service Professor at Carnegie Mellon University. He received his Ph.D. in Electrical Engineering and Computer Science from MIT in 1988. Most recently, he served as Chief Technology Officer of Nokia and Head of Nokia Research Center. Under his leadership, NRC made primary contributions to wireless networking standards; created and promulgated what is now the MIPI UniPro interface for high-speed, in-phone interconnectivity; created and commercialized Bluetooth Low Energy; and, together with UC Berkeley, created TrafficWorks (using mobile phones to crowd-source traffic patterns). He remains active as a hands-on systems builder. His most recent iPhone app for radio direction finding is in use in over 70 countries.


Pure Storage Flash Array: An Embedded System on Steroids

March 12, 2013, 4.10-5pm, Marco Sanvido, Pure Storage, USA.

Abstract: The storage industry is currently in the midst of a flash revolution. Today's smartphones, cameras, USB drives, and even laptop computers all use flash storage. However, the $30 billion a year enterprise storage market is still dominated by spinning disk. Flash has large advantages in speed and power consumption, but its disadvantages (price, limited overwrites, large erase block size) have prevented it from being a drop-in replacement for disk in a storage array. The question facing the industry is how to build a competitive hardware-software solution that turns flash into performant, reliable storage scaling to hundreds of TBs and beyond.

In this talk, we describe the design and technical challanges of the Pure FlashArray, an enterprise storage array built from the ground up around consumer flash storage. The array and its software, Purity, play to the advantages of flash while minimizing the downsides. Purity performs all writes to flash in multiples of the erase block size, and it keeps metadata in a key-value store that persists approximate answers, further reducing writes at the cost of extra (cheap) reads. Purity also reduces data stored on flash through a range of techniques, including compression, deduplication, and thin provisioning. The net result is a flash array that delivers a sustained read-write workload of over 100,000 4kb I/O requests per second while maintaining uniform sub-millisecond latency. With many customers seeing 4x or greater data reduction, the Pure FlashArray ends up being cheaper than disk as well.

Bio: Dr Marco Sanvido holds a Dipl.-Ing. degree (1996) and a Dr.techn. degree (2002) in Computer Science from the Swiss Federal Institute of Technology in Zürich, Switzerland (ETHZ). He was a co-founder of weControl, an ETHZ spin-off, where he developed a low-power and real-time embedded systems for autonomous flying vehicles. He was a postdoctoral researcher in Computer Science at the University of California at Berkeley from 2002 to 2004, and thereafter he worked on virtualization at VMware. In 2005 he then became a researcher at Hitachi Global Storage Technologies, where he worked on hard disk drive and solid state drive architectures. Since 2010 Marco joined Pure Storage as a Principal Software Engineer.


Compositionality in system design: interfaces everywhere!

March 19, 2013, 4.10-5pm, Stavros Tripakis, UC Berkeley, USA.

[Slides]

Abstract: Compositional methods, which allow to assemble smaller components into larger systems both efficiently and correctly, are not simply a desirable feature in system design: they are a must for building large and complex systems. A key ingredient for compositionality is that of an "interface". An interface abstracts a component, exposing relevant information while hiding internal details. Motivated by cyber-physical system applications, in this talk I will first present methods for automatic bottom-up synthesis of interfaces from hierarchical models, which is needed for modular code generation from such models. I will then talk about interface theories, which can be seen as behavioral type theories, and are essential for incremental design (when can a component be replaced by another one without compromising the properties of the entire system?).

Bio: Stavros Tripakis obtained a PhD degree in Computer Science at the Verimag Laboratory in Grenoble, France, in 1998. He was a postdoc at UC Berkeley from 1999 to 2001, a CNRS Research Scientist at Verimag from 2001 to 2006, and a Research Scientist at Cadence Research Labs in Berkeley from 2006 to 2008. He is an Associate Researcher at UC Berkeley since 2009, working on model-based and component-based design, verification, testing and synthesis. Dr. Tripakis was co-Chair of the 10th ACM & IEEE Conference on Embedded Software (EMSOFT 2010), Secretary/Treasurer of ACM SIGBED in 2009-2011, and is the current Vice-Chair of ACM SIGBED. His h-index, according to Google scholar, is 35.


Components, Interfaces and Compositions: from SoCs to SOCs

March 26, 2013, 4.10-5pm, Partha Roop, University of Auckland, New Zealand.

[Slides]

Abstract: The widespread advancement of engineering may be attributed to the use of production lines that achieve "compositions of many components" to create complex systems. Examples can be seen in the mass production of computer chips to cars, where "reuse of components" is fundamental. Reuse is possible due to the standardization of the "interface" of the different components. However, reuse is impossible when manufacturers create "incompatible" interfaces.

In computer science, components and interfaces have played a fundamental role. Like conventional engineering domains, in computer science also, we see parallels: both success stories and difficulties in reuse. Successes are almost always related to standardized interfaces and associated algorithms for reuse and composition. In this talk, I will focus on two domains where standardized interfaces may be used successfully: System-on-a-Chip (SoC) and Service Oriented Computing (SOC). While these two domains are diverse and appear to be quite separated, I will present a unifying solution that fits many problems across both domains. In presenting this solution, I will elaborate on techniques for component reuse and component composition that are based on formal methods. In particular, I will rely on simulation relations, discrete controller synthesis, and model/module checking to solve classical issues in both these domains. I will then present some interesting reuse and composition case studies.

Bio: Partha is a Senior Lecturer and is currently the Program Director of the Computer Systems Engineering program at the University of Auckland, New Zealand. His research interests are in the area of Embedded Systems, Real-Time Systems and static timing analysis.

He is particularly interested in static analysis techniques for validation, safety and certification. Partha is an Associate Editor of Elsevier Journal on Embedded Hardware design (MICPRO) and Springer/EURASIP Journal on Embedded Systems. In 2009, he received the Alexander von Humboldt fellowship for experienced researchers from the Humboldt foundation, Germany. Partha has been a visiting professor at Iowa State University, INRIA (France), AIST (Japan) and Kiel University (Germany).


Models and Architectures for Heterogeneous System Design

April 16, 2013, 4.10-5pm, Andreas Gerstlauer, University of Texas at Austin, USA.

[Slides]

Abstract: With traditional boundaries between embedded and general-purpose computing blurring, both domains increasingly deal with complex, heterogeneous, software-intensive yet tightly constrained systems. This creates new challenges for modeling, synthesis and component design at the intersection of applications and architectures. For modeling of inherently dynamic behavior, simulations continue to play an important role. We first present an abstract, host-compiled modeling approach that provides a fast and accurate alternative to traditional solutions. By navigating tradeoffs in coarse-grain executions, full-system simulations in close to real time with more than 90% accuracy become possible. Models then provide the basis for automated design space exploration as part of a system compiler that can automatically synthesize parallel application models onto optimized hardware/software platforms. Finally, we will discuss results on co-design of novel domain-specific components for future heterogeneous platforms. We specifically explore fundamental tradeoffs between flexibility and specialization, and between error acceptance and energy in the design of high-performance yet low-power processors for linear algebra and signal processing applications.

Bio: Andreas Gerstlauer is an Assistant Professor in Electrical and Computer Engineering at The University of Texas at Austin. He received his Ph.D. in Information and Computer Science from the University of California, Irvine (UCI) in 2004. Prior to joining UT Austin in 2008, he was an Assistant Researcher in the Center for Embedded Computer Systems (CECS) at UC Irvine, leading a research group to develop electronic system-level (ESL) design tools. Dr. Gerstlauer is co-author on 3 books and more than 60 conference and journal publications, and his paper on OS modeling was reprinted as one of the most influential contributions in 10 years at DATE. His research interests include system-level design automation, system modeling, design languages and methodologies, and embedded hardware and software synthesis.


Pushback Rate Control: The Design and Field-Testing of an Airport Congestion Control Algorithm

April 23, 2013, 4.10-5pm, Hamsa Balakrishnan, Massachusetts Institute of Technology (MIT), USA.

Abstract: Increased congestion on the airport surface has increased taxi times, fuel consumption, and emissions. In this talk, I will describe how operational data can be used to develop and validate queuing models of airport operations. These models yield new insights on the effect of different factors such as weather, runway usage, and aircraft fleet mix on airport performance. They also help us predict the behavior of an airport under different operating conditions.

I will then show that these queuing models can be used to design Pushback Rate Control, a new airport congestion control technique to regulate the rate at which flights push back from their gates. The algorithm computes optimal pushback rates using approximate dynamic programming, but equally important, is a method that can be implemented in practice because it works in concert with human air traffic controllers. To evaluate the effectiveness and practicality of the algorithm, we conducted field tests with our implementation at Boston's Logan Airport. I will describe the results of these field tests and what we learned in the process.

Bio: Hamsa Balakrishnan is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT). She received a B.Tech in Aerospace Engineering from the Indian Institute of Technology, Madras and a PhD in Aeronautics and Astronautics from Stanford University. Her research interests address various aspects of air transportation systems, including algorithms for air traffic scheduling and routing, integrating weather forecasts into air traffic management and minimizing aviation-related emissions; air traffic surveillance algorithms; and mechanisms for the allocation of airport and airspace resources. She was a recipient of the NSF CAREER Award in 2008, the Kevin Corker Award for Best Paper of ATM-2011, the AIAA Lawrence Sperry Award in 2012, and the inaugural CNA Award for Operational Analysis in 2012.


Going beyond the FPGA with Spacetime

April 30, 2013, 4.10-5pm, Steve Teig, Tabula, USA.

Abstract: The idea of dynamically reconfiguring programmable devices fascinated Turing in the 1930s. In the early 90s, DeHon pioneered dynamic reconfiguration within FPGAs, but neither his nor numerous subsequent efforts, both academic and industrial, resulted in a useful and usable product. Over the last several years, we have significantly advanced the hardware, architecture, and software for rapidly reconfiguring, programmable logic: going beyond the FPGA using a body of technology we call "Spacetime". Spacetime represents two spatial dimensions and one time dimension as a unified 3D framework: a powerful simplification that has enabled us to deliver in production a new category of programmable devices ("3PLDs") that are far denser, faster, and more capable than FPGAs yet still accompanied by software that automatically maps traditional RTL onto these exotic fabrics. In developing Spacetime, we encountered and resolved many complex, technical challenges that any dense and high-performance reconfigurable device must face, many of which seem never even to have been identified, much less addressed, by any prior effort. In this talk, I will identify some key limitations of FPGAs, introduce Spacetime as a means of addressing them, enumerate some of the many challenges we faced, and present solutions to a couple of them.

Bio: Steve Teig is the President and Chief Technology Officer of Tabula and the inventor of Tabula's Spacetime 3-Dimensional Programmable Logic Architecture. In 2012, Tabula was recognized with an Edison Award, an American Business Gold Award, inclusion in MIT's "TR50" list of the 50 most innovative companies worldwide, and as #3 in the Wall Street Journal's "Next Big Thing" list of the most promising venture-backed startup companies. Prior to founding Tabula, Steve was CTO of Cadence Design Systems, having joined Cadence through its acquisition of Simplex Solutions, where he was also CTO. Prior to Simplex, he was CTO of two successful biotechnology companies: CombiChem and BioCAD. Earlier in his career, he developed foundational logic simulation and place-and-route technologies that continue to have far-reaching influence. Steve received a B.S.E. degree in Electrical Engineering and Computer Science from Princeton University and holds over 250 patents. In 2011, he was awarded the World Technology Award for IT hardware innovation.


Reachability of Hybrid Systems in Space-Time

May 7, 2013, 4.10-5pm, Goran Frehse, Verimag Labs, University of Grenoble, France.

[Slides]

Abstract: Computing the reachable states of a hybrid system is hard, in part because efficient algorithms are available mainly for convex sets, while the states reachable by time elapse, so-called flowpipes, are generally nonconvex. When the dynamics are piecewise affine, the nonconvexity takes a particular form: when drawn along a time axis, the flowpipe is convex at any point in time. We exploit this property to lift efficient approximation techniques from convex sets to flowpipes. The approximation consists of a set of piecewise linear scalar functions over continuous time, which define at each time point a polyhedral inner- and outer approximation. Reachability operations that are difficult to carry out on nonconvex sets correspond to simple operations on these scalar functions. The resulting reachability algorithm is precise, scalable, and provides good error measurements. We present an implementation in the SpaceEx verification platform and showcase the performance on several examples.
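
The convexity-in-time property can be illustrated on a linear system (a hypothetical two-dimensional example, not the SpaceEx implementation): for a fixed direction d, the maximum of d.x over the states reachable at time t is a scalar function of t, so sampling it yields pointwise polyhedral bounds; the talk's approach additionally makes the piecewise linear interpolation between such samples sound by tracking explicit error bounds.

    # Sketch: outer bounds of a flowpipe as scalar functions of time
    # (illustrative of the idea only; dynamics and sets are assumed).
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, -0.5]])    # assumed damped-oscillator dynamics
    lo = np.array([0.9, -0.1])      # initial box, lower corner
    hi = np.array([1.1,  0.1])      # initial box, upper corner

    def support_box(g):
        # support function of the box [lo, hi] in direction g
        return np.sum(np.where(g > 0, hi * g, lo * g))

    # For x' = Ax, the set reachable at time t is expm(A*t) * X0, so
    #   max over X(t) of d.x = support_X0(expm(A*t)^T d).
    # Sampling over t gives an upper bound on d.x along the flowpipe;
    # the direction -d gives the matching lower bound.
    d = np.array([1.0, 0.0])        # bound the first state variable
    for t in np.linspace(0.0, 2.0, 5):
        Phi = expm(A * t)
        upper = support_box(Phi.T @ d)
        lower = -support_box(Phi.T @ (-d))
        print("t=%.2f  x1 in [%.3f, %.3f]" % (t, lower, upper))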

Bio: Goran Frehse is an assistant professor of computer science at Verimag Labs, University of Grenoble, France. He holds a Diploma in Electrical Engineering from Karlsruhe Institute of Technology, Germany, and a PhD in Computer Science from Radboud University Nijmegen, The Netherlands. He was a postdoc at Carnegie Mellon University from 2003 to 2005. Goran Frehse developed PHAVer, a tool for verifying Linear Hybrid Automata, and is leading the ongoing development of SpaceEx, a verification platform for hybrid systems.


Mining Temporal Requirements of an Industrial-Scale Control System

May 14, 2013, 4.10-5pm, Alexandre Donzé, EECS, UC Berkeley.

[Slides]

Abstract: Industrial-scale control systems are increasingly developed in the model-based design paradigm. This typically involves capturing a plant model that describes the dynamical characteristics of the physical processes within the system, and a controller model, which is a block-diagram-based representation of the software used to regulate the plant behavior. In practice, plant models and controller models are highly complex: they can contain nonlinear hybrid dynamics, look-up tables storing pre-computed values, several levels of design hierarchy, design blocks that operate at different frequencies, and so on. Moreover, the biggest challenge is that system requirements are often imprecise, non-modular, evolving, or even simply unknown. As a result, formal techniques have been unable to digest the scale and format of industrial-scale control systems.

On the other hand, the Simulink modeling language -- a popular language for describing such models -- is widely used in industry for high-fidelity simulation, and control designers routinely rely on it to experimentally validate their controller designs. This raises the question: "What can we accomplish if all we have is a very complex Simulink model of a control system?"

In this talk, we give an example of a simulation-guided formal technique that can help characterize temporal properties of the system, or guide the search for design behaviors that do not conform to "good behavior". Specifically, we present a way to algorithmically mine temporal assertions from a Simulink model. The input to our algorithm is a requirement template expressed in Parametric Signal Temporal Logic -- a formalism to express temporal formulas in which concrete signal or time values are replaced by parameters. Our algorithm is an instance of counterexample-guided inductive synthesis: an intermediate candidate requirement is synthesized from simulation traces of the system, which is refined using counterexamples to the candidate obtained with the help of a falsification tool. The algorithm terminates when no counterexample is found. Mining has many usage scenarios: mined requirements can be used to validate future modifications of the model, they can be used to enhance understanding of legacy models, and can also guide the process of bug-finding through simulations. (Joint work with X. Jin, J. Deshmukh and S. Seshia).
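
A toy version of the mining loop, with a stand-in model and a random-search falsifier in place of a real falsification tool such as Breach (everything below is invented for illustration; the actual algorithm operates on Parametric STL templates over Simulink traces):

    # Toy counterexample-guided requirement mining for the template
    # "always |x| <= c" with unknown parameter c (illustrative only).
    import random

    def simulate(u):
        # Stand-in "model": a decaying response to the scalar input u.
        return [u * (0.8 ** k) for k in range(20)]

    def synthesize(traces):
        # Tightest template instance consistent with the traces seen
        # so far: the smallest c that all of them satisfy.
        return max(abs(x) for tr in traces for x in tr)

    def falsify(c, tries=200):
        # Search for an input whose trace violates "always |x| <= c".
        for _ in range(tries):
            u = random.uniform(-2.0, 2.0)
            if any(abs(x) > c for x in simulate(u)):
                return u
        return None                     # no counterexample found

    random.seed(1)
    traces = [simulate(0.5)]
    while True:
        c = synthesize(traces)          # candidate requirement
        cex = falsify(c)
        if cex is None:
            break                       # candidate survives falsification
        traces.append(simulate(cex))    # refine with the counterexample
    print("mined requirement: always |x| <=", round(c, 3))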

Bio: Alexandre Donzé is a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at UC Berkeley, working under the supervision of Sanjit Seshia. He holds a PhD in Mathematics and Computer Science from Joseph Fourier University in Grenoble. From 2008 to 2009, he was a postdoc at Carnegie Mellon University, working with Edmund Clarke and Bruce Krogh. The main goal of his research is to develop mathematical and computational tools for the analysis and design of dynamical systems arising from different domains, in particular embedded systems (or software interacting with a physical environment), analog and mixed-signal circuits, and biological systems. He developed Breach, a toolbox that is today attracting the attention of large groups in the automotive industry such as Toyota and Bosch, and (modestly) contributed to SpaceEx, the current leading model-checking tool for hybrid systems.


Logic of Hybrid Games

May 20, 2013, 4.10-5pm, Andre Platzer, Carnegie Mellon University, USA.

Abstract: Hybrid systems model cyber-physical systems as dynamical systems with interacting discrete transitions and continuous evolutions along differential equations. They arise frequently in many application domains, including aviation, automotive, railway, and robotics. We study hybrid games, i.e. games on hybrid systems combining discrete and continuous dynamics. Unlike hybrid systems, hybrid games allow choices in the system dynamics to be resolved adversarially by different players with different objectives.

This talk describes how logic and formal verification can be lifted to hybrid games. It introduces differential game logic dGL, a logic for hybrid games that can be used to study the existence of winning strategies. We present a simple sound and complete axiomatization of dGL relative to the fixpoint logic of differential equations. We prove that hybrid games are determined, show that their winning regions can require higher closure ordinals, and identify separating axioms, i.e. axioms that distinguish hybrid games from hybrid systems.
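
As background for readers unfamiliar with the notation (a sketch based on the published dGL work, not a summary of the talk): the formula <α>φ expresses that player Angel has a winning strategy in hybrid game α to reach a state satisfying φ, while [α]φ expresses that Demon has a winning strategy for φ. The determinacy result can then be written as:

    % Determinacy of hybrid games in dGL: from every state, exactly one
    % of the two players has a winning strategy for complementary goals.
    \[ \models\ \neg\langle\alpha\rangle\phi \;\leftrightarrow\; [\alpha]\neg\phi \]

    % A small illustrative game (invented for this note, not from the
    % talk): Angel picks the velocity, then the dual operator ^d hands
    % the choice of the evolution's duration to Demon. Angel wins
    % exactly from states with x != 0, because from x = 0 Demon can
    % simply choose duration zero.
    \[ \langle (v := 1 \cup v := -1);\, \{x' = v\}^{d} \rangle\, x \neq 0 \]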

Bio: Andre Platzer is an Assistant Professor in the Computer Science Department at Carnegie Mellon University. He works on logic and verification for cyber-physical systems and led the development of the KeYmaera prover for hybrid systems. Andre Platzer holds a Ph.D. from the University of Oldenburg, Germany, received an ACM Doctoral Dissertation Honorable Mention and an NSF CAREER award, and was named one of the Brilliant 10 young scientists by Popular Science magazine and one of "AI's 10 to Watch" by IEEE Intelligent Systems magazine.


Co-design of Control Algorithm and Embedded Platform for Building HVAC Systems

May 21, 2013, 4.10-5pm, Mehdi Maasoumy, EECS, UC Berkeley.

[Slides]

Abstract: The design of heating, ventilation and air conditioning (HVAC) systems is crucial for reducing energy consumption in buildings. As complex cyber-physical systems, HVAC systems involve three closely related subsystems: the control algorithm, the physical building and environment, and the embedded implementation platform. In the traditional top-down approach, the control algorithm and the embedded platform are in general designed separately, leading to sub-optimal systems. We propose a co-design approach that analyzes the interaction between the control algorithm and the embedded platform through a set of interface variables. We present six control algorithms that take the sensing error into account, and model the relation of control performance and cost versus sensing error. We also capture the relation of embedded platform cost versus sensing error by analyzing data collected from a testbed. Based on these models, we explore the co-design of the control algorithm and the temperature-sensing subsystem of the embedded platform to optimize energy cost and monetary cost while satisfying the constraints on user comfort.
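
The flavor of the exploration can be sketched as a sweep over the interface variable, here the sensing error (all cost models and numbers below are placeholders; the talk derives them from control analysis and testbed measurements):

    # Sketch of the co-design exploration: pick the sensing-error level
    # that minimizes total cost subject to comfort (illustrative only).

    # Candidate sensing-error levels (deg C) offered by cheaper or more
    # expensive temperature-sensing subsystems (assumed).
    candidates = [0.1, 0.25, 0.5, 1.0, 2.0]

    def energy_cost(eps):
        # Assumed: a controller facing error eps must keep a safety
        # margin, so conditioning energy cost grows with eps.
        return 100.0 + 40.0 * eps

    def platform_cost(eps):
        # Assumed: tighter sensing accuracy means a costlier platform.
        return 50.0 / eps

    def comfort_ok(eps):
        return eps <= 1.0               # assumed comfort constraint

    best = min((e for e in candidates if comfort_ok(e)),
               key=lambda e: energy_cost(e) + platform_cost(e))
    print("chosen sensing error:", best, "degC, total cost:",
          energy_cost(best) + platform_cost(best))

The point of the sketch is that neither the most accurate sensor nor the cheapest one minimizes the combined energy and monetary cost, which is why the two subsystems are designed together.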

Bio: Mehdi Maasoumy is a PhD candidate in Mechanical Engineering at the University of California, Berkeley. His research interests include modeling and optimal control of linear, nonlinear, and hybrid systems with applications in energy-efficient buildings, the smart grid, and aircraft electric power systems. He has an M.Sc. degree from the University of California, Berkeley and a B.Sc. degree from Sharif University of Technology, both in Mechanical Engineering. He is the recipient of the Best Paper Award at the 4th ACM/IEEE International Conference on Cyber-Physical Systems and of the Outstanding Preliminary Examination Award from UC Berkeley. More information on his work is available at: http://eecs.berkeley.edu/~maasoumy/


Schedulability, Deadlock Freedom, and Performance Analysis of Timed Actors

June 4, 2013, 4.10-5pm, Marjan Sirjani, Reykjavik University, Iceland.

[Slides]

Abstract: Asynchronous event-based systems can be modeled in Timed Rebeca with little effort. Network and computational delays, periodic events, and required deadlines can be expressed in the model. The language is actor-based and the syntax is Java-like. Model checking and simulation tools are built on the formal semantics of the language, given as SOS rules. For deadlock-freedom and schedulability analysis, a so-called floating-time transition system is introduced, which reduces the state space by exploiting the isolation of method execution in the model. For performance analysis, a simulation tool has been developed that uses McErlang as its backend. Simulation traces are saved in an SQL database for further analysis. Case studies show the applicability of the language and the tools.
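
To convey the modeling concepts in executable form, here is a minimal discrete-event sketch of timed actors with network delays, periodic events, and deadlines (written in Python rather than Rebeca syntax; the actor names and timing values are invented):

    # Minimal sketch of timed actors with delays and deadlines
    # (illustrative of Timed Rebeca concepts, not Rebeca itself).
    import heapq, itertools

    events = []          # (arrival_time, seq, deadline, actor, msg)
    seq = itertools.count()

    def send(now, actor, msg, delay=0.0, deadline=float("inf")):
        heapq.heappush(events,
                       (now + delay, next(seq), now + deadline, actor, msg))

    def sensor(now, msg):
        # Sample and ship to the controller over the network.
        send(now, controller, "data", delay=2.0, deadline=10.0)
        if now < 50:
            send(now, sensor, "tick", delay=5.0)   # periodic, period 5

    def controller(now, msg):
        pass                            # control computation would go here

    send(0.0, sensor, "tick")
    missed = False
    while events:
        now, _, deadline, actor, msg = heapq.heappop(events)
        if now > deadline:
            print("deadline missed at time", now)  # schedulability violation
            missed = True
            break
        actor(now, msg)
    if not missed:
        print("all messages met their deadlines")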

Bio: Marjan Sirjani is an Associate Professor at Reykjavik University. Marjan and her research group are pioneers in building model checking tools, compositional verification theories, and state-space reduction techniques for actor-based models. She has been working on analyzing actors since 2001 using the modeling language Rebeca (www.rebeca-lang.org). She has also worked on coordination languages and is a co-author of the paper proposing Constraint Automata as the first compositional formal semantics for the coordination language Reo. Marjan has been a PC member and PC chair of several international conferences, including Coordination, FM, FMICS, ICFEM and FSEN, and a guest editor for special issues of the journals Science of Computer Programming and Fundamenta Informaticae. Before joining academia as full-time faculty, she was the managing director of Behin System Company for more than ten years, developing software and providing system services. Marjan served as head of the Software Engineering Department of the School of Electrical and Computer Engineering at the University of Tehran before joining the School of Computer Science at Reykjavik University in 2008.
