
Fall 2013 Seminars
Armin Wasicek, 13 Jan 2014
Last updated: 13 Jan 2014

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Fall 2013

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays from 4.10-5.00 p.m. in 521 Cory Hall (Hogan Room).

Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which in turn includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact Armin Wasicek.

Seminars from previous semesters can be found here.

NOTE: The link to the survey is here. Results will be posted after Dec 17. Thanks for participating!

Schedule

Dan Holcomb August 27, 2013
Chung-Wei Lin September 10, 2013
Christos Stergiou September 16, 2013
Armin Wasicek September 24, 2013
Alvaro Cardenas October 04, 2013
John Rushby October 08, 2013
Valerie Issarny October 15, 2013
Michael Tiller October 22, 2013
David Kleidermacher October 29, 2013
Wenchao Li November 12, 2013
Jorge Ortiz November 19, 2013
Alberto Puggelli November 26, 2013
Philipp Hennig December 03, 2013
Mehdi Maasoumy December 10, 2013

Formal Verification and Synthesis for Quality-of-Service in On-Chip Networks

Aug 27, 2013, 4.10-5pm, Dan Holcomb, UC Berkeley.

Slides

Abstract

Quality-of-Service (QoS) in on-chip communication networks has a tremendous impact on overall system performance in today's era of ever-increasing core counts. Yet, Networks-on-Chip (NoCs) are still designed and analyzed using RTL simulation or highly abstracted analytical models. Formal methods, used successfully for many other verification tasks, have traditionally not found use in QoS verification due to the scale of the problems of interest, such as verifying latency bounds of hundreds of cycles for large on-chip networks.

In this dissertation talk, I will present recent work toward leveraging formal methods for NoC synthesis and QoS verification. In particular, we address the problems of (1) optimal buffer sizing using a counterexample-guided approach, (2) using abstraction to verify end-to-end latency bounds in a mesh network, and (3) scalable latency verification using compositional inductive proofs.

Bio:

Dan Holcomb is a PhD candidate at UC Berkeley, advised by Professor Sanjit Seshia. His dissertation work has been developed with input received during two internships at Intel Strategic CAD Lab. Dan holds BS and MS degrees from UMass Amherst.


Cyber-Security for Controller Area Network and its Security-Aware Mapping

Sep 10, 2013, 4.10-5pm, Chung-Wei Lin, UC Berkeley.

Slides

Abstract

Cyber-security is a rising issue for automotive electronic systems, and it is critical to system safety and dependability. Current in-vehicle architectures, such as those based on the Controller Area Network (CAN), do not provide direct support for secure communications. In this work, we propose a security mechanism to help prevent cyber-attacks (masquerade and replay) in vehicles with architectures based on CAN. When retrofitting those architectures with security mechanisms, a major challenge is to ensure that system safety is not hindered, given the limited computation and communication resources. We therefore propose an optimal Mixed Integer Linear Programming (MILP) formulation for solving the mapping problem to meet both the security and the safety requirements. To the best of our knowledge, this is the first work to address security and safety in an integrated formulation in the design automation of automotive electronic systems. Experimental results of an industrial case study show the effectiveness of our approach.
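
To make the flavor of such a mechanism concrete, here is a minimal sketch of counter-based message authentication against masquerade and replay. The pre-shared key, counter width, and 4-byte MAC truncation are assumptions for illustration, not the scheme proposed in the talk:

```python
# Illustrative counter-plus-MAC authentication for CAN frames.
# The pre-shared key, counter width, and 4-byte truncation are
# assumptions of this sketch, not the mechanism from the talk.
import hmac
import hashlib
import struct

KEY = b"hypothetical-pairwise-key"
MAC_LEN = 4  # truncated to fit CAN's small payloads

def tag_frame(can_id: int, payload: bytes, counter: int) -> bytes:
    """MAC over (id, counter, payload); the counter defeats replay."""
    msg = struct.pack(">IQ", can_id, counter) + payload
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:MAC_LEN]

def accept_frame(can_id, payload, counter, tag, last_counter) -> bool:
    """Reject stale counters (replay) and bad MACs (masquerade)."""
    if counter <= last_counter:
        return False
    return hmac.compare_digest(tag_frame(can_id, payload, counter), tag)

tag = tag_frame(0x123, b"\x01\x02", counter=7)
assert accept_frame(0x123, b"\x01\x02", 7, tag, last_counter=6)      # fresh
assert not accept_frame(0x123, b"\x01\x02", 7, tag, last_counter=7)  # replayed
```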

Bio:

Chung-Wei Lin received the B.S. degree in computer science and the M.S. degree in electronics engineering from National Taiwan University. He is currently a fifth-year Ph.D. student in the EECS Department at UC Berkeley, in the group of Professor Alberto Sangiovanni-Vincentelli. His research interests include embedded systems and computer-aided design. He was a summer intern at General Motors in 2011, 2012, and 2013.


Schedulability and Verification of Real-Time Discrete-Event Systems

Sep 16, 2013, 10-11am, Christos Stergiou, UC Berkeley.

Slides

Abstract

Cyber-physical systems are systems where there is a tight interaction between the computing world and the physical world. In spite of the significance of time in the dynamics of the physical world, real-time embedded software today is commonly built using programming abstractions with little or no temporal semantics. PTIDES (Programming Temporally Integrated Distributed Embedded Systems) is a programming model whose goal is to address this problem. It proposes a model-based design approach for the programming of distributed real-time embedded systems, in which the timing of real-time operations is specified as part of the model. This is accomplished by using as an underlying formalism a discrete-event (DE) model of computation, which is extended with real-time semantics.

We address the schedulability question for PTIDES programs and uniprocessor platforms. A PTIDES program is schedulable if for all legal sensor inputs, there exists a scheduling of the PTIDES components that meets all the specified deadlines. The timing specifications allowed in the discrete-event formalism can be seen as a generalization of end-to-end latencies which are usually studied in the hard real-time computing literature. This results in a rather idiosyncratic schedulability problem. We first show that for a large subset of discrete-event models, the earliest-deadline-first scheduling policy is optimal. Second, we show that all but a finite part of the infinite state space of a PTIDES execution results in demonstrably unschedulable behaviors. As a result the schedulability problem can be reduced to a finite-state reachability problem. We describe this reduction using timed automata.
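
For intuition on the scheduling side, the toy simulator below checks whether a concrete, finite set of jobs meets its deadlines under preemptive EDF on one processor. The job sets are illustrative; the talk's contribution is deciding schedulability over all legal sensor inputs by reduction to timed-automata reachability, not by simulation:

```python
# Toy preemptive EDF simulation over a fixed job set of
# (release, deadline, wcet) triples on a uniprocessor.
import heapq

def edf_meets_deadlines(jobs):
    jobs = sorted(jobs)                      # by release time
    ready, t, i = [], 0, 0
    while i < len(jobs) or ready:
        if not ready:
            t = max(t, jobs[i][0])           # idle until next release
        while i < len(jobs) and jobs[i][0] <= t:
            _, d, c = jobs[i]
            heapq.heappush(ready, (d, c))    # earliest deadline first
            i += 1
        d, c = heapq.heappop(ready)
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(c, horizon - t)            # run until done or preempted
        t += run
        if c > run:
            heapq.heappush(ready, (d, c - run))
        elif t > d:
            return False                     # deadline miss at completion
    return True

assert edf_meets_deadlines([(0, 4, 2), (1, 5, 2)])       # schedulable
assert not edf_meets_deadlines([(0, 2, 2), (0, 3, 2)])   # overload
```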

We next turn to the verification problem for DE systems themselves. We study a basic deterministic DE model, where actors are simple constant-delay components, and two extensions of it: first, one where actors are not constant but introduce non-deterministic delays, and second, one where actors are either deterministic delays or are specified using timed automata. We investigate verification questions on DE models and examine expressiveness relationships between the DE models and timed automata.

Finally, we discuss extensions of this work to cover a wider range of discrete-event actors, and propose an approach to design more efficient and practical analyses for schedulability testing.

Bio:

Christos Stergiou is a computer science PhD candidate at UC Berkeley, advised by Edward A. Lee and Koushik Sen. His research interests are in verification and synthesis of distributed and real-time systems. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens.


The ACROSS MPSoC: A New Generation of Multi-Core Processors Designed for Safety-Critical Embedded Systems

Sep 24, 2013, 4.10-5pm, Armin Wasicek, UC Berkeley, Vienna University of Technology, Austria.

Slides

Abstract

The ACROSS MPSoC represents the latest milestone in a line of research aimed at developing a dependable multi-core processor platform that can host mixed-criticality applications with hard real-time constraints. The key technical innovation is a time-triggered Network-on-a-Chip (TTNoC), which provides temporal determinism as well as temporal and spatial partitioning. The ACROSS MPSoC supports applications up to the highest criticality classes, and facilitates efficient mixed-criticality integration and certification. In this talk, we will present the guiding design principles of the TTNoC that are inherited from the Time-Triggered Architecture (TTA). Next, we will show how these core concepts solve some of the issues that current multi-core architectures suffer from in the real-time domain.

Bio:

Armin Wasicek is a postdoctoral researcher in the EECS Department at UC Berkeley. His research interests include real-time systems, model-based design, the Time-Triggered Architecture, and security. Armin received his doctoral degree in 2012 and his master's degree in 2006, both from Vienna University of Technology. His dissertation on "Security in Time-Triggered Systems" was supervised by Prof. Hermann Kopetz. In 2010, Armin received a prize for "Innovation in Teaching" from TU Vienna for the lab course "Embedded Systems Engineering". In 2012, he was awarded a Marie Curie IOF Fellowship.

From 2010 to 2012, Armin worked in the ACROSS project as work package leader for WP2 "Middleware and System Components". Previously, he was a member of the RTCA SC-216 committee on aeronautical system security, and he has contributed to many national and international research projects, e.g., GENESYS, DECOS, and TTSoC.


Short and Long-Term Research Challenges for Protecting Critical Infrastructure Systems

Oct 04, 2013, 2-3pm, Alvaro Cardenas, University of Texas at Dallas.

Slides

ROOM: 540 Cory Hall

Round Table

We have several lines of research for securing our critical infrastructures. One line of research tries to expand traditional security mechanisms with control-theoretic algorithms to prevent, or detect and respond to, attacks targeting the manipulation of sensors and actuators of control systems. Another line of research explores the use of Big Data analytics for anomaly detection in smart grid networks. We are also working on risk assessment and incentives for asset owners to invest in protecting the smart grid and other critical infrastructures. Finally, big data in the smart grid also raises the issue of consumer privacy, as consumer behavior will be available at unprecedented levels of granularity. We are working on privacy-preserving data collection mechanisms.

Bio:

Alvaro A. Cardenas is an Assistant Professor at the University of Texas at Dallas. He holds M.S. and Ph.D. degrees from the University of Maryland, College Park, where he was advised by John Baras, and a B.S. from Universidad de los Andes.

Prior to joining UT Dallas, he was a research staff member at Fujitsu Laboratories of America in Sunnyvale, CA, where he did research in trusted computing, Big Data, and smart grids. Before Fujitsu, he was a postdoctoral scholar at the University of California at Berkeley, advised by Shankar Sastry and Doug Tygar, doing research in cyber-physical systems security and applications of machine learning to intrusion detection.

He has also been an invited visiting professor at the University of Cagliari in Italy, where he taught a class on machine learning and game theory for security; an intern working on discriminatory Bayesian networks at INRIA-LORIA in France; and a SCADA intern working on ladder logic to replace old relay boxes at Occidental Petroleum Corporation in Cano Limon, Cobenas, Colombia.

He has received numerous awards for his research, including a best paper award from the U.S. Army Research Office, a best presentation award from the IEEE, a graduate school fellowship from the University of Maryland, and a Distinguished Assistantship from the Institute for Systems Research.


The Challenge of High-Assurance Software

Oct 08, 2013, 4.10-5pm, John Rushby, SRI International, Computer Science Laboratory.

Slides

Abstract

It is difficult to build complex systems that (almost) never go (badly) wrong, yet this is what we expect of airplanes and pacemakers and the phone system. In essence, we have to anticipate everything that could fail or go wrong, develop countermeasures, and then provide compelling evidence that we have done all this correctly.

I will outline some of the intellectual challenges in the construction of suitable evidence, particularly as applied to software. I will introduce the idea of "possibly perfect" software and its associated "probability of perfection" and describe how this relates to correctness and reliability. I will then show how these ideas can be used to provide a credible basis for certification.
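
To give a back-of-the-envelope feel for how a probability of perfection relates to reliability (my illustrative numbers and a simplified two-case model, not figures from the talk): if the software is perfect it never fails, and if it is imperfect it fails independently on each demand:

```python
# Illustrative arithmetic for "possibly perfect" software. The numbers
# and the two-case model are assumptions of this sketch.
pnp = 1e-3    # assessed probability that the software is imperfect
pfd = 1e-4    # probability of failure per demand, given imperfection
n = 10_000    # demands in the mission

p_fail_demand = pnp * pfd                         # failure on one demand
p_survive = (1 - pnp) + pnp * (1 - pfd) ** n      # no failure in n demands
print(p_fail_demand)          # 1e-07
print(round(p_survive, 6))    # 0.999368
```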

I will also describe epistemic and logic uncertainties in high-assurance software and speculate on the relation between these and the notion of resilience.

Much of this material is based on joint work with Bev Littlewood and others at City University London. Some of it is described in a paper available here.

Bio:

Dr. John Rushby is a Program Director and SRI Fellow with the Computer Science Laboratory of SRI International in Menlo Park, California, where he leads its research program in formal methods and dependable systems.

He joined SRI in 1983 and served as director of its Computer Science Laboratory from 1986 to 1990. Prior to that, he held academic positions at the Universities of Manchester and Newcastle upon Tyne in England. He received BSc and PhD degrees in computing science from the University of Newcastle upon Tyne in 1971 and 1977, respectively.

His research interests center on the use of formal methods for problems in the design and assurance of safe, secure, and dependable systems.

John Rushby received the IEEE Harlan D Mills Award in 2011 "for practical and fundamental contributions to Software & Hardware Reliability with seminal contributions to computer security, fault tolerance, and formal methods" and, together with Sam Owre and N. Shankar, the CAV Award in 2012 "for developing PVS (Prototype Verification System) which, due to its early emphasis on integrated decision procedures and user friendliness, significantly accelerated the application of proof assistants to real-world verification problems."


Emergent Middleware Facing the Interoperability Challenge

Oct 15, 2013, 4.10-5pm, Valerie Issarny, INRIA Paris - Rocquencourt, France.

Slides

Abstract

Interoperability is the ability of two systems to exchange, understand, and use each other's data, and is a long-standing problem in the field of distributed systems. However, the emergence of pervasive computing and the Internet of Things has brought about new challenges to achieving universal interoperability. Extreme heterogeneity and spontaneous interactions are characteristics of today's complex distributed systems.

To face the above challenges, we have introduced the notion of Emergent Middleware as part of the European CONNECT project. Emergent middleware ensures interoperation between two networked systems by combining message interoperability, i.e., the ability to interpret messages from/toward networked systems, and behavioral interoperability, i.e., the ability to mediate the interaction protocols run by the communicating networked systems, under specified non-functional properties, e.g., reliability, performance, and security. On-the-fly production of emergent middleware then relies on the following key enablers within the network: discovery of networked systems, learning of interaction protocols, and synthesis of protocol mediators.

In this talk, after an overview of the Emergent Middleware principles, I will detail the proposed solution to automated mediator synthesis, which is based on ontology reasoning and constraint programming in order to infer mappings between components' interfaces automatically. These mappings guarantee semantic compatibility between the operations and data of the interfaces. We then analyse the behaviors of components in order to synthesise, if possible, a mediator that coordinates the computed mappings so as to make the components interact properly. Our approach is formally grounded to ensure the correctness of the synthesized mediator. We demonstrate the validity of our approach by implementing the MICS (Mediator synthesIs to Connect componentS) tool and experimenting with it on various real-world case studies.
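
As a small illustration of ontology-based interface matching (the ontology, interfaces, and matching rule below are my assumptions for the sketch, not the MICS algorithms), operations can be mapped when their data types are compatible under a subsumption hierarchy:

```python
# Toy interface-mapping sketch: map a required operation to a provided
# one when the required input is subsumed by the provided input type
# and the provided output is subsumed by the required output type.
SUBCLASS = {"EBook": "Book", "Book": "Item", "Price": "Amount"}

def subsumed(t, ancestor):
    while t is not None:
        if t == ancestor:
            return True
        t = SUBCLASS.get(t)
    return False

def map_operations(provided, required):
    """provided/required: {op_name: (input_type, output_type)}."""
    return {r: p
            for r, (rin, rout) in required.items()
            for p, (pin, pout) in provided.items()
            if subsumed(rin, pin) and subsumed(pout, rout)}

provided = {"getItemPrice": ("Book", "Amount")}
required = {"quote": ("EBook", "Amount")}
print(map_operations(provided, required))  # {'quote': 'getItemPrice'}
```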

Bio:

Valerie Issarny is "Directrice de recherche" at Inria, where she heads the ARLES Inria research project-team at the Paris-Rocquencourt center. Her research interests relate to distributed systems, software architectures, pervasive computing/ambient intelligence systems, and middleware. Her research is currently focused on middleware-based systems for the mobile Internet of Things, with an emphasis on scalable and interoperable solutions for the target systems. Valerie is also the scientific manager of the Inria@SiliconValley program, which aims at strengthening research collaborations between Inria and Californian universities.


Model-Based System Engineering: Past, Present and Future

Oct 22, 2013, 4.10-5pm, Michael Tiller, Xogeny.

Slides

Abstract

Model-based System Engineering has been recognized, for some time, as a way for companies to improve their product development processes. However, change takes time in engineering and we still have only scratched the surface of what is possible. New ideas and technologies are constantly emerging that can improve a model-based approach. In this talk, I will discuss some of my past experiences with model-based system engineering in the automotive industry. I'll also discuss the shifts I see from numerical approaches to more symbolic approaches and how this manifests itself in a shift from imperative representations of engineering models to more declarative ones. I'll cover some of the interesting challenges I've seen trying to model automotive systems and how I think those challenges can be overcome moving forward. Finally, I'll talk about some of the exciting possibilities I see on the horizon for modeling.

Bio:

Dr. Michael Tiller received his Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign in 1995. After graduating, he worked at Ford Motor Company in the Powertrain Research Department. His work focused on modeling of engine and transmission systems. In particular, he worked on many applications involving the Ford Hybrid Escape, the first production hybrid in North America and the first production hybrid SUV in the world. He is an author of 8 patents worldwide for his work on the Ford Hybrid Escape. Dr. Tiller left Ford in 2005 to join Emmeskay, a Michigan-based engineering consulting company, as Vice-President of Modeling R&D. In 2010, Emmeskay was acquired by LMS International. In 2011, Dr. Tiller joined Dassault Systèmes HQ in Paris to become Worldwide Director of Marketing for PLM-Systems. In August 2012, he started his own company, Xogeny, to help companies accelerate their model-based system engineering processes through consulting and tool development.

Since 1999, Dr. Tiller has been involved with the Modelica modeling language and has been a member of the Modelica Association board since it was formed. In 2001 he wrote the first book on Modelica, "Introduction to Physical Modeling with Modelica". He has been active in the ongoing development of the Modelica language, the Modelica Standard Library, and the recent FMI standard. In October 2012, he launched a Kickstarter campaign to fund the writing of a new Modelica book to be released under a Creative Commons license.


Perspectives on the Future of Trustworthy Mobile Computing

Oct 29, 2013, 4.10-5pm, David Kleidermacher, Green Hills Software.

Slides

Abstract

This talk will discuss the gap between contemporary mobile security and what is needed to realize a world in which our mobile devices replace our wallets, credit cards, passports, home keys, medical device controllers, and voting kiosks. Topics include trusted platforms, high-assurance software development, and mobile virtualization.

Bio:

David Kleidermacher is Chief Technology Officer at Green Hills Software where he is responsible for technology strategy, platform planning, and solutions design. Mr. Kleidermacher is a leading authority in systems software and security, including secure operating systems, virtualization technology, and the application of high robustness security engineering principles to solve computing infrastructure problems. Mr. Kleidermacher earned his bachelor of science in computer science from Cornell University and is a frequent speaker and writer on technology subjects, including author of "Embedded Systems Security", Elsevier 2012.


Specification Mining - New Formalisms, Algorithms and Applications

Nov 12, 2013, 4.10-5pm, Wenchao Li, UC Berkeley.

Slides

Abstract

Specification is the first and arguably the most important step for formal verification and correct-by-construction synthesis. These tasks require understanding precisely a design's intended behavior, and thus are only effective if the specification is created right. For example, much of the challenge in bug finding lies in finding the specification that mechanized tools can use to find bugs. It is extremely difficult to manually create a complete suite of good-quality formal specifications, especially given the enormous scale and complexity of today's designs. Many real-world experiences indicate that poor specifications, or the lack of sufficient ones, can easily lead to missed critical bugs, and in turn to design re-spins and time-to-market slips.

This dissertation presents research that mitigates this manual and error-prone process through automation. The overarching theme is specification mining - the process of inferring likely specifications by observing a design's behaviors. In the first part of the talk, I will describe two approaches to mine specifications dynamically from simulation or execution traces. The first approach offers a simple but effective template-based remedy to the aforementioned problem. The second approach presents a novel formalism of specification mining based on the notion of sparse coding, which can learn latent structures in an unsupervised setting. I will discuss tradeoffs and synergies of the two approaches and show how the mined specifications can be used to localize bugs effectively.
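
As a minimal illustration of the template-based flavor of dynamic mining (the template, traces, and brute-force search are my assumptions, not the talk's algorithms), the sketch below instantiates one "response" template -- every a is eventually followed by b -- against a set of traces and keeps the instantiations that always hold:

```python
# Template-based specification mining over execution traces: report the
# (a, b) pairs for which "every a is eventually followed by b" holds on
# all observed traces. Traces and template are illustrative.
from itertools import permutations

def holds_response(trace, a, b):
    """True iff every occurrence of a is eventually followed by b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

def mine_responses(traces):
    alphabet = {e for t in traces for e in t}
    return {(a, b) for a, b in permutations(alphabet, 2)
            if all(holds_response(t, a, b) for t in traces)}

traces = [["req", "gnt", "req", "gnt"],
          ["req", "nop", "gnt"]]
print(mine_responses(traces))  # includes ('req', 'gnt') as a likely spec
```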

In the second part of the talk, I will focus on the problem of synthesis from temporal logic specifications. While offering the attractive proposition of correct-by-construction design, this synthesis approach relies completely on the user to specify not only the intended behaviors of the system but also the assumptions on the environment. Environment assumptions are especially hard to get right, since they are often implicit knowledge and are seldom documented. We address this problem by learning assumptions from the counterstrategies of an unrealizable specification to systematically guide it towards realizability. We further show that the proposed counterstrategy-guided assumption mining framework enables the automatic synthesis of a new class of semi-autonomous controllers, called human-in-the-loop (HuIL) controllers. Lastly, I will discuss two efforts on broadening the scope of specification mining with human input, one through gamification and crowdsourcing and the other through natural language processing.

Bio:

Wenchao Li is a Ph.D. candidate in the Department of Electrical Engineering and Computer Science (EECS) at UC Berkeley, advised by Prof. Sanjit A. Seshia. He received his B.S. in EECS and B.A. in Economics also from UC Berkeley. He is broadly interested in designing, analyzing and improving dependable computing systems. He dreams of a world of bug-free designs.


A Platform Architecture for Sensor Data Processing and Verification in Buildings

Nov 19, 2013, 4.10-5pm, Jorge Ortiz, UC Berkeley.

Slides

Abstract

Buildings consume 72% of the electricity and nearly half of the total energy produced in the United States. In order to improve their performance, deep sampling and analytics must be deployed at scale. However, each building is uniquely designed and highly heterogeneous -- limiting portability and preventing widespread deployment of solutions.

In this talk, we discuss a platform architecture with two core services: a file system service and a verification service. The former offers a common namespace and standard access methods for building sensors/actuators, their data, and data processing jobs, and allows applications to be written as portable code. The latter verifies various system behavioral patterns and physical relationships between sensors in order to maintain an accurate representation of the building in software. We evaluate the effectiveness of our architecture and verification techniques through several real-world deployments across 7 buildings in two countries, supporting over half a dozen applications.

Bio:

Jorge Ortiz is a PhD candidate in the Computer Science division at the University of California at Berkeley, working with Professor David Culler on operating systems and networking. He has an M.S. in Computer Science from UC Berkeley and a B.S. in Electrical Engineering and Computer Science from M.I.T. His research interests include energy-efficient buildings, sensor networks, and cyber-physical systems. He spent several years at Oracle and IBM prior to his time at Berkeley and worked on nano-satellite software at Nanosatisfi while studying at Berkeley. Jorge is a member of the Software Defined Buildings (SDB) group at UC Berkeley and will be joining IBM Research at the end of this year.


Polynomial-Time Verification of PCTL Properties of MDPs with Convex Uncertainties and its Application to Cyber-Physical Systems

Nov 26, 2013, 4.10-5pm, Alberto Puggelli, UC Berkeley.

Slides

Abstract

Accurate modeling of cyber-physical systems (CPS) is a notoriously challenging problem due, among other reasons, to the intrinsic limitations in representing the physical part of the system, whose behavior can only be inferred through finite-resolution measurements. The problem is further exacerbated in the context of the formal verification of these systems, where the generated guarantees on the system properties are only as accurate as the model itself. In this talk, we present a framework that allows the formal verification of quantitative properties of CPS while taking into account uncertainties in the modeling process. We first present the model of Convex Markov Decision Processes (CMDPs), i.e., MDPs in which transition probabilities are only known to lie in convex uncertainty sets. These convex sets can be used to formally capture the uncertainties that are present in the modeling process. Using results on strong duality for convex programs, we then present a verification algorithm for Probabilistic Computation Tree Logic (PCTL) properties of CMDPs, and prove that it runs in time polynomial in the size of a CMDP for a rich subclass of convex uncertainty models. This result allows us to lower the previously known algorithmic complexity upper bound for PCTL properties of CMDPs from co-NP to PTIME. We conclude the talk by describing how we recently applied the proposed technique to the verification of the behavior of car drivers performing complex maneuvers. Results show that we can quantitatively evaluate the performance of the drivers depending on their attention level, i.e., whether they are attentive or distracted while driving. The proposed framework is currently being added to the public distribution of PRISM, the state-of-the-art tool for the probabilistic verification of stochastic systems.
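
To illustrate the core computation for the simplest convex sets -- intervals -- the sketch below runs robust value iteration for maximal reachability, where the controller maximizes over actions and nature minimizes over transition probabilities inside [lo, hi] boxes. The 3-state model is an illustrative assumption; the talk's algorithm handles general convex uncertainty sets and full PCTL via strong duality:

```python
# Robust value iteration for maximal reachability in an interval MDP.
def worst_case_value(lo, hi, v):
    """min sum(p*v) s.t. lo <= p <= hi, sum(p) = 1 (greedy for boxes;
    assumes sum(lo) <= 1 <= sum(hi))."""
    p, slack = list(lo), 1.0 - sum(lo)
    for j in sorted(range(len(v)), key=lambda j: v[j]):  # cheapest first
        give = min(hi[j] - lo[j], slack)
        p[j] += give
        slack -= give
    return sum(pi * vi for pi, vi in zip(p, v))

def robust_reach(intervals, target, n, iters=100):
    v = [1.0 if s in target else 0.0 for s in range(n)]
    for _ in range(iters):
        v = [1.0 if s in target else
             max(worst_case_value(lo, hi, v) for lo, hi in intervals[s])
             for s in range(n)]
    return v

# State 2 is the target; state 1 is absorbing and losing.
intervals = {
    0: [([0.0, 0.1, 0.5], [0.2, 0.5, 0.9]),   # action a
        ([0.0, 0.4, 0.2], [0.1, 0.8, 0.4])],  # action b
    1: [([0.0, 1.0, 0.0], [0.0, 1.0, 0.0])],  # stuck
}
print(robust_reach(intervals, {2}, n=3))      # [0.5, 0.0, 1.0]
```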

Bio:

Alberto Puggelli received the B.Sc. degree and the double M.Sc. degree in Electrical Engineering (summa cum laude) from the Polytechnic of Milan and the Polytechnic of Turin, Italy, in 2006 and 2008, respectively. In 2009, he was with ST-Ericsson as an intern analog designer. In 2009, he joined the EECS Department at the University of California at Berkeley, where he is currently pursuing the Ph.D. degree. From 2011 to 2013, he held intern positions in the R&D and product divisions at Texas Instruments (Kilby Labs) and Lion Semiconductor. Alberto is the recipient of two Best Student awards at the Polytechnic of Milan, the AEIT fellowship entitled to "Isabella Sassi Bonadonna" for the year 2010, and the 2012 Margarida Jacome GSRC Best Poster Award. He also serves as a reviewer for the IEEE Transactions on Circuits and Systems II. His research interests include methodologies for the design of energy-efficient SoCs and techniques for the formal verification of nonlinear and stochastic hybrid systems.


Towards Probabilistic Numerical Methods

Dec 03, 2013, 1.10-2pm, Philipp Hennig, Max Planck Institute for Intelligent Systems, Tübingen, Germany.

Slides

TIME: 1-2pm

Abstract

Algorithms for optimization, quadrature, and the solution of differential equations are a mainstay of applied mathematics. During the past century they evolved into mature, separated fields. Machine learning is just an upstart compared to these sophisticated disciplines, but it offers a uniquely unified view on them: From the point of view of machine learning, numerical methods are learning methods, which infer an intractable ("latent") property (roots, integrals, ODE solutions) of a function, by combining structural assumptions (a "prior") with evaluations ("data") of the function at finitely many points. I will present some recent work on the kinds of insights this view allows, based on the specific examples of quasi-Newton methods in optimization, and the Runge-Kutta family for ODEs. I will argue that treating numerical methods as learning algorithms opens up avenues to unification of ideas, functional extensions, and performance improvements. While the connection between numerical methods and inference has been known since Poincaré, it is only beginning to be seriously studied now, so much of the research I will present is in early stages. It opens up more, and more fundamental, questions than it answers.
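
As a taste of the "numerical methods are learning methods" view (my toy example, not material from the talk), the sketch below implements Bayesian quadrature: a Gaussian-process prior on the integrand is conditioned on a few evaluations, and the posterior mean of the integral comes out in closed form for an RBF kernel. The kernel, length scale, and test integrand are assumptions:

```python
# Toy Bayesian quadrature: GP prior on f, condition on n evaluations,
# read off the posterior mean of the integral over [a, b].
import numpy as np
from scipy.special import erf

def bayes_quad(f, a, b, xs, ell=0.5, jitter=1e-8):
    xs = np.asarray(xs, dtype=float)
    y = f(xs)                                     # "data": evaluations of f
    K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2 / ell ** 2)
    # z[i] = integral over [a, b] of the RBF kernel k(x, xs[i])
    z = ell * np.sqrt(np.pi / 2) * (erf((b - xs) / (np.sqrt(2) * ell))
                                    - erf((a - xs) / (np.sqrt(2) * ell)))
    w = np.linalg.solve(K + jitter * np.eye(len(xs)), z)
    return w @ y                                  # posterior mean of integral

est = bayes_quad(np.sin, 0.0, np.pi, np.linspace(0.0, np.pi, 17))
print(est)  # close to the exact value, 2.0
```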

Bio:


Controlling Energy-Efficient Buildings in the Context of the Smart Grid: A Cyber-Physical System Approach

Dec 10, 2013, 4.10-5pm, Mehdi Maasoumy, UC Berkeley.

Slides

Abstract

The building sector is responsible for about 40% of energy consumption, 40% of greenhouse gas emissions, and 70% of electricity use in the US. Over 50% of the energy consumed in buildings is directly related to space heating, cooling, and ventilation. Optimal control of heating, ventilation, and air conditioning (HVAC) systems is therefore crucial for reducing energy consumption in buildings.

In the first part of the talk, we present a novel Parameter Adaptive Building (PAB) model framework to update the model parameters on the fly and reduce model uncertainty. We then present a Model Predictive Control (MPC) algorithm and a Robust Model Predictive Control (RMPC) algorithm, along with a methodology for selecting a controller type, i.e., RMPC, MPC, or rule-based control (RBC), as a function of building model uncertainty.
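
For flavor, the sketch below sets up a single-zone, finite-horizon MPC problem with an assumed first-order thermal model, a comfort band, and an energy objective, solved with cvxpy. All coefficients and bounds are illustrative; they are not the building models from the talk:

```python
# Toy one-zone MPC: minimize heating energy subject to thermal dynamics
# and a comfort band. Model coefficients and bounds are assumptions.
import cvxpy as cp
import numpy as np

N = 24                      # horizon: 24 hourly steps
a, b = 0.9, 0.05            # thermal inertia and heater gain (assumed)
t_out = 10 + 5 * np.sin(np.linspace(0, 2 * np.pi, N))  # outside temp (C)
x = cp.Variable(N + 1)      # room temperature trajectory
u = cp.Variable(N)          # heating power

constraints = [x[0] == 18]
for k in range(N):
    constraints += [x[k + 1] == a * x[k] + (1 - a) * t_out[k] + b * u[k],
                    x[k + 1] >= 20, x[k + 1] <= 24,   # comfort band
                    u[k] >= 0, u[k] <= 100]
problem = cp.Problem(cp.Minimize(cp.sum(u)), constraints)
problem.solve()
print(problem.status, float(problem.value))
```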

In the second part of the talk, we address the "Cyber-Physical" aspect of a building HVAC system in the design flow. We present a co-design framework that analyzes the interaction between the control algorithm and the embedded platform through a set of interface variables, and demonstrate how the design space is explored to optimize the energy cost and monetary cost, while satisfying the constraints for user comfort level.

The third part of my talk will be centered on the role of Smart Buildings in the context of Smart Grid. Commercial buildings have inherent flexibility in how their HVAC systems consume electricity. We first propose a means to define and quantify the flexibility of a commercial building. We then present a contractual framework that could be used by the building operator and the utility to declare flexibility on the one side and reward structure on the other side. We also present a control mechanism for the building to decide its flexibility for the next contractual period to maximize the reward, given the contractual framework. Finally, at-scale experiments are presented to demonstrate the feasibility of the proposed algorithm.

Bio:

Mehdi Maasoumy is a PhD candidate in the Mechanical Engineering Department at the University of California, Berkeley. His research interests include modeling and model-based optimal control of linear, nonlinear, and hybrid systems, with applications in energy-efficient buildings, the smart grid, and aircraft electric power distribution systems. He received a B.S. degree from Sharif University of Technology in 2008 and an M.S. degree from the University of California, Berkeley in 2010, both in Mechanical Engineering. He is the recipient of the Best Student Paper Award at the International Conference on Cyber-Physical Systems (ICCPS) 2013, a Best Student Paper Award finalist at the ASME Dynamic Systems and Control Conference (DSCC) 2013, and the Outstanding Preliminary Examination Award from the University of California, Berkeley.

