Spring 2012 Seminars
David Broman, Linköping University, Sweden, UC Berkeley
Abstract: Cyber-physical systems (CPS) form an emerging research field that addresses the challenges of systems combining computation, networking, and physical processes. The concept of time is an inherent property of such systems. Missed deadlines in hard real-time applications, such as avionics and automotive control systems, can have devastating, life-threatening consequences. Hence, timing predictability for CPS is a correctness criterion, not a quality factor. Extensive research has been performed on defining high-level modeling languages where time is a fundamental concept (e.g., Ptolemy II, Modelica, Simulink). Moreover, a new category of processors called precision timed (PRET) machines is currently being developed that are both predictable and repeatable with regard to time. However, there is a semantic gap between high-level languages and PRET machines. In this talk we present research challenges for a new research project in the Ptolemy group, with the objective of bridging this gap. The aim is to establish a new formal foundation of timing predictability for the semantics of correct translation/compilation from high-level CPS modeling languages down to machine code for PRET machines. The research includes formal proofs of correctness using computer-based proof assistants, as well as the implementation of a prototype compiler for practical testing and evaluation.
Bio: David Broman is a visiting scholar at UC Berkeley, working in the Ptolemy group at the Electrical Engineering & Computer Science department. He is an assistant professor at Linköping University in Sweden, where he also received his PhD in computer science in 2010. David is part of the design group developing the Modelica language and he is the main designer of the Modeling Kernel Language (MKL), a domain-specific host language for modeling cyber-physical systems. David's research interests include programming and modeling language theory, software engineering, and mathematical modeling and simulation of cyber-physical systems. He has worked five years within the software security industry and is a member of the Modelica Association.
Adam Cataldo, LinkedIn
Abstract: This talk is all about a technical effort I've led to make LinkedIn's engineers more productive. I'll start with an overview of the LinkedIn architecture and how engineers have traditionally gone about making changes. I'll then give an explanation of what Quick Deploy is, and how it speeds up the development cycle for our engineers. This was a big project, and we ran into many gotchas along the way. I'll explain some of the more interesting problems we had to solve, and explain some of the lessons we've learned that might be worth considering when you go build the next big viral technology.
Bio: Adam Cataldo works on the infrastructure to keep LinkedIn scaling as it continues to grow. He's an EECS alumnus (PhD 2006 in Edward Lee's group), and he's excited to come back and talk about the cool stuff he's been doing in industry.
Per Ljung, Nokia
Abstract: Tomorrow's promise of mobile devices is always-on computation and communication to enrich our lives. Some of these always-on apps might include: better awareness with augmented reality, individualized suggestions from personal agents, better health with caloric and fitness tracking, multi-media life-blogging, and immersive user interfaces. But there is a slight problem. The battery life of today's mobile devices is typically a day or so -- if we're lucky. Modern handsets typically have a 5Wh battery and use about 0.5W for the display, 1W for the radio, and 1W for the cpu. In the worst case this gives only 2h of battery life. To enable a full day's use with always-on apps, we need at least a 10x increase in energy efficiency. This talk will review opportunities in energy sources and sinks for mobile devices, including batteries, displays, communication and computation subsystems, to identify several possible 2x, 10x and 100x improvements. The Nokia Research Berkeley lab is optimistic, and we are currently prototyping mobile devices with dramatic improvements in energy efficiency.
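The back-of-the-envelope arithmetic in the abstract can be checked directly; the numbers below are taken from the abstract itself, and the 24-hour target is an assumed interpretation of "a full day's use":

```python
# Worst-case battery-life arithmetic from the abstract.
battery_wh = 5.0
power_w = 0.5 + 1.0 + 1.0      # display + radio + cpu, all active

hours = battery_wh / power_w   # worst-case battery life

# Assumed target: a full day (24h) of always-on use.
target_hours = 24
efficiency_gain_needed = target_hours / hours
```

With these figures the required gain comes out at 12x, consistent with the abstract's call for "at least a 10x increase in energy efficiency".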
Bio: Per Ljung is the team lead of Performance Efficient Mobile Platforms (PEMP) at Nokia Research in Berkeley. He was the founder of two startups working on Darpa funded projects, first with 3D coupled field CAD using accelerated multipole boundary elements and later with high-level correct-by-construction synthesis of synchronous and asynchronous hw/sw systems. Ljung has a PhD from UC Berkeley and a MSc from Kungliga Tekniska Högskolan in Stockholm Sweden.
Stephen Weston, JP Morgan
Abstract: We report results from an ongoing project, spanning multiple asset classes and grounded in discrete mathematics, to create dataflow computational engines that employ fine-grained parallelism to accelerate complex financial models for trading and risk management.
Bio: Stephen currently heads a group called Applied Analytics within the investment banking division of JP Morgan. The purpose of the group is to accelerate analytic models for trading and risk management at JP Morgan. Prior to his current position he ran the credit hybrids Quantitative Research group in JP Morgan. Before joining JP Morgan Stephen had roles in risk management and analytics at Deutsche Bank, Credit Suisse, UBS and Barclays. Prior to investment banking, Stephen was an academic and taught economics, finance and quantitative methods. Stephen holds a PhD in finance from Cass Business School in London.
Anil Aswani, UC Berkeley
Abstract: Partially engineered systems feature interactions between designed and undesigned components, and they are often found in the energy and healthcare domains. Achieving high efficiency and performance in such systems is challenging because of the difficulty in identifying accurate models. For instance, heating, ventilation, and air-conditioning (HVAC) modulates building environment to ensure occupant comfort. And though HVAC can be described by simple physics-based processes, the impact of building occupants makes it difficult to create models for energy-efficient HVAC control. This talk provides examples of two new techniques that rigorously combine statistical methods with control engineering, for the purpose of identification, analysis, and control of partially engineered systems for which models are not well known a priori. The first is a regression technique for identifying linear or local-linear models of systems, which can better reduce noise when the measurements have either manifold or collinear structure by leveraging theory from differential geometry. The second is a control method called learning-based model predictive control (LBMPC) that merges statistical methods into a control framework---allowing statistics to improve performance through identification of better system models, while providing theoretical guarantees on the robustness and stability of the closed-loop control. The improvements in modeling and control possible with these techniques are illustrated with applications to energy-efficient HVAC systems and high-performance control of semi-autonomous systems. Potential applications to the design of new medical treatments for cancer are also described.
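The split at the heart of LBMPC, a fixed nominal model for guarantees and a statistically learned correction for performance, can be pictured with a toy one-step sketch. The scalar dynamics, residual value, and candidate inputs below are illustrative assumptions, not the authors' formulation:

```python
# Toy sketch of the LBMPC idea: optimize performance over a *learned*
# model, while robustness/stability guarantees rest on the fixed nominal
# model. Scalar dynamics x+ = a*x + b*u are an assumed example.
a, b = 0.9, 0.1

def nominal_step(x, u):
    # nominal physics-based model (used for the theoretical guarantees)
    return a * x + b * u

def learned_step(x, u, residual):
    # statistics refine the nominal model with an estimated residual term
    return nominal_step(x, u) + residual

def choose_input(x, target, residual, candidates):
    # performance: pick the input whose learned one-step prediction
    # lands closest to the target (a stand-in for the MPC optimization)
    return min(candidates, key=lambda u: abs(learned_step(x, u, residual) - target))

u = choose_input(x=1.0, target=0.0, residual=0.05, candidates=[-1.0, 0.0, 1.0])
```

In the real method the residual is identified online from data and the one-step choice is replaced by a receding-horizon optimization.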
Bio: Anil Aswani is currently a postdoctoral researcher in Electrical Engineering and Computer Sciences (EECS) at UC Berkeley. He received his Ph.D. in 2010 from UC Berkeley in EECS, with a Designated Emphasis in Computational and Genomic Biology. In addition, he completed an M.S. in 2007 from UC Berkeley in EECS and a B.S. in Electrical Engineering from the University of Michigan, in Ann Arbor, in 2005.
Murat Senel, Bosch
Abstract: The vision of the Internet of Things (IoT) calls for connectivity not only for consumer electronics and home appliances, but also for small battery-powered devices which cannot be recharged. Such small devices, often various types of sensors and actuators, are required to sustain reliable operation for years on batteries, even in the presence of heavy interference. The IEEE 802.11 standard has established itself as one of the most popular wireless technologies offering connectivity. For many years, Wi-Fi technology was not considered an option for low-power wireless sensing applications because it was not designed for energy efficiency. However, multiple companies have recently developed power-efficient Wi-Fi components with appropriate system design and usage models targeting IoT. Low-power Wi-Fi provides a significant improvement over typical Wi-Fi in terms of power consumption, although battery lifetime depends heavily on the operating scenario. Interference studies show that neither in-network nor out-of-network interference prevents reliable communication of sensor devices. The communication range is directly related to the link data rate. In a typical residential building, a single AP operating at 1Mbps, even if not installed in an optimal location, can provide full coverage for all potential sensor locations. This is joint work between Serbulent Tozlu, Murat Senel, Abtin Keshavarzian and Wei Mao.
Bio: Serbulent Tozlu graduated from Istanbul University as an Electronics Engineer. He then received his M.Sc. degree specializing in Computer Networks from the Department of Electrical Engineering at the University of Southern California in 2000. After graduation, he started his engineering career as a Research Engineer at the Bosch Research & Technology Center located in Palo Alto, CA. As a researcher, he gained several years of experience in topics such as embedded systems, in-car networking, GPS systems, and PIR/uW-based intrusion detection systems. Later, as a member of the Wireless group within Bosch Research, he worked on low-power wireless protocols for residential/commercial building environments, specializing in the field of Internet of Things / Web Enabled Sensors with a focus on low-power Wi-Fi systems and 6LoWPAN technology. As of February 2012, he is responsible for IP Services Analysis at Turk Telekom. Murat Senel received his PhD degree in Electrical and Computer Engineering from Purdue University, West Lafayette, IN, and B.Sc. degrees in Electrical Engineering and Industrial Engineering from Bogazici University, Turkey. In 2008, he joined the Bosch Research & Technology Center, where he works on an R&D project on wireless condition monitoring for industrial applications. His research interests are in the area of wireless communication; in particular, he is interested in integrating wireless sensor networks into the Internet.
Andrew Tinka, UC Berkeley
Abstract: Actuated mobile sensing in large-scale systems is undergoing a revolution driven by the same technologies which enable smartphones: highly available connectivity, gigahertz-scale embedded processors, and low power electronics. The Floating Sensor Network project at UC Berkeley has developed a fleet of 100 lightly actuated, inexpensive robotic sensors for operations in river and estuary environments. In this talk, I will describe my work as the founding member of the Floating Sensor Network project to design and build both the fleet and the streaming data analytics system required for its successful operation. The research presented includes contributions in systems design, data analysis, planning, and control. The Floating Sensor Network devices incorporate GPS, GSM, 802.15.4 radios, and embedded computer modules to gather data from the environment. The streamed data is integrated into a Partial Differential Equation (PDE) model of shallow water flows, using a tractable quadratic programming formulation. A Zermelo-Voronoi framework is used to drive a decentralized coverage-based gradient descent algorithm for fleet planning using the estimated flow fields. Individual sensors use an optimal control scheme based on the Hamilton-Jacobi-Bellman PDE to achieve their position targets while guaranteeing a safe distance from obstructions in the environment under bounded disturbances. Experimental deployments of the Floating Sensor Network include the Sacramento/San Joaquin Delta and the San Francisco Bay, as well as a disaster response exercise in Stillwater, Oklahoma in cooperation with the US Army Corps of Engineers and the Department of Homeland Security. Applications of the Floating Sensor Network for disaster response, contamination tracing, and estuarial hydrodynamics studies will be discussed.
Bio: Andrew Tinka received the Bachelor of Applied Science in Engineering Physics from the University of British Columbia in 2002. He worked at Powis Parker, Inc. and the Center for Collaborative Control of Unmanned Vehicles for four years as an embedded systems engineer. He earned an M.S. degree in Civil and Environmental Engineering at UC Berkeley and is currently a PhD candidate in Electrical Engineering at UC Berkeley. Andrew Tinka is the recipient of NASA LAUNCH’s “Top Ten Innovators in Water” award and the Center for Entrepreneurship and Technology’s Cleantech Innovation Award for his work on the Floating Sensor Network. His research interests include applied control theory and optimization for actuated sensors in distributed sensing and data analytics problems.
Liangpeng Guo, UC Berkeley
Abstract: To cope with the increasing number of advanced features (e.g., smart-phone integration and side-blind zone alert) being deployed in vehicles, automotive manufacturers are designing flexible hardware architectures which can accommodate increasing feature content with as few hardware changes as possible, so as to keep future costs down. In this paper, we propose a formal and quantitative definition of flexibility, a related methodology, and a tool flow aimed at maximizing the flexibility of an automotive hardware architecture with respect to the features that are of greater importance to the designer. We define flexibility as the ability of an architecture to accommodate future changes in features with no changes in hardware (no addition/replacement of processors, buses, or memories). We utilize an optimization framework based on mixed integer linear programming (MILP) which computes the flexibility of the architecture while guaranteeing performance and safety requirements.
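The definition above, flexibility as the ability to absorb future feature changes with fixed hardware, can be sketched as a simple ratio. The feature names, load numbers, utilization budget, and the enumeration in place of the paper's MILP are all illustrative assumptions:

```python
from itertools import combinations

# Illustrative sketch: flexibility as the fraction of candidate future
# feature sets an architecture can host without hardware changes, checked
# here by a simple CPU-utilization budget (a stand-in for the paper's MILP
# with performance and safety constraints). All numbers are assumed.
feature_load = {"smartphone_integration": 0.3, "blind_zone_alert": 0.25,
                "lane_keeping": 0.35, "park_assist": 0.2}
cpu_budget = 0.55   # spare utilization on the fixed ECU (assumed)

def feasible(features):
    # a feature set fits if its combined load stays within the budget
    return sum(feature_load[f] for f in features) <= cpu_budget

def flexibility(features, k):
    # fraction of size-k future feature sets that need no hardware change
    sets = list(combinations(features, k))
    return sum(feasible(s) for s in sets) / len(sets)

flex = flexibility(sorted(feature_load), 2)
```

The real framework replaces the brute-force enumeration with an MILP so that much larger feature and architecture spaces remain tractable.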
Bio: Liangpeng Guo is a current graduate student at UC Berkeley. He worked for GM Advanced Technology division as an intern. His research interests are system level modeling and evaluation of complexity, flexibility, and scalability for Electronic Control System (ECS).
Isaac Liu, UC Berkeley
Abstract: Cyber-Physical Systems (CPS) are integrations of computation with physical processes. A wide range of applications can potentially benefit from CPS. However, these systems must be equipped to handle the inherent concurrency and inexorable passage of time of physical processes. Traditional computing abstractions concern themselves only with the functional aspects of a program, and not its timing properties. Thus, nearly every abstraction layer has failed to incorporate time into its semantics; the passage of time is merely a consequence of the implementation. When the temporal properties of the system must be guaranteed, designers must reach beneath the abstraction layers. This not only increases the design complexity and effort, but the designed systems are brittle and extremely sensitive to change. In this work, we re-examine the ISA layer and its effect on microarchitecture design. The ISA defines the contract between software instructions and hardware implementations. However, modern ISAs do not specify timing properties of the instructions as part of the contract; thus, architecture designs have largely implemented techniques that improve average performance at the expense of execution time variability. This leads to imprecise WCET bounds that limit the timing predictability and timing composability of architectures. In order to address the lack of temporal semantics in the ISA, we propose instruction extensions to the ISA that give temporal meaning to the program. The instruction extensions allow programs to specify execution time properties in software that must be observed for any correct execution of the program. In addition, we present the Precision Timed ARM (PTARM) architecture, a realization of Precision Timed (PRET) machines that provides timing predictability and composability without sacrificing performance.
PTARM employs a predictable thread-interleaved pipeline with an exposed memory hierarchy that uses scratchpads and a predictable DRAM controller. This removes timing interference amongst the hardware threads, enabling timing composability in the architecture, and provides deterministic execution times for all instructions, enabling timing predictability in the architecture. We show that the predictable thread-interleaved pipeline and DRAM controller also achieve better throughput compared to conventional architectures when fully utilized, accomplishing our goal of providing both predictability and performance. To show the applicability of the architecture, we present two applications implemented on the PRET architecture that utilize the predictable execution times and the timing-extended ISA to achieve their design requirements. With this work, we aim to provide a deterministic foundation for higher abstraction layers, which enables more efficient designs of safety-critical cyber-physical systems.
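The flavor of the timing-extended ISA described above can be conveyed with a toy logical-time model. The instruction name `delay_until` and its padding semantics are illustrative assumptions, not the PTARM ISA itself:

```python
# Toy logical-time model of an ISA-level timing instruction (assumed name
# "delay_until"): a code block is forced to take *at least* a specified
# time, so its duration becomes part of the program's semantics rather
# than an accident of the implementation.

def delay_until(now, deadline):
    # hardware would stall the thread; here we just advance logical time
    return max(now, deadline)

def run_task(task_cycles, start, period):
    # execute a task, then pad to its period boundary, giving the block a
    # repeatable duration regardless of how fast the task itself ran
    finish = start + task_cycles
    return delay_until(finish, start + period)

t0 = 0
t1 = run_task(task_cycles=3, start=t0, period=10)
t2 = run_task(task_cycles=7, start=t1, period=10)
```

Both invocations complete exactly one period after they start, even though the tasks take different numbers of cycles; that repeatability is what the real instruction extensions make enforceable in hardware.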
Bio: Isaac Liu is a graduating Ph.D candidate at UC Berkeley. His advisor is Edward A. Lee, and his research interests are real-time embedded systems and computer architectures. He obtained his B.S. degree in Computer Engineering at UC Santa Barbara, and has previously had internships at Microsoft, NVIDIA, and National Instruments.
John Kubiatowicz, UC Berkeley
Abstract: The brave new world of ubiquitous manycore computing systems leads us to reexamine the design of operating systems in the pursuit of responsiveness, realtime guarantees, power efficiency, security, and correctness. In this talk, I will argue that "two-level scheduling" -- the explicit separation of resource allocation and use -- permits an easier framework in which to construct adaptable systems that can provide guaranteed behavior. I will talk about a new resource-aware OS, called "Tessellation", that embodies the notion of "two-level scheduling". I will introduce a new OS primitive, called a "Cell", which is the basic unit of isolation, protection, and scheduling. Cells contain guaranteed fractions of system resources, including gang-scheduled groups of processors, caches, and memory bandwidth. I will describe "Pacora," a framework for adapting Cell behavior based on user policies, and describe how Cells provide optimal resource usage through custom user-level schedulers. Ultimately, I'll describe a vision of a hierarchical resource-allocation architecture that spans all levels from the local (person-area networks) to the Cloud.
Bio: John Kubiatowicz is a Professor of EECS at the University of California at Berkeley. Prof. Kubiatowicz received a dual B.S. in Physics and Electrical Engineering (1987), as well as an MS in EECS (1993) and PhD in EECS (1998), all from MIT. Kubiatowicz was chosen as one of Scientific American's top 50 researchers in 2002 and one of US News and World Report's "people to watch for 2004", and is the recipient of an NSF PECASE award (2000). Professor Kubiatowicz was also co-founder of the $3M DARPA QUIST Quantum Architecture Research Center, which received the DARPATech Most Significant Technical Achievement award in 2002. Kubiatowicz's research interests include manycore Operating Systems, multiprocessor and manycore CPU designs, Internet-scale distributed systems, and quantum computing design tools and architectures.
Ben Lickly, UC Berkeley
Abstract: This thesis demonstrates a correct, scalable and automated method to infer semantic concepts using lattice-based ontologies, given relatively few manual annotations. Semantic concepts and their relationships are formalized as a lattice, and relationships within and between program elements are expressed as a set of constraints. Our inference engine automatically infers concepts wherever they are not explicitly specified. Our approach is general, in that our framework is agnostic to the semantic meaning of the ontologies that it uses. Where practical use-cases and principled theory exist, we provide for the expression of infinite ontologies and ontology compositions. We also show how these features can be used to express value-parametrized concepts and structured data types. In order to help find the source of errors, we also present a novel approach to debugging by showing simplified error paths. These are minimal subsets of the constraints that fail to type-check, and are much more useful than previous approaches in finding the cause of program bugs. We also present examples of how this analysis tool can be used to express analyses such as abstract interpretation; physical dimensions and units; constant propagation; and checks of the monotonicity of expressions.
Bio: Ben Lickly is a graduating Ph.D candidate at UC Berkeley. His advisor is Edward A. Lee, and his research interests include static analysis, concurrency models, programming languages, and compiler optimizations. His B.S. degree is from Harvey Mudd College, and he has interned at JPL, Google, and Seoul National University.
Stéphane Lafortune, University of Michigan
Abstract: We present a control engineering approach to the elimination of certain classes of concurrency bugs in concurrent software. Based on a model of the source code extracted at compile time and represented in the form of a Petri net, control techniques from the field of discrete event systems are employed to analyze the behavior of the concurrent program. The property of deadlock-freedom of the program is mapped to that of liveness of its Petri net model, called a Gadara net. A new methodology, Iterative COntrol of Gadara nets (ICOG), is developed for synthesizing control logic that is liveness-enforcing and maximally permissive. ICOG relies upon the technique of Supervision Based on Place Invariants, developed for Petri nets subject to linear inequality constraints. The synthesized control logic, in the form of monitor places, is then implemented by program instrumentation. At run-time, the control logic provably enforces deadlock-freedom of the program without altering its control flow. The results presented pertain to the case of circular-wait deadlocks in multithreaded programs employing mutual exclusion locks for shared data. However, the methodology can be applied to tackle other classes of concurrency bugs. This is collaborative work with the members of the Gadara team: http://gadara.eecs.umich.edu
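The monitor-place idea can be pictured with a minimal Petri-net simulation. This toy model (the net structure and names are assumptions, not output of the Gadara tool) shows how one extra place with a single token prevents the classic circular-wait deadlock between two threads that acquire locks A and B in opposite orders:

```python
# Minimal Petri-net sketch: places hold tokens, a transition fires if its
# input places hold enough tokens. A synthesized "monitor" place with one
# token keeps the two lock-acquisition sequences from interleaving into a
# circular wait, without otherwise changing control flow.

def can_fire(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

marking = {"lockA": 1, "lockB": 1, "monitor": 1}
# thread 1 starts acquiring A then B; thread 2 acquires B then A; both
# must also consume the monitor token synthesized by the control logic
t1_pre, t1_post = {"lockA": 1, "monitor": 1}, {"t1_in": 1}
t2_pre, t2_post = {"lockB": 1, "monitor": 1}, {"t2_in": 1}

m1 = fire(marking, t1_pre, t1_post)   # thread 1 enters its lock region
blocked = not can_fire(m1, t2_pre)    # thread 2 waits: circular wait avoided
```

Without the monitor place, both transitions could fire from the initial marking and the threads could reach the state where each holds one lock and waits for the other.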
Bio: Stéphane Lafortune is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan. He obtained his degrees from Ecole Polytechnique de Montreal (B.Eng), McGill University (M.Eng), and the University of California at Berkeley (PhD), all in electrical engineering. He is co-author of the textbook "Introduction to Discrete Event Systems" (Springer, 2008).
Christoph Kirsch, University of Salzburg, Austria
Abstract: We present a brief overview of state-of-the-art work in the engineering of digital systems (hardware and software) where traditional correctness requirements are relaxed, usually for higher performance and lower resource consumption, but possibly also for other non-functional properties such as more robustness and less cost. The work presented here is categorized into work that involves just hardware, hardware and software, and just software. In particular, we discuss work on probabilistic and approximate design of processors, unreliable cores in asymmetric multi-core architectures, best-effort computing, stochastic processors, accuracy-aware program transformations, and relaxed concurrent data structures. As a common theme we identify, at least intuitively, "metrics of correctness" which appear to be important for understanding the effects of relaxed correctness requirements and their relationship to performance improvements and resource consumption. The focus of the talk will be on relaxed concurrent FIFO queues that outperform and outscale existing algorithms on state-of-the-art multicore hardware. This is joint work with Michael Lippautz, Hannes Payer, and Ana Sokolova.
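The relaxation trade-off for FIFO queues can be conveyed with a single-threaded toy. The real algorithms by Kirsch and colleagues are lock-free and designed for multicore scalability; this sketch only illustrates the idea that spreading operations over several partial queues trades strict FIFO order for reduced contention:

```python
import random
from collections import deque

# Toy relaxed FIFO queue: elements are scattered across several partial
# sub-queues, so concurrent threads would rarely contend on the same one.
# The price is that dequeue order is only approximately FIFO.
class RelaxedQueue:
    def __init__(self, partials=4, seed=0):
        self.qs = [deque() for _ in range(partials)]
        self.rng = random.Random(seed)

    def enqueue(self, x):
        self.rng.choice(self.qs).append(x)   # any partial queue will do

    def dequeue(self):
        nonempty = [q for q in self.qs if q]
        # take the head of some nonempty partial queue (relaxed order)
        return self.rng.choice(nonempty).popleft() if nonempty else None

q = RelaxedQueue()
for i in range(8):
    q.enqueue(i)
out = [q.dequeue() for _ in range(8)]
```

Every element is eventually dequeued exactly once, but not necessarily in insertion order; "metrics of correctness" in the talk's sense would bound how far from FIFO the order can drift.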
Bio: Christoph Kirsch is full professor and holds a chair at the Department of Computer Sciences of the University of Salzburg, Austria. Since 2008 he has also been a visiting scholar at the Department of Civil and Environmental Engineering of the University of California, Berkeley. He received his Dr.Ing. degree from Saarland University, Saarbruecken, Germany, in 1999 while at the Max Planck Institute for Computer Science. From 1999 to 2004 he worked as a Postdoctoral Researcher at the Department of Electrical Engineering and Computer Sciences of the University of California, Berkeley. His research interests are in concurrent programming and systems, virtual execution environments, and embedded software. Dr. Kirsch co-invented the Giotto and HTL languages, and leads the JAviator UAV project, for which he received an IBM faculty award in 2007. He co-founded the International Conference on Embedded Software (EMSOFT), was elected ACM SIGBED chair in 2011, and is currently an associate editor of ACM TODAES.
Alain Girault, INRIA, France
Abstract: For autonomous critical real-time embedded systems (e.g., satellites), guaranteeing a very high level of reliability is as important as keeping the power consumption as low as possible. We propose an off-line scheduling heuristic which, from a given software application graph and a given multiprocessor architecture (homogeneous and fully connected), produces a static multiprocessor schedule that optimizes three criteria: its length (crucial for real-time systems), its reliability (crucial for dependable systems), and its power consumption (crucial for autonomous systems). Our tricriteria scheduling heuristic, called TSH, uses the active replication of the operations and the data-dependencies to increase the reliability, and uses dynamic voltage and frequency scaling to lower the power consumption. We provide extensive simulation results to show how TSH behaves in practice: firstly, we run TSH on a single instance to provide the whole Pareto front in 3D; secondly, we compare TSH against the ECS heuristic (Energy-Conscious Scheduling) from the literature; and thirdly, we compare TSH against an optimal Mixed Integer Linear Program.
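The 3D Pareto front mentioned above rests on a standard dominance notion over the three criteria, which is easy to make concrete. The candidate schedules and their numbers below are made up for illustration; all three criteria are phrased so that smaller is better (failure probability in place of reliability):

```python
# Pareto dominance over (length, failure probability, energy): schedule a
# dominates b if it is no worse on every criterion and strictly better on
# at least one. The Pareto front is the set of non-dominated schedules.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(schedules):
    return [s for s in schedules
            if not any(dominates(o, s) for o in schedules if o is not s)]

# (length, failure probability, energy) for candidate schedules -- assumed data
candidates = [(10, 1e-6, 5.0), (12, 1e-7, 5.0), (11, 1e-6, 6.0), (10, 1e-6, 4.0)]
front = pareto_front(candidates)
```

Here the first candidate is dominated by the fourth (same length and reliability, less energy) and the third by the fourth as well, so only two schedules survive on the front; TSH explores such trade-offs over whole schedule spaces.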
Bio: Alain Girault holds a senior research fellow position at INRIA, the French National Institute for Research in Computer Science and Control. He received his PhD from the National Polytechnic Institute of Grenoble in January 1994 after doing his research in the Verimag laboratory. He was a postdoc researcher, in the ESTEREL team in INRIA Sophia-Antipolis in 1995, then in the PTOLEMY group at UC Berkeley in 1996, and in the PATH project at UC Berkeley in 1997. In 2008, he was a visiting scholar in the Department of Electrical and Computer Engineering of the University of Auckland, New Zealand. Since 2005, he has been head of the POP ART team at INRIA Grenoble Rhône-Alpes, that focuses on formal methods for embedded systems. His research interests include the design of reactive systems, with a special concern for distributed implementation, fault-tolerance, reliability, low power, discrete controller synthesis, and data-flow programming.