EECS 144/244

Fall 2015

Contents
Home
Overview
Logistics

Lectures
Homeworks
Reading
Resources
Projects
Fall 2015 edition
Fall 2014 edition
Fall 2013 edition
Spring 2013 edition
Fall 2011 edition

bCourses

Course Development
Wiki
SVN

Course Projects

Out of date, will be updated soon for Fall 2015.

The class requires a team project, with teams of one or two students. Suggested projects are listed below; the [Names] of each project's mentors appear in brackets after the project title:

Note: two new projects have just been added, proposed by the National Instruments research facility in Berkeley. If you're interested in either of these projects, let Prof. Tripakis know.

  1. Scheduling real time applications on distributed time sensitive networks [NI Berkeley].

    Time sensitive networking (TSN) architectures are increasingly adopted for real time systems in automobiles, avionics, and industrial control. For example, real time Ethernet is a promising technology that adds bandwidth reservations, latency guarantees, and time synchronization to the popular Ethernet standard for LANs. Future high performance system design will need to address the challenge of designing these time sensitive networks for complex real time applications that inter-connect hundreds of heterogeneous computing and switching nodes. Most real time applications have multiple concurrent interacting tasks with strict period, deadline, and latency constraints. A key problem is to jointly determine network topology, bandwidth reservations, and stream transmission times that satisfy application constraints. Related problems are to determine a suitable model of computation to capture real time tasks and streams and develop algorithms to compute stream transmission schedules. This project will study design techniques and algorithmic solutions for these problems. The main deliverable will be a software tool to explore time sensitive network architectures and evaluate scheduling algorithms.
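As a warm-up for the scheduling part of the project, the joint offset-assignment problem can be prototyped for the simplest case, periodic streams on a single shared link, with a greedy search over transmission offsets. This is only an illustrative sketch (the function name and the stream format are invented here, not part of any TSN standard); a realistic tool would likely use ILP or SMT solvers:

```python
from math import gcd
from functools import reduce

def schedule_offsets(streams):
    """Greedy offset assignment for periodic streams on one shared link.

    Each stream is (name, period, duration). Returns {name: offset} such
    that no two transmission windows overlap within the hyperperiod, or
    None if the greedy search fails to place some stream.
    """
    # Hyperperiod = least common multiple of all stream periods.
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (p for _, p, _ in streams))
    busy = []       # (start, end) slots already reserved over the hyperperiod
    offsets = {}
    for name, period, dur in streams:
        placed = False
        for off in range(period):
            # All transmission windows of this stream within the hyperperiod.
            windows = [(off + k * period, off + k * period + dur)
                       for k in range(hyper // period)]
            # Accept the offset if every window is disjoint from every slot.
            if all(e <= s2 or s >= e2 for s, e in windows for s2, e2 in busy):
                busy.extend(windows)
                offsets[name] = off
                placed = True
                break
        if not placed:
            return None
    return offsets
```

For example, two period-4 streams of duration 1 plus one period-8 stream of duration 2 fit on one link, while three period-2 streams of duration 1 do not.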

    References:

    • M. D. J. Teener, A. N. Fredette, C. Boiger, P. Klein, C. Gunther, D. Olsen, and K. Stanton. Heterogeneous Networks for Audio and Video: Using IEEE 802.1 Audio Video Bridging. Proceedings of the IEEE, 101(11):2339–2354, November 2013.
    • W. Steiner. TTEthernet: Time-triggered services for Ethernet networks. In Digital Avionics Systems Conference, 2009. DASC ’09. IEEE/AIAA 28th, pages 1.B.4–1–1.B.4–1, Oct 2009.

  2. Debugging in high level synthesis: a tool to identify important variables and expose them for debug [NI Berkeley].

    High-level synthesis (HLS) is gaining popularity in both academic and industrial design flows, as it alleviates some of the issues associated with traditional RTL-based design flows. Unfortunately, the automatic mapping from a "high-level" language such as C or C++ to hardware makes debugging difficult, as program variables and state are obfuscated during the compilation process. Your flow would automatically detect variables that seem relevant for debugging and "pin out" those variables so that the user can observe them. There are many different ways of tackling this problem. Observability and controllability metrics from digital testing might be a good place to start. The XPilot system from UCLA would also be a good HLS framework to use.
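As a crude first cut at "detecting relevant variables" (all names and the IR format here are hypothetical, not XPilot's representation), one could rank variables in a toy three-address IR by fan-out, on the assumption that heavily read variables are useful observation points:

```python
from collections import Counter

def rank_debug_candidates(ir_ops, top_k=2):
    """Rank variables in a toy three-address IR by use count (fan-out).

    ir_ops is a list of (dest, op, src1, src2) tuples. Variables that are
    read most often are crude candidates to "pin out" for debugging;
    numeric literals are ignored.
    """
    uses = Counter()
    for dest, op, *srcs in ir_ops:
        for s in srcs:
            if s is not None and not str(s).isdigit():
                uses[s] += 1
    return [v for v, _ in uses.most_common(top_k)]
```

A real flow would combine such static heuristics with testability measures (observability/controllability), as suggested above.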

    Reference:

  3. Verification of automotive controllers [Tripakis].

    The goal of the project is to study a set of Simulink benchmarks provided by our industrial partner Toyota, and to check properties of interest on these models. Different simulation/testing/verification techniques can be used (part of the project is to figure out which ones work and which ones don't).

    References (explaining the benchmarks above):

  4. Component theory with refinement for liveness properties [Tripakis].

    Component theories with refinement make it possible to specify the components of a system separately, and then reason about their composition (e.g., when are two components compatible?) and their refinement (when can one component substitute for another?). Many such theories exist, but most deal only with safety properties, which can express that the system has no "bad" behaviors. Recently a theory was proposed that can also deal with liveness properties [EMSOFT 2014 paper below]. This project will study algorithmic problems in this framework, and in particular how to check compatibility of components specified in LTL (linear temporal logic). Checking compatibility will involve dealing with LTL extended with quantifiers, and possibly also Büchi automata (see the paper by Vardi et al., below).
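To give a feel for what a compatibility check looks like, here is a heavily simplified, interface-automata-flavored sketch (the representation and the pessimistic notion of error used here are illustrative; the project itself targets LTL-based components, which are much harder):

```python
def compatible(c1, c2):
    """Pessimistic compatibility check for two toy interface automata.

    Each component is (initial_state, transitions), where transitions maps
    state -> {action: next_state}; action "x!" is an output, "x?" an input.
    An error occurs when one side emits an output the other cannot accept.
    Compatible = no reachable error in the synchronized product. Only
    synchronized output/input steps are explored in this toy version.
    """
    (i1, t1), (i2, t2) = c1, c2
    seen, stack = set(), [(i1, i2)]
    while stack:
        s1, s2 = stack.pop()
        if (s1, s2) in seen:
            continue
        seen.add((s1, s2))
        for a, n1 in t1.get(s1, {}).items():
            if a.endswith("!"):
                partner = a[:-1] + "?"
                if partner in t2.get(s2, {}):
                    stack.append((n1, t2[s2][partner]))
                else:
                    return False  # c2 cannot accept this output: error state
        for a, n2 in t2.get(s2, {}).items():
            if a.endswith("!"):
                partner = a[:-1] + "?"
                if partner in t1.get(s1, {}):
                    stack.append((t1[s1][partner], n2))
                else:
                    return False  # c1 cannot accept this output: error state
    return True
```

For instance, a component that outputs `msg` is compatible with one that inputs `msg`, but not with one that only handles `ack`.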

    References:


  5. Flexible Contracts for Resilient Systems [Mark McKelvin, NASA JPL].

    Resiliency is the ability of modern control systems to maintain awareness of system state and an acceptable level of operation in response to system disturbances, including threats to security and safety. Formalisms for expressing resilience are defined in terms of invariant and flexible assertions. Invariant assertions are fixed system constraints that are known a priori, whereas a flexible assertion is one that is learned during system operation and can accommodate unpredicted system behaviors.

    One approach is the use of contracts to express flexible assertions in resilient systems. Contract-based design is an emerging paradigm for system-level design. It is an approach where the design process is viewed as a successive assembly of components, where a component is represented in terms of assumptions about its environment and guarantees about its behavior. In resilient systems, guarantees need to change based on the system's operational environment, such as system usage, available system components, system mode, and human operator events.

    In this project, we will investigate the development and use of "flexible" contracts for expressing system resiliency features. In particular, we would like to answer the following questions:

    1. How do we represent flexible (self-adapting) contracts?
    2. How do we use trained contracts to detect fault conditions and influence fault tolerant or system recovery strategies?
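To make the contract notions concrete, here is a minimal sketch of assume-guarantee contracts over a finite behavior universe, with the standard saturated refinement check. The representation is an assumption made for illustration, not JPL's formalism; a "flexible" contract would update its predicates as the system learns:

```python
def refines(c1, c2, universe):
    """Check whether contract c1 = (A1, G1) refines c2 = (A2, G2).

    Contracts are pairs of predicates over a finite universe of behaviors.
    Standard saturated definition: c1 refines c2 iff A1 accepts at least
    every environment A2 accepts, and (G1 or not A1) implies (G2 or not A2)
    on every behavior.
    """
    a1, g1 = c1
    a2, g2 = c2
    # c1 must accept every environment c2 accepts.
    if not all(a1(b) for b in universe if a2(b)):
        return False
    # Saturated guarantee of c1 must imply saturated guarantee of c2.
    for b in universe:
        if (g1(b) or not a1(b)) and not (g2(b) or not a2(b)):
            return False
    return True
```

For example, a contract that assumes more environments and promises the same guarantee refines the original, while one that weakens the guarantee does not.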

    References:


  6. Model-Based Runtime Verification [Mark McKelvin, NASA JPL].

    In practice, system verification is a challenging task for Cyber-Physical Systems. It is often a manual process that involves many engineers. Functional and non-functional requirements of the system, such as timing, robustness, and resource consumption, are specified in text. A set of tests is developed from the requirements. Once the system is constructed, a team of engineers manually observes the system behavior on a set of test inputs. This process is tedious and error-prone, and problems are not discovered until system integration and test.

    In runtime verification, the actual execution of the system via simulation of high-fidelity models or hardware-in-the-loop simulations is used to verify a formal specification of system requirements. A correctness property of a system requirement is expressed in a formal language, which makes it possible to automatically translate the property into an observer. The observer is then used to check a system execution with respect to that property. Runtime verification is an interesting approach since in practice, simulation models are readily available.

    In this project, the students will investigate the use of formal methods to support runtime verification of Cyber-Physical Systems. Given a set of system requirements expressed in a formal language and a simulation or physical implementation of the system, synthesize a set of tests and an observer that can be used to automatically verify that the system under test conforms to the initial requirements. Different approaches may be taken, but one approach may be to adopt the work by Juniwal et al. to system test and verification.
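As a minimal illustration of the observer idea (the property and event names below are made up for the example), here is a monitor for a bounded-response requirement, "every trigger is followed by a response within a deadline," run over a finite event trace:

```python
def make_response_observer(trigger, response, deadline):
    """Build a runtime observer for the property "every `trigger` event is
    followed by a `response` event within `deadline` steps".

    Returns a function mapping an event trace (list of labels) to True if
    no violation was observed, or False if some trigger went unanswered.
    """
    def observe(trace):
        pending = []  # steps remaining for each outstanding trigger
        for event in trace:
            pending = [d - 1 for d in pending]
            if any(d < 0 for d in pending):
                return False       # a trigger exceeded its deadline
            if event == response:
                pending = []       # all outstanding triggers answered
            if event == trigger:
                pending.append(deadline)
        # Obligations still open at trace end are inconclusive, not violations.
        return True
    return observe
```

In runtime verification, such an observer would be generated automatically from the formal specification rather than written by hand.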

    References:


  7. Checking view consistency w.r.t. temporal abstractions [Tripakis].

    Systems engineering typically involves multiple stakeholders and design teams working on the same project. These teams come from different disciplines and each have their own viewpoint. They each use different modeling languages and tools to capture their design. A key problem is how to ensure view consistency, namely, that the various models produced by the various teams do not contradict each other. The problem is challenging because the different models/views "live in different worlds" and are hard to relate.

    In recent work [paper by Reineke-Tripakis, below] a formal definition of the view consistency problem has been proposed, and an algorithm to check consistency for discrete systems was given. This algorithm applies to the case where views are projections of (parts of) system behavior. For instance, the system may have three state variables x,y,z whereas a view of the system may involve only two of those variables, x,y. Projection is one possible abstraction, but there are others. In particular, so-called round abstraction [reactive modules paper by Alur-Henzinger, below] is a temporal abstraction, which changes the time behavior of a system. Essentially, some micro-steps are forgotten, and only some macro-steps are preserved. The goal of this project is to study the view consistency problem with respect to temporal abstractions, instead of projections. Is the problem decidable? What is its computational complexity? What are efficient algorithms? Etc.
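For the projection case, the consistency check can be sketched on finite behavior sets (a simplified illustration, not the algorithm of the paper): two projection views are consistent iff their natural join re-projects exactly onto each view, the join itself being a witness system:

```python
def project(behaviors, all_vars, view_vars):
    """Project a set of behaviors (tuples over all_vars) onto view_vars."""
    idx = [all_vars.index(v) for v in view_vars]
    return {tuple(b[i] for i in idx) for b in behaviors}

def views_consistent(view1, vars1, view2, vars2):
    """Check consistency of two projection views over finite behavior sets.

    Consistent means some global behavior set projects exactly onto both
    views; for projections this holds iff the natural join of the views
    re-projects onto each original view.
    """
    shared = [v for v in vars1 if v in vars2]
    join_vars = vars1 + [v for v in vars2 if v not in vars1]
    join = set()
    for t1 in view1:
        for t2 in view2:
            # Merge tuples that agree on the shared variables.
            if all(t1[vars1.index(v)] == t2[vars2.index(v)] for v in shared):
                merged = dict(zip(vars1, t1))
                merged.update(zip(vars2, t2))
                join.add(tuple(merged[v] for v in join_vars))
    return (project(join, join_vars, vars1) == view1 and
            project(join, join_vars, vars2) == view2)
```

For temporal abstractions such as round abstraction, no such simple join argument applies, which is exactly what makes the project's question interesting.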

    References:


  8. Bridging the gap between reactive synthesis and supervisory control: partial observability case [Tripakis].

    Computer-aided synthesis deals with automatically generating (synthesizing) controllers, programs, or other implementation artifacts (the how) from high-level specifications such as temporal logic formulas (the what). Two separate research communities have developed in the past, both working on controller synthesis problems: the community of formal methods and reactive synthesis and that of discrete-event systems and supervisory control. The goal of this project is to bridge the gap between these two communities, by showing examples of problems from one community which can be reduced to problems of the other. The project will build upon the initial work described in the paper below, extending this work from the full observability setting to a partial observability setting.
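A common denominator of reactive synthesis and supervisory control is solving games on graphs. As a minimal sketch (state and move names are illustrative; neither community's actual algorithms are this simple), here is the classic fixpoint computation of the controller's winning region in a turn-based safety game:

```python
def winning_region(safe, ctrl_states, moves):
    """Solve a turn-based safety game by iteratively pruning losing states.

    moves[s] lists the successors of state s. States in ctrl_states are
    controller-owned (the controller needs SOME move into the winning
    region); all others are environment-owned (ALL moves must stay in the
    winning region). A deadlocked controller state loses; a deadlocked
    environment state wins, by convention. Returns the set of states from
    which the controller can keep the play inside `safe` forever.
    """
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            succ = moves.get(s, [])
            ok = (any(t in win for t in succ) if s in ctrl_states
                  else all(t in win for t in succ))
            if not ok:
                win.discard(s)
                changed = True
    return win
```

Under partial observability, the game must instead be played over belief states, which is where the project's difficulty lies.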

    References:

  9. Synthesis of Interfaces for Enforcement of Opacity Security Properties [Yi-Chin Wu].

    Opacity is a confidentiality property that arises in the analysis of security in networked systems. It characterizes whether a "secret" of the system can be inferred by an intruder who knows the system model and partially observes the system behavior. The secret could, for instance, be a set of system states that should not be revealed to the intruder. The system is said to be opaque if, whenever the secret has occurred (e.g., the system has reached a secret state), there exists another "non-secret" behavior that is observationally equivalent from the intruder's point of view.

    When opacity is violated in a given system, it is desirable to apply a control mechanism that enforces the property. One such mechanism is to design an interface that inserts fictitious events into the system output at runtime so that the modified output never reveals the secret. You can think of the interface as an add-on for some existing software application. In this project, we will first investigate how to characterize and verify opacity for a given system modeled as a finite-state automaton. Then, the project will focus on the synthesis of an opacity-enforcing interface. That is, given a system for which opacity verification returns false, synthesize an interface that enforces opacity. We will investigate how to solve for a valid insertion function using a symbolic approach. See the work by Wu and Lafortune for the specification of such interfaces and a synthesis method based on explicitly constructing the state space.
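The verification half of the project can be sketched with the standard observer (subset) construction: the system is current-state opaque iff no reachable intruder estimate lies entirely inside the secret set. This toy enumerative version is only a starting point; the project targets symbolic methods and insertion-function synthesis:

```python
def is_current_state_opaque(init, delta, observable, secret):
    """Check current-state opacity of a finite automaton.

    delta maps (state, event) -> set of next states; events not in
    `observable` are invisible to the intruder. Opaque iff no reachable
    observer estimate (set of states consistent with the observation so
    far) is entirely contained in `secret`.
    """
    def uclosure(states):
        # States reachable via unobservable events only.
        stack, closed = list(states), set(states)
        while stack:
            s = stack.pop()
            for (q, e), nxt in delta.items():
                if q == s and e not in observable:
                    for n in nxt:
                        if n not in closed:
                            closed.add(n)
                            stack.append(n)
        return frozenset(closed)

    start = uclosure({init})
    seen, stack = {start}, [start]
    while stack:
        est = stack.pop()
        if est <= secret:
            return False  # intruder is certain the system is in a secret state
        for e in observable:
            nxt = set()
            for s in est:
                nxt |= delta.get((s, e), set())
            if nxt:
                nxt = uclosure(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return True
```

An insertion function would then be synthesized so that every observer estimate reachable through the (modified) output avoids the all-secret case.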

    References:


  10. Synthesizing an elevator controller from scenarios and requirements [Tripakis].

    Imagine you had to design the controller for an elevator in a building. The controller controls one or more cars servicing multiple floors in the building. Instead of designing such a controller by hand, this project seeks to use computer-aided synthesis techniques for that purpose. In particular, we will use techniques developed recently for synthesis from scenarios and requirements, in the context of NSF project ExCAPE. These techniques combine high-level requirements (say in temporal logic) together with input scenarios to make the synthesis problem tractable.

    Reference:

    • Alur et al., Synthesizing Finite-state Protocols from Scenarios and Requirements, in Haifa Verification Conference 2014 (to appear). Pre-print.

  11. Parallel Co-Simulation of Cyber-Physical Systems [Tripakis].

    FMI (Functional Mockup Interface) is an evolving standard for composing model components designed using distinct modeling tools. FMI defines a standard API which model components (called FMUs) must implement. Then, a tool (e.g., Simulink) can import and execute FMUs generated from models designed in another tool (e.g., Modelica). In particular, this standard enables co-simulation of models coming from different tools (e.g., one FMU from Simulink, another from Modelica, a third from Rhapsody, etc.). A recent paper (2nd ref. tech. report below, paper at EMSOFT 2013) has proposed centralized co-simulation algorithms, where the algorithm is a sequential program. The goal of this project is to study parallel co-simulation techniques for distributed computing platforms or multicore machines. The idea is that each FMU runs on a separate computer, as independently from the other FMUs as possible, and synchronizes with the co-simulation algorithm only when necessary (e.g., for input/output exchange). Handling multiple time-scales (some FMUs being updated continuously, others only at relatively rare events) will be key to achieve good parallel performance.
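Before parallelizing, it helps to see what a sequential co-simulation master does. The sketch below is a toy, non-iterative master stepping two mock components with a fixed communication step; the class and method names are invented stand-ins, not the real FMI C API:

```python
class MockFMU:
    """Toy stand-in for an FMU: one state variable advanced by a
    user-supplied update function of (state, input, step_size)."""
    def __init__(self, state, update):
        self.state, self.update = state, update
    def do_step(self, u, h):
        self.state = self.update(self.state, u, h)
    def get_output(self):
        return self.state

def cosimulate(fmu_a, fmu_b, h, n_steps):
    """Non-iterative co-simulation master: at each communication point,
    exchange outputs, then advance both FMUs by step size h."""
    trace = []
    for _ in range(n_steps):
        ya, yb = fmu_a.get_output(), fmu_b.get_output()
        fmu_a.do_step(yb, h)   # A's input is B's last output, and vice versa
        fmu_b.do_step(ya, h)
        trace.append((fmu_a.get_output(), fmu_b.get_output()))
    return trace

# Two coupled first-order systems dx/dt = -x + u, each integrated with
# forward Euler inside its "FMU"; their states converge toward each other.
a = MockFMU(1.0, lambda x, u, h: x + h * (-x + u))
b = MockFMU(0.0, lambda x, u, h: x + h * (-x + u))
trace = cosimulate(a, b, 0.1, 50)
```

A parallel master would let each FMU advance independently between communication points and synchronize only at input/output exchanges, which is exactly the design space this project explores.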

    References:

  12. Modeling and analyzing Google Spanner [Eidson].

    Spanner is Google's globally distributed database. See recent paper (OSDI 2012) for more details. In this project you will identify relevant aspects of Spanner that you could model, and corresponding questions to be answered (correctness, performance, ...). You will choose one or more formalisms and concrete languages/tools to model these aspects and analyze the properties.

  13. Resource-Aware Verification [Tripakis].

    Most model-checking algorithms are based on exploring the reachable state-space of a model, which requires storing that state-space in some form. This results in state-explosion when the state-space becomes too big to store. The goal of this project is to study resource-aware verification methods, where the model-checking algorithm behaves differently according to the available computing resources (memory, and available time). For example, the algorithm may decide to delete part of the visited state-space, to free up memory when the latter is running out.
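A naive instance of this idea (a sketch only; realistic approaches are far more careful about which states to forget) is a reachability search that evicts visited states once a memory budget is exceeded, trading re-exploration time for memory while staying sound for bug-finding:

```python
import random

def bounded_reach(init, successors, is_bad, max_states, max_steps=100000):
    """Reachability checking under a memory budget.

    When the visited set exceeds max_states, a random entry is evicted.
    Eviction may cause states to be re-explored (time for memory), but a
    reported bad state is always genuinely reachable. max_steps bounds
    total work, since eviction can make the search revisit states.
    """
    visited, frontier = {init}, [init]
    steps = 0
    while frontier and steps < max_steps:
        s = frontier.pop()
        steps += 1
        if is_bad(s):
            return s
        for t in successors(s):
            if t not in visited:
                if len(visited) >= max_states:
                    visited.discard(random.choice(list(visited)))  # evict
                visited.add(t)
                frontier.append(t)
    return None
```

A resource-aware checker would additionally adapt the eviction policy and search order to the available memory and time budgets.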

    References:

  14. Retiming for discrete-event models [Derler].

    As Edward Lee has argued, programming cyber-physical systems requires languages where time is a first-class citizen. The Ptides model aims to address this need. Ptides is based on a timed discrete-event formalism where actors communicate by exchanging timestamped messages. The distributed implementation of Ptides raises a set of interesting problems similar to retiming of digital circuits. Examples of such problems are described in the 2nd reference below. The goal of this project is to develop retiming algorithms for Ptides.

    References:

  15. Scalable Formal Verification of Booleanized Analog Systems [Karthik Aadithya, Jaijeet Roychowdhury].

    The ABCD (Accurate Booleanization of Continuous Systems) project has developed algorithms to approximate analog systems using purely Boolean representations. These Boolean approximations can, in principle, be formally verified by tools such as ABC. However, the structure of the Boolean systems created by ABCD tends to challenge existing verification tools and techniques, which were developed and optimized for digital circuit applications. This project will explore why established verification tools tend to be ineffective for ABCD-generated models and, if possible, come up with and demonstrate novel verification approaches that are more effective.

    References:

  16. Abstracting models of neuronal spiking [Tianshi Wang, Karthik Aadithya, Jaijeet Roychowdhury].

    Neurons feature an important phenomenon known as spiking, which can be reproduced in continuous-time dynamical models such as Hodgkin-Huxley and FitzHugh-Nagumo, which use systems of nonlinear differential equations. For scalability reasons, it is desirable to use simpler models (e.g., hybrid ones involving simple LTI systems coupled with small Boolean state machines, such as the integrate-and-fire model) that reproduce the spiking behavior of the more accurate continuous-time models across a range of inputs and parameters. This project will explore this question and attempt to devise and demonstrate algorithmic techniques for finding good hybrid abstractions that are derived from, and consistent with, underlying continuous-time models of spiking neurons.

  17. Grammatical inference techniques for controller synthesis [Tripakis, Seshia].

    Computer-aided synthesis deals with automatically generating (synthesizing) controllers, programs, or other implementation artifacts (the how) from high-level specifications such as temporal logic formulas (the what). A synthesis technique was proposed recently [HVC paper below] which takes as input, in addition to the traditional formal requirements (e.g., LTL formulas), a set of example scenarios. The goal of this project is to investigate whether grammatical inference techniques (automata learning) can be used to solve this type of synthesis problem.

    References:

    • R. Alur, M. Martin, M. Raghothaman, C. Stergiou, S. Tripakis, and A. Udupa. Synthesizing Finite-state Protocols from Scenarios and Requirements. HVC 2014 (to appear). Pre-print.
    • C. de la Higuera. Grammatical Inference: Learning Automata and Grammars. Cambridge University Press, 2010.

  18. Learning timed systems [Seshia, Tripakis].

    Automatically deriving a model from examples has numerous applications, e.g., mining requirements [HSCC 2013 paper below]. The goal of this project is to develop algorithms for learning timed models such as timed automata. Previous techniques can only learn a restricted class of such models [TCS 2010 paper below]. Prof. Seshia has some new ideas to try out!

    References:

  19. Extending an automated grading system for partial credit [Seshia, Tripakis].
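As a concrete example of the simpler end of this spectrum, a leaky integrate-and-fire neuron (the hybrid abstraction mentioned above) is only a few lines; the parameter values here are arbitrary illustrations:

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, a hybrid abstraction of
    Hodgkin-Huxley-style models.

    The membrane voltage follows dv/dt = (-v + I) / tau (forward Euler
    with step dt) and emits a spike, then resets, when it crosses
    v_thresh. Returns (voltage_trace, spike_times).
    """
    v, trace, spikes = v_reset, [], []
    for k, current in enumerate(input_current):
        v += dt * (-v + current) / tau
        if v >= v_thresh:
            spikes.append(k * dt)  # record the spike time
            v = v_reset            # discrete reset: the "Boolean" part
        trace.append(v)
    return trace, spikes
```

The project's question is how to choose such an abstraction's structure and parameters automatically so that its spike trains match those of the underlying continuous-time model over a range of inputs.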

    A method for automatically grading embedded robotic controllers was proposed recently [EMSOFT 2014 paper below]. The auto-grader is based on a notion of constrained parameterized tests, based on Signal Temporal Logic (STL), that capture symptoms pointing to success or causes of failure in traces obtained from a realistic simulator. The method cannot satisfactorily distinguish controllers that almost meet the goals (and therefore should get partial credit) from those that do not meet the goals at all (and therefore should get no credit). The goal of this project is to devise methods that make it possible to distinguish such cases and to assign partial credit correctly and fairly.
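One natural ingredient is the quantitative robustness degree of STL, which measures by how much a trace satisfies or violates a property. Here is a toy sketch for the property "always x < threshold", with an invented linear map from robustness to a grade (this grading map is an example of the idea, not the EMSOFT 2014 method):

```python
def robustness_always_below(trace, threshold):
    """Quantitative robustness of the STL-style property
    "always x < threshold" over a finite trace: the worst-case margin
    threshold - x(t). Positive = satisfied with slack; negative =
    violated, with magnitude saying by how much."""
    return min(threshold - x for x in trace)

def partial_credit(trace, threshold, full_margin=0.0, zero_margin=-1.0):
    """Map robustness to a grade in [0, 1]: full credit at or above
    full_margin, no credit at or below zero_margin, and linear in
    between, so near-misses earn partial credit."""
    rho = robustness_always_below(trace, threshold)
    if rho >= full_margin:
        return 1.0
    if rho <= zero_margin:
        return 0.0
    return (rho - zero_margin) / (full_margin - zero_margin)
```

A controller that barely overshoots the threshold thus scores between 0 and 1, instead of being graded the same as one that misses the goal entirely.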

    References:




Guidelines for Project Presentations:
  • Time your presentation for 15 minutes. It's very important to finish on time. Practice your presentation several times.
  • Topics to cover:
    1. Overview of the project and your approach
    2. Main technical challenges -- in algorithm design or implementation -- and how you tackled them. Focus on the top 3, at most one slide on each. Include any cool algorithms or insights that you came up with. Highlight what is new in comparison with related work in the literature.
    3. Hardware/software platform and tools used
    4. Demo (optional) -- could be interleaved with the presentation or performed right after
    5. Finish with a 1-slide summary: what did you achieve, what more can be done?

Guidelines for Project Reports:

  • One report per team. Deadline: Monday, Dec 16 at 12:05pm (noon). Submit on bSpace.
  • Recommended length: 4-6 pages, 11 pt font
  • Topics to cover:
    1. Introduction & Problem Definition
    2. Outline of your Approach and how it compares to related work
    3. Algorithms and Models Devised
    4. Major Technical Challenges in the Design/Implementation and how you tackled them
    5. Summary of Results -- what worked, what didn't, why, ideas for future work
    6. A one-paragraph narrative on the roles each team member played in the project
    7. How the project applied one or more of the course topics.
    8. Feedback: List of topics covered in class that came in handy; any topics that weren't covered that would have been useful
  • Illustrate algorithms and models with diagrams, and results with graphs, screenshots, pictures of your hardware, etc.
  • Keep the report short and precise; avoid lengthy and verbose descriptions!

Last modified: Sep 5, 2014
Contact 
©2002-2018 U.C. Regents