Forum
Presentations from Previous Semesters

Fall 2012 Seminars
David Broman, 11 Jan 2013
Last updated: 4 Sep 2013

Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Fall 2012

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays from 4.10-5.00 p.m. in the DOP Center Classroom, 540 Cory Hall or in the open area in the DOP Center.

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which includes the chesslocal workgroup.

Information on the seminar series might be useful for potential speakers. If you have any questions about DREAMS, please contact David Broman.

Past seminars of this semester are available here. Seminars from previous semesters can be found here.

Past Seminars


Fast-Lipschitz Optimization

Sep 11, 2012, 4.10-5pm, Carlo Fischione, KTH Royal Institute of Technology, Sweden

[Slides]

Abstract: In many optimization problems, decision variables must be computed by algorithms that need to be fast, simple, and robust to errors and noise, in both centralized and distributed set-ups. This occurs, for example, in contract-based design, sensor networks, smart grids, water distribution, and vehicular networks. In this seminar, a new simple optimization theory, named Fast-Lipschitz optimization, is presented for a novel class of both convex and non-convex scalar and multi-objective optimization problems that are pervasive in the systems mentioned above. Fast-Lipschitz optimization can be applied to both centralized and distributed optimization. Fast-Lipschitz optimization solvers exhibit a low computational and communication complexity when compared to existing solution methods. In particular, compared to traditional Lagrangian methods, which often converge linearly, the convergence time of centralized Fast-Lipschitz algorithms is superlinear. Distributed Fast-Lipschitz algorithms converge fast, as opposed to traditional Lagrangian decomposition and parallelization methods, which generally converge slowly and at the price of many rounds of message passing among the nodes. In both cases, the computational complexity is much lower than that of traditional Lagrangian methods. Fast-Lipschitz optimization is then illustrated by distributed estimation and detection applications in wireless sensor networks.
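The "fast" in Fast-Lipschitz refers to the fact that, when the qualifying conditions hold, the optimum is the unique fixed point of the constraint functions and can be computed by plain iteration rather than Lagrangian machinery. The Python sketch below illustrates only that fixed-point flavor; the map f is a toy contraction chosen for illustration, not one of the problem classes from the talk:

```python
import numpy as np

def fixed_point_solve(f, x0, tol=1e-9, max_iter=1000):
    """Iterate x <- f(x) until the update falls below tol.

    For a contraction (Lipschitz constant < 1) this converges
    geometrically, and each iteration is cheap and easy to
    distribute -- the property the talk exploits.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = f(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contractive map standing in for the constraint functions.
f = lambda x: 0.5 * np.cos(x) + 0.1
x_star, iters = fixed_point_solve(f, np.zeros(3))
print(x_star, iters)
```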

Bio: Dr. Carlo Fischione is a tenured Associate Professor at KTH Royal Institute of Technology, Electrical Engineering and ACCESS Linnaeus Center, Automatic Control Lab, Stockholm, Sweden. He received the Ph.D. degree in Electrical and Information Engineering in May 2005 from the University of L'Aquila, Italy, and the Dr.Eng. degree in Electronic Engineering (Laurea, Summa cum Laude, 5/5 years) in April 2001 from the same university. He has held research positions at the University of California, Berkeley (2004-2005, Visiting Scholar, and 2007-2008, Research Associate) and the Royal Institute of Technology, Stockholm, Sweden (2005-2007, Research Associate). His research interests include optimization and parallel computation with applications to wireless sensor networks, networked control systems, and wireless networks. He has co-authored over 80 publications, including a book, book chapters, international journal and conference papers, and an international patent. He has received numerous awards, including the best paper award of the IEEE Transactions on Industrial Informatics in 2007, the best paper awards at the IEEE International Conference on Mobile Ad-hoc and Sensor Systems in 2005 and 2009 (IEEE MASS 2005 and IEEE MASS 2009), the Best Business Idea award from VentureCup East Sweden in 2010, the "Ferdinando Filauro" award from the University of L'Aquila, Italy, in 2003, the "Higher Education" award from the Abruzzo Region Government, Italy, in 2004, the Junior Research award from the Swedish Research Council in 2007, and the Silver Ear of Wheat award in history from the Municipality of Tornimparte, Italy, in 2012. He has chaired or served as a technical program committee member of several international conferences and serves as a referee for technical journals. He has also consulted for numerous technology companies, including the Berkeley Wireless Sensor Network Lab, Ericsson Research, Synopsys, and United Technologies Research Center. He is a Member of the IEEE (the Institute of Electrical and Electronics Engineers) and SIAM (the Society for Industrial and Applied Mathematics), and an Ordinary Member of DASP (the history academy Deputazione Abruzzese di Storia Patria).


User interface modelling - Model-based UI design

Sep 18, 2012, 4.10-5pm, Hallvard Traetteberg, Norwegian Univ. of Science and Technology (NTNU), Trondheim, Norway

[Slides]

Abstract: User interface modelling is an established cross-disciplinary field, combining elements from Human-Computer Interaction (HCI), Software Engineering (SE), and Information Systems (IS). This talk will present a conceptual overview of the field based on a classification framework developed in my thesis. Important work will be discussed in the context of this framework. Some time will be devoted to my own dialog modelling language, Diamodl, since it is (coincidentally) based on an actor model similar to Ptolemy's (which is why I'm here), and to how I believe it can be applied in the context of internet-based systems.

Bio: Hallvard Traetteberg is an Associate Professor at the Norwegian Univ. of Science and Technology (NTNU) in Trondheim, Norway, with a PhD in Information Systems. His research interests are model driven engineering in general, with a focus on user interface modelling and model-based user interface design. He has developed a dialog modelling language called Diamodl and has experience building both graphical and textual syntaxes for it, as well as a runtime, mostly based on Eclipse.


Enclosing Hybrid Behavior

Oct 17, 2012, 4.10-5pm, Walid Taha, Halmstad University, Sweden, and Rice University, USA
(Joint work with Michal Konecny, Jan Duracz, and Aaron Ames)

[Slides] [Related paper]

Abstract: Rigorous simulation of hybrid systems relies critically on having a semantics that constructs enclosures. Edalat and Pattinson's work on the domain-theoretic semantics of hybrid systems almost provides what is needed, with two exceptions. First, domain-theoretic methods leave many operational concerns implicit. As a result, the feasibility of practical implementations is not obvious. For example, their semantics appears to rely on repeated interval splitting for state space variables. This can lead to exponential blow up in the cost of the computation. Second, common and even simple hybrid systems exhibit Zeno behaviors. Such behaviors are a practical impediment because they make simulators loop indefinitely. This is in part due to the fact that existing semantics for hybrid systems generally assume that the system is non-Zeno.
The feasibility of reasonable implementations is addressed by specifying the semantics algorithmically. We observe that the amount of interval splitting can be influenced by the representation of function enclosures. Parameterizing the semantics with respect to enclosure representation provides a precise specification of the functionality needed from them, and facilitates studying their performance characteristics. For example, we find that non-constant enclosure representations can alleviate the need for interval splitting on dependent variables.
We address the feasibility of dealing with Zeno systems by taking a fresh look at event detection and localization. The key insight is that computing enclosures for hybrid behaviors over intervals containing multiple events does not necessarily require separating these events in time, even when the number of events is unbounded. In contrast to current methods for dealing with Zeno behaviors, this semantics does not require reformulating the hybrid system model specifically to enable a transition to a post-Zeno state.
The new semantics does not sacrifice the key qualities of the original work, namely, convergence on separable systems.
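For readers unfamiliar with enclosures: an interval-arithmetic integrator propagates sets rather than points, so the result bounds every iterate starting anywhere in the initial set. The toy Python sketch below does this for x' = -x with a naive interval Euler step; it omits the remainder term a rigorous method needs, and it also exhibits the width growth of constant (interval) enclosures that motivates the non-constant representations discussed above:

```python
from typing import NamedTuple

class Interval(NamedTuple):
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def scale(self, c):
        a, b = c * self.lo, c * self.hi
        return Interval(min(a, b), max(a, b))

def euler_enclosure(x0, h, steps):
    """Naive interval Euler for x' = -x: X <- X + (-h) * X.

    The result encloses all Euler iterates starting anywhere in x0
    (a rigorous ODE enclosure would add a remainder term covering
    the truncation error). Note the width grows by (1 + h) per step
    -- the blow-up better enclosure representations try to avoid.
    """
    x = x0
    for _ in range(steps):
        x = x + x.scale(-h)
    return x

print(euler_enclosure(Interval(0.9, 1.1), h=0.01, steps=100))
```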

Bio: Walid Taha is a Professor of Computer Science at Halmstad University. He is interested in the design, semantics, and implementation of programming and hardware description languages. His current research focus is on modeling, simulation, and verification of cyberphysical systems, and in particular the Acumen modeling language.
Taha is credited with developing the idea of multi-stage programming (or "staging" for short), and is the designer of several systems based on it, including MetaOCaml, ConCoqtion, Java Mint, and the Verilog Preprocessor. He contributed to several other programming languages innovations, including statically typed macros, tag elimination, tagless staged interpreters, event-driven functional reactive programming (E-FRP), the notion of exact software design, and gradual typing. Broadly construed, his research interests include cyberphysical systems, software engineering, programming languages, and domain-specific languages.
Taha was the principal investigator on a number of research awards and contracts from the National Science Foundation (NSF), Semi-conductor Research Consortium (SRC), and Texas Advanced Technology Program (ATP). He received an NSF CAREER award to develop Java Mint. He founded the ACM Conference on Generative Programming and Component Engineering (GPCE), the IFIP Working Group on Program Generation (WG 2.11), and the Middle Earth Programming Languages Seminar (MEPLS). Taha chaired the 2009 IFIP Working Conference on Domain Specific Languages.
According to Google Scholar, Taha's publications have over 2,400 citations, and his h-index is 26. Prof. Taha holds an Adjunct Professor position at Rice University.


Time and Schedulability analysis of Stateflow models

Oct 23, 2012, 4.10-5pm, Marco Di Natale, Scuola Superiore Sant'Anna of Pisa, Italy.

[Slides]

Abstract: Model-based design of embedded systems using Synchronous Reactive (SR) models is among the best practices for software development in the automotive and aeronautics industries. The correct implementation of an SR model must guarantee the synchronous assumption, that is, all the system reactions complete before the next event. This assumption can be verified using schedulability analysis, but the analysis can be quite challenging when the system also contains blocks implementing finite state machines, as in modern modeling tools like Simulink and SCADE.
In this talk, we discuss the schedulability analysis of such systems, including the applicability of traditional task analysis methods and an algorithmic solution to compute the exact demand and request bound functions.
In addition, we define conditions for computing these functions using a periodic recurrent term, even when there is no cyclic recurrent behavior in the model.
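As a point of reference, for ordinary independent periodic tasks the demand bound function has a closed form, and EDF feasibility on one processor amounts to checking dbf(t) <= t at the relevant points. A minimal Python sketch of that classical formula (not the Stateflow-aware analysis from the talk):

```python
from math import floor

def dbf(t, tasks):
    """Demand bound function for independent periodic tasks.

    tasks: iterable of (C, D, T) = (WCET, relative deadline, period).
    dbf(t) is the maximum execution demand that must both arrive and
    complete inside any window of length t; EDF on one processor is
    feasible iff dbf(t) <= t for all t.
    """
    return sum(max(0, floor((t - D) / T) + 1) * C for C, D, T in tasks)

tasks = [(1, 4, 4), (2, 6, 8)]  # two tasks: (C, D, T)
for t in (4, 6, 8, 12, 16):
    print(t, dbf(t, tasks))
```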

Bio: Prof. Marco Di Natale is an IEEE Senior Member and Associate Professor at the Scuola Superiore Sant'Anna of Pisa, Italy, where he was Director of the Real-Time Systems (ReTiS) Lab. He received his PhD from Scuola Superiore Sant'Anna, was a visiting researcher at the University of California, Berkeley in 2006 and 2008-2009, was principal investigator for architecture exploration and selection at General Motors R&D in 2006 and 2007, and is currently a visiting fellow at the United Technologies Research Center. He has been a researcher in real-time and embedded systems for more than 15 years, is the author of more than 130 papers, and has won five best paper awards and two presentation awards. He is also a member of the editorial board of the IEEE Transactions on Industrial Informatics and chair of the embedded systems track of the IEEE Industrial Electronics Society.


Beyond the Hill of Multicores lies the Valley of Accelerators

Oct 30, 2012, 4.10-5pm, Aviral Shrivastava, Arizona State University, USA

[Slides]

Abstract: The power wall has resulted in a sharp turn in processor designs, and they irrevocably went multi-core. Multi-cores are good because they promise higher potential throughput (never mind the actual performance of your applications). This is because the cores can be made simpler and run at lower voltage, resulting in much more power-efficient operation. Even though the performance of a single core is much reduced, the total possible throughput of the system scales with the number of cores. However, the excitement of multi-core architectures will only last so long. This is not only because the benefits of voltage scaling will diminish with decreasing voltage, but also because after some point, making a core simpler will only be detrimental and may actually reduce power-efficiency. What next? How do we further improve power-efficiency?
Beyond the hill of multi-cores lies the valley of accelerators. Hardware accelerators (e.g., Intel SSE), software accelerators (e.g., VLIW accelerators), reconfigurable accelerators (e.g., FPGAs), and programmable accelerators (CGRAs) are some of the foreseeable solutions that can further improve the power-efficiency of computation. Among these, we find CGRAs, or Coarse-Grained Reconfigurable Arrays, a very promising technology. They are only slightly reconfigurable (and therefore close to hardware), but they are programmable (and therefore usable as more general-purpose accelerators). As a result, they can provide power-efficiencies of up to 100 GOps/W while being relatively general purpose. Although very promising, several challenges remain in compilation for CGRAs, especially because they have very little dynamism in the architecture, and almost everything (including control) is statically determined. In this talk, I will present our recent research in developing compiler technology to enable CGRAs as general-purpose accelerators.
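Since nearly everything on a CGRA is statically determined, the compiler must decide, for every operation, both when it executes and on which processing element, honoring data dependences. The toy Python sketch below does a greedy list-scheduling placement onto a small PE array; it deliberately ignores operand routing, which is the genuinely hard part of real CGRA mapping (typically handled with techniques such as modulo scheduling):

```python
def schedule(dfg, n_pes=4):
    """Greedy placement of a dataflow graph onto a CGRA-like PE array.

    dfg: {node: [predecessor nodes]}. Returns {node: (cycle, pe)},
    scheduling a node only after all its producers have finished.
    Routing between PEs is ignored -- a real mapper must also place
    values so that results can physically reach their consumers.
    """
    placed, cycle = {}, 0
    while len(placed) < len(dfg):
        ready = [n for n in dfg if n not in placed
                 and all(placed.get(p, (cycle,))[0] < cycle for p in dfg[n])]
        for pe, n in enumerate(ready[:n_pes]):
            placed[n] = (cycle, pe)
        cycle += 1
    return placed

# a*b + c*d as a tiny dataflow graph
print(schedule({"mul1": [], "mul2": [], "add": ["mul1", "mul2"]}))
```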

Bio: Prof. Aviral Shrivastava is an Associate Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University, where he has established and heads the Compiler and Microarchitecture Labs (CML) (http://aviral.lab.asu.edu/). He received his Ph.D. and Master's degrees in Information and Computer Science from the University of California, Irvine, and his Bachelor's degree in Computer Science and Engineering from the Indian Institute of Technology, Delhi. He is a 2011 NSF CAREER Award recipient and received the 2012 Outstanding Junior Researcher award in CSE at ASU.
His research lies at the intersection of compilers and architectures of embedded and multi-core systems, with the goal of improving power, performance, temperature, energy, reliability, and robustness. His research is funded by the NSF and several companies, including Microsoft, Raytheon Missile Systems, Intel, and Nvidia. He serves on the organizing and program committees of several premier embedded systems conferences, including ISLPED, CODES+ISSS, CASES, and LCTES, and regularly serves on NSF and DOE review panels. He is currently visiting faculty in the EECS department at the University of California, Berkeley.


Closing the loop with Medical Cyber-Physical Systems

Nov 2, 2012, 2.10-3pm, Rahul Mangharam, University of Pennsylvania, USA

[Slides]

Abstract: The design of bug-free and safe medical device software is challenging, especially in complex implantable devices that control and actuate organs whose response is not fully understood. Safety recalls of pacemakers and implantable cardioverter defibrillators between 1990 and 2000 affected over 600,000 devices. Of these, 200,000, or 41%, were due to firmware issues (i.e., software), and such issues continue to increase in frequency. There is currently no formal methodology or open experimental platform to test and verify the correct operation of medical device software within the closed-loop context of the patient. In this talk I will describe our efforts to develop the foundations of modeling, synthesis, and development of verified medical device software and systems from verified closed-loop models of the device and organs. The research spans both implantable medical devices, such as cardiac pacemakers, and physiological control systems, such as drug infusion pumps, which involve multiple networked medical systems. In both cases, the devices are physically connected to the body and exert direct control over the physiology and safety of the patient. With the goal of developing a tool-chain for certifiable software for medical devices, I will walk through (a) formal modeling of the heart and pacemaker in timed automata, (b) verification of the closed-loop system, (c) automatic model translation from UPPAAL to Stateflow for simulation-based testing, and (d) automatic code generation for platform-level testing of the heart and real pacemakers.
If time permits, I will describe our investigations in distributed wireless control networks and green scheduling for energy-efficient building automation.
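To give a flavor of the kind of timing requirement such models capture: a basic pacemaker obligation is the lower rate limit, i.e., if no intrinsic ventricular beat is sensed within TLRI milliseconds of the previous ventricular event, the device must pace. The Python sketch below simulates just that one timer; it is an illustrative toy with a made-up TLRI value, not the verified UPPAAL model from the talk:

```python
TLRI = 1000  # lower rate interval in ms (illustrative value)

def pace_times(senses, horizon):
    """senses: sorted times (ms) of intrinsic ventricular beats.
    Returns the times at which the device must deliver a pace."""
    paces, last = [], 0
    while last + TLRI <= horizon:
        window = [s for s in senses if last < s <= last + TLRI]
        if window:
            last = window[0]      # sensed beat resets the LRI timer
        else:
            last += TLRI          # timer expired without a sense: pace
            paces.append(last)
    return paces

print(pace_times([300, 800, 2500], horizon=4000))  # -> [1800, 3500]
```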

Bio: Rahul Mangharam is the Stephen J. Angello Chair and Assistant Professor in the Dept. of Electrical & Systems Engineering and Dept. of Computer & Information Science at the University of Pennsylvania. He directs the Real-Time and Embedded Systems Lab at Penn. His interests are in real-time scheduling algorithms for networked embedded systems, with applications in energy-efficient buildings, automotive systems, medical devices, and industrial control networks. His group has won several awards, including at IPSN 2012 and RTAS 2012, the World Embedded Programming Competition 2010, the Honeywell Industrial Wireless Award 2011, the Google Zeitgeist Award 2011, the Intel Innovators Award 2012, the Intel Early Faculty Honor 2012, NAE US Frontiers 2012, and Accenture Innovation Jockeys 2012.
He received his Ph.D. in Electrical & Computer Engineering from Carnegie Mellon University where he also received his MS and BS. In 2002, he was a member of technical staff in the Ultra-Wide Band Wireless Group at Intel Labs. He was an international scholar in the Wireless Systems Group at IMEC, Belgium in 2003. He has worked on ASIC chip design at FORE Systems (1999) and Gigabit Ethernet at Apple Computer Inc. (2000). mLAB - http://mlab.seas.upenn.edu


Computing without Processors

Nov 13, 2012, 4.10-5pm, Satnam Singh, Google, USA

Abstract: The duopoly of computing has up until now been delimited by drawing a line in the sand that defines the instruction set architecture as the hard division between software and hardware. On one side of this contract Intel improved the design of processors, and on the other side of this line Microsoft developed ever more sophisticated software. This cozy relationship is now over, as the distinction between hardware and software is blurred by relentless pressure for performance and for reductions in latency and energy consumption. Increasingly we will be forced to compute with architectures and machines which do not resemble regular processors with a fixed memory hierarchy based on heuristic caching schemes. Other ways to bake all that sand will include the evolution of GPUs and FPGAs to form heterogeneous computing resources which are much better suited to meeting our computing needs than racks of multicore processors. This presentation will highlight some of the programming challenges we face when trying to develop for heterogeneous architectures and identify a few promising lines of attack.

Bio: Prof. Singh works in the Technical Infrastructure division of Google in Mountain View, California and focuses on the configuration management of Google's data-center services. Previously Prof. Singh worked on the design of heterogeneous systems at Microsoft Research in Cambridge UK and on parallel programming techniques at Microsoft's Developer Division in Redmond USA. He has also worked on re-configurable computing and formal verification at Xilinx in San Jose, California and as an academic at the University of Glasgow. He also currently holds a part-time position as the Chair of Reconfigurable Systems at the University of Birmingham.


T-CREST: Time-predictable Multi-Core Architecture for Embedded Systems

Nov 16, 2012, 2.10-3pm, Martin Schoeberl, Technical University of Denmark

[Slides]

Abstract: The T-CREST project is developing a time-predictable system that will simplify the safety argument with respect to maximum execution time while striving to increase performance with multicore processors. T-CREST looks at time-predictable solutions for processors, the memory hierarchy, the on-chip interconnect, and the compiler. T-CREST is a 3-year project funded by the EC; it has just passed its first year. In this talk I will give an overview of the T-CREST project and the individual sub-projects, and present some early results on the on-chip interconnect and the processor research.

Bio: Martin Schoeberl is associate professor at the Technical University of Denmark, at the Department of Informatics and Mathematical Modelling. He completed his PhD at the Vienna University of Technology in 2005 and received the Habilitation in 2010. Martin Schoeberl's research focus is on time-predictable computer architectures and on Java for hard real-time systems. During his PhD studies he developed the time-predictable Java processor JOP, which is now in use in academia and in industrial projects. His research on time-predictable computer architectures is currently embedded in the EC funded project T-CREST.


Synchronous Control and State Machines in Modelica

Nov 27, 2012, 4.10-5pm, Hilding Elmqvist, Dassault Systemes AB, Sweden

[Slides]

Abstract: The scope of Modelica has been extended from a language primarily intended for physical systems modeling to modeling of complete systems by allowing the modeling of control systems and by enabling automatic code generation for embedded systems. Much focus has been given to safe constructs and intuitive and well-defined semantics.
The presentation will describe the fundamental synchronous language primitives introduced for increased correctness of control system implementations. The approach is based on associating clocks with the variable types. Special operators are needed when accessing variables of another clock. This enables clock inference and increases the correctness of the code, since many more checks can be done during translation. Furthermore, the sampling period of a clocked partition needs to be defined in only one place (either in absolute time or relative to other clocked partitions). The principles of partitioning a system model into different clocks (continuous, periodic, non-periodic, multi-rate) will be explained. The new language elements follow the synchronous approach. They are based on the clock calculus and inference system of Lucid Synchrone. However, the Modelica approach also supports multi-rate periodic clocks based on rational arithmetic, as well as non-periodic and event-based clocks.
Parallel and hierarchical state machines will be introduced, including submodels within states. The supporting Modelica library will also be introduced.
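The clock partitioning can be pictured with a small simulation: two partitions with rationally related periods, where the slow partition reads the fast one only at its own ticks and its output is held in between. The Python sketch below is only an analogy for the semantics; it is not Modelica syntax, and the periods and computations are made up:

```python
from fractions import Fraction

FAST = Fraction(1, 100)   # fast partition: period 10 ms
SLOW = Fraction(3, 100)   # slow partition: period 30 ms (rational multiple)

fast_x, slow_y = 0.0, 0.0
t, end = Fraction(0), Fraction(1, 5)
while t <= end:
    fast_x += 1.0                 # fast partition fires on every tick
    if t % SLOW == 0:             # slow partition fires on its own clock
        slow_y = 0.5 * fast_x     # sub-sampled read of the fast value
    # between slow ticks slow_y keeps its last value, i.e. it is "held"
    t += FAST

print(fast_x, slow_y)
```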

Bio: Elmqvist's Ph.D. thesis from the Department of Automatic Control, Lund Institute of Technology, contains the design of a novel object-oriented and equation-based modelling language, Dymola, and algorithms for symbolic model manipulation.
In 1992, Elmqvist founded Dynasim AB, and in 1996 he took the initiative to organize the international effort to design the next-generation object-oriented language for physical modelling, Modelica. Elmqvist is the chief architect of the Multi-Engineering Modelling and Simulation software used in the Dymola Product Line and CATIA Systems DBM, and is responsible for technology within the board of the Modelica Association.


Sensor fusion in dynamical systems - applications and research challenges

Dec 11, 2012, 4.10-5pm, Thomas Schon, Linkoping University, Sweden.

[Slides]

Abstract: Sensor fusion refers to the problem of computing state estimates using measurements from several different, often complementary, sensors. The strategy is explained and (perhaps more importantly) illustrated using four different industrial/research applications, introduced very briefly below. Guided partly by these applications, we will highlight key directions for future research within the area of sensor fusion. Given that the number of available sensors is skyrocketing, this technology is likely to become even more important in the future. The four applications are: 1. Real-time pose estimation and autonomous landing of a helicopter (using inertial sensors and a camera). 2. Pose estimation of a helicopter using an already existing map (a processed version of an aerial photograph of the operational area), inertial sensors, and a camera. 3. Vehicle motion and road surface estimation (using inertial sensors, a steering wheel sensor, and an infrared camera). 4. Indoor pose estimation of a human body (using inertial sensors and ultra-wideband).
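At the heart of applications like these is a recursive Bayesian filter. A minimal illustration of the fusion step, in scalar Kalman form, fusing a dead-reckoned position with a camera fix (all numbers are invented for the example):

```python
def fuse(est, var, z, var_z):
    """One scalar Kalman measurement update: combine the current
    estimate (est, var) with a measurement z of variance var_z."""
    k = var / (var + var_z)                 # Kalman gain
    return est + k * (z - est), (1 - k) * var

pos, var = 10.0, 4.0                         # prior, e.g. from inertial dead reckoning
pos, var = fuse(pos, var, z=9.2, var_z=1.0)  # fuse a camera position fix
print(pos, var)                              # posterior is pulled toward the camera
```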

Bio: Thomas B. Schon is an Associate Professor with the Division of Automatic Control at Linkoping University (Linkoping, Sweden). He received the BSc degree in Business Administration and Economics in Jan. 2001, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001, and the PhD degree in Automatic Control in Feb. 2006, all from Linkoping University. He has held visiting positions at the University of Cambridge (UK) and the University of Newcastle (Australia). He is a Senior Member of the IEEE. He received the best teacher award at the Institute of Technology, Linkoping University, in 2009. Schon's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods. He is active within the fields of machine learning, signal processing, and automatic control. He pursues both basic and applied research, where the latter is typically carried out in collaboration with industry. More information about his research can be found on his home page: users.isy.liu.se/rt/schon


