Presentations from Previous Semesters

Spring 2011 seminars
Patricia Derler, 13 Sep 2011
Last updated: 13 Sep 2011


Didier Dubois, Université Paul Sabatier, Toulouse, Thursday, May 19th, 2011
Time and Location: 4-5pm (540 Cory)

A unified view of uncertainty theories

The notion of uncertainty has been a controversial issue for a long time. In particular, the prominence of probability theory in the scientific arena has blurred some distinctions that were present from its inception, namely between uncertainty due to the variability of physical phenomena and uncertainty due to a lack of information. The Bayesian school claims that, whatever its origin, uncertainty can be modeled by single probability distributions. This assumption has been questioned in the last thirty years or so: using unique distributions to account for incomplete information leads to paradoxical uses of probability theory. In the area of risk analysis, especially concerning environmental matters, it is crucial that uncertainty propagation techniques account for variability and incomplete information separately, even when handling them jointly. It should be clear at the decision level which part of the uncertainty is due to partial ignorance (hence reducible by collecting more information) and which part is due to variability (to be faced with concrete actions). New uncertainty theories have emerged with the potential to meet this challenge, in which the unique distribution is replaced by a convex set of probabilities; the less information is available, the larger the set. Special cases of such representations, which enable efficient calculation methods, are based on random sets and possibility theory (using fuzzy sets of possible values). The aim of this talk is to trigger interest in these new approaches by explaining their epistemology and illustrating them on some applications.
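As a concrete illustration of the possibility-theoretic representations mentioned in the abstract, the sketch below propagates two fuzzy intervals through addition using alpha-cuts. The triangular membership functions and the numbers are illustrative assumptions, not taken from the talk.

```python
def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, b): the interval of
    values whose possibility degree is at least alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_add(t1, t2, alphas=(0.0, 0.5, 1.0)):
    """Propagate incomplete information level-wise: interval arithmetic on
    each alpha-cut. The less information (lower alpha), the wider the set."""
    out = {}
    for alpha in alphas:
        (lo1, hi1), (lo2, hi2) = alpha_cut(t1, alpha), alpha_cut(t2, alpha)
        out[alpha] = (lo1 + lo2, hi1 + hi2)
    return out

# "About 3" plus "about 2", each known only through a possibility distribution:
print(fuzzy_add((2.0, 3.0, 5.0), (1.0, 2.0, 4.0)))
# {0.0: (3.0, 9.0), 0.5: (4.0, 7.0), 1.0: (5.0, 5.0)}
```

Note how a single probability distribution would concentrate mass inside [3, 9], whereas the possibility representation keeps the whole set of values consistent with the available information.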


Christine Morin, INRIA, Thursday, May 12th, 2011
Time and Location: 4-5pm (540 Cory)

Building Large Scale Dynamic Computing Infrastructures over Clouds

The popularity of virtualization has paved the way for the advent of the cloud computing model. In cloud computing infrastructures, providers make use of virtualization technologies to offer flexible, on-demand provisioning of resources to customers. Combining both public and private infrastructures creates so-called hybrid clouds, allowing companies and institutions to manage their computing infrastructures in flexible ways and to dynamically take advantage of externally provided resources. Considering the growing needs for large computation power and the availability of a growing number of clouds distributed over the Internet and across the globe, our work focuses on two principal objectives: 1) leveraging virtualization and multiple cloud computing infrastructures to build distributed large scale computing platforms, 2) developing mechanisms to make these infrastructures more dynamic – thereby offering new ways to exploit the inherent dynamic nature of distributed clouds. First, we present how we build large scale computing infrastructures by harnessing resources from multiple distributed clouds. Then, we describe the different mechanisms we developed to allow efficient inter-cloud live migration, which is a major building block for taking advantage of dynamicity in distributed clouds.


João Barros, Universidade do Porto, Portugal, Tuesday, May 10th, 2011
Time and Location: 4-5pm (540 Cory)

Vehicular Networks: System Modeling and Real-World Deployment

Vehicle to vehicle (V2V) communication is emerging as the paradigm of choice for a number of traffic safety, traffic management, assisted driving and infotainment services. The design of protocols and applications based on V2V platforms is largely dependent on the availability of realistic models for key system aspects, in particular for the mobility patterns of the vehicles and the characteristics of V2V wireless channels. To meet these challenges, we start by combining large-scale traffic simulation with advanced network analysis in order to characterize fundamental graph-theoretic parameters. A comparison between our results and popular mobility models, such as the random waypoint model and the Manhattan mobility model, reveals striking differences in the obtained connectivity profiles, thus casting some doubt on the adequacy of simple mobility models. Addressing the implications of the wireless channel, we consider the interplay between single-hop channel models and large-scale network connectivity. By progressively increasing the sophistication of the wireless link while evaluating the resulting connectivity profiles, we are able to show that the large-scale node degree behavior of a complex shadow fading environment is well approximated by a simpler and more tractable unit-disk model.
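The kind of comparison described above can be sketched in a few lines. This toy sketch (not the authors' tooling) contrasts node degree under a unit-disk link model and a log-normal shadowing variant; the node count, radio range, and 4 dB deviation are arbitrary assumptions.

```python
import math
import random

def degrees(nodes, link):
    """Node degree of each vehicle under a given link predicate."""
    deg = [0] * len(nodes)
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if link(nodes[i], nodes[j]):
                deg[i] += 1
                deg[j] += 1
    return deg

def unit_disk(r):
    """Link exists iff the distance is below a fixed radius."""
    return lambda p, q: math.dist(p, q) <= r

def shadow_fading(r, sigma_db, rng):
    """Log-normal shadowing: the effective range fluctuates around r."""
    def link(p, q):
        shadow = rng.gauss(0.0, sigma_db)
        return math.dist(p, q) <= r * 10 ** (-shadow / 20.0)
    return link

rng = random.Random(0)
nodes = [(rng.uniform(0, 1000), rng.uniform(0, 1000)) for _ in range(200)]
d_disk = degrees(nodes, unit_disk(150.0))
d_shadow = degrees(nodes, shadow_fading(150.0, 4.0, rng))

avg = lambda d: sum(d) / len(d)
print(avg(d_disk), avg(d_shadow))  # compare average node degree across models
```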

We further observe that existing research in vehicular ad-hoc networks typically disregards the effect of vehicles as physical obstructions to wireless propagation. Using two cars equipped with the new V2V communication standard, Dedicated Short Range Communications (DSRC, IEEE 802.11p), we performed extensive experimental measurements in order to collect received signal power and packet delivery ratio information in several relevant scenarios: parking lot, open space, highway, suburban and urban canyon. Upon separating the data into line of sight (LOS) and non-line of sight (NLOS) categories, our results show that obstructing vehicles have a significant impact on channel quality, with relevant implications for the design of upper layer protocols. A single obstacle can cause a drop of over 20 dB in received signal strength when two cars communicate at a distance of 10 m. This motivates us to introduce a new V2V model, in which vehicles are accounted for as physical three-dimensional obstacles that affect LOS and induce significant attenuation and packet loss. The algorithm behind the proposed model allows for computationally efficient implementation in VANET simulators, thus adding to the realism of supporting tools for protocol design.

We shall conclude the talk by elaborating on the real-world deployment of a vehicular ad-hoc network with 500 nodes currently under way in the city of Porto, Portugal.

Joint work with Mate Boban, Rui Meireles, Hugo Conceição, Tiago Vinhoza, Michel Ferreira (U. Porto), Ozan Tonguz (CMU), Peter Steenkiste (CMU) and Susana Sargento (U. Aveiro).


Jia Zou, UC Berkeley, Thursday, May 5th, 2011
Time and Location: 4-5pm (540 Cory)

From Ptides to PtidyOS, Programming Distributed Real-Time Systems

Real-time systems are those whose correctness depends not only on logical operations but also on timing delays in response to environment triggers. Thus, programs that implement these systems must satisfy constraints on response time. However, most of these systems today are designed using abstractions that do not capture timing properties. For example, a programming language such as C does not provide constructs that specify how long a computation takes. Instead, system timing properties are inferred from low-level hardware details. This effectively means conventional programming languages fail as a proper abstraction for real-time systems. To tackle this problem, a programming model called "Ptides" was first introduced by Yang Zhao. Ptides builds on a solid foundation, the discrete-event (DE) model of computation. By leveraging the temporal semantics of DE, Ptides captures both the functional and timing aspects of the system. This thesis extends prior work by providing a set of execution strategies that make efficient use of computation resources and guarantee deterministic functional and timing behaviors. A complete design flow based on these strategies is then presented.

Our workflow starts with a programming environment where a distributed real-time application is expressed as a Ptides model. The model captures both the logical operations of the system and the desired timing of interactions with the environment. The Ptides simulator supports simulation of both of these aspects. If execution times are available, this information can be annotated as a part of the model to show whether desired timing can be achieved in that implementation. Once satisfied with the design, a code generator can be used to glue the application code together with a real-time operating system called PtidyOS. To ensure the responsiveness of the real-time program, PtidyOS's scheduler combines Ptides semantics with earliest-deadline-first (EDF) scheduling. To minimize the scheduling overhead associated with context switching, PtidyOS uses a single stack frame for event execution, while still enabling event preemption. The first prototype of PtidyOS is implemented on a Luminary microcontroller. We demonstrate the Ptides workflow through a motion control application.
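The EDF dispatching idea can be illustrated with a toy event queue. This is an illustrative sketch, not PtidyOS code, and it assumes each event's physical-time deadline has already been derived from its model-time stamp.

```python
import heapq

class Event:
    def __init__(self, timestamp, deadline, action):
        self.timestamp = timestamp   # model time at which the event occurs
        self.deadline = deadline     # physical-time deadline derived from it
        self.action = action

def run_edf(events):
    """Process events in earliest-deadline-first order."""
    queue = [(e.deadline, i, e) for i, e in enumerate(events)]
    heapq.heapify(queue)
    order = []
    while queue:
        _, _, e = heapq.heappop(queue)
        order.append(e.action())
    return order

evts = [Event(1.0, 5.0, lambda: "logB"),
        Event(0.5, 2.0, lambda: "actuateA"),
        Event(2.0, 3.0, lambda: "actuateC")]
print(run_edf(evts))  # deadlines 2.0 < 3.0 < 5.0 -> ['actuateA', 'actuateC', 'logB']
```

A real implementation must additionally decide when an event is *safe to process* (no earlier-timestamped event can still arrive), which is the heart of the Ptides execution strategies and is omitted here.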


Rob Wood, Harvard University, Thursday, April 28th, 2011
Time and Location: 4-5pm (521 Cory)

Progress in Insect-Scale Robots

We seek to elucidate how to apply biological principles to the creation of robust, agile, inexpensive robotic insects. However, biological inspiration alone is not sufficient to create robots that mimic the agile locomotion of their arthropod analogs. This is particularly true as the characteristic size of the robot is decreased: to create high performance robotic insects, we must explore novel manufacturing paradigms, develop a greater understanding of complex fluid-structure interactions for flapping wings, generate high efficiency power and control electronics, create new forms of actuation and sensing, and explore alternative control strategies for computation-limited systems. This talk will describe challenges for the creation of robotic insects and the state of the art for each of these topics. I will also give an overview of the topics we are addressing in the NSF Expeditions in Computing 'RoboBees' project.


Michel Beaudouin-Lafon, University of Paris-Sud, Thursday, April 21st, 2011
Time and Location: 4-5pm (521 Cory)

Lessons from the WILD Room, an Interactive Multi-Surface Environment

Creating the next generation of interactive systems requires experimental platforms that let us explore novel forms of interaction in real settings. The WILD room (Wall-size interaction with large data sets, http://www.lri.fr/~mbl/WILD) is a high-performance interactive visualization environment for exploring the notion of multi-surface interaction. WILD combines an ultra-high resolution wall display, a multitouch table, a motion tracking system and various mobile devices. Our target users are scientists who are confronted with massive amounts of data, including astrophysicists, biologists and neuroscientists. I will describe the design trade-offs and lessons learned during the development of this platform with respect to hardware, interaction, software engineering, and participatory design of applications.


Martin Törngren, KTH, Wednesday, April 20th, 2011
Time and Location: 2-3pm (521 Cory)

Systematic and cost-efficient tool-chains for embedded systems - Trends, research efforts and challenges

The typical embedded systems development environment today consists of many specialized development tools, forming a complex tool landscape with only partial integration. The problems span the need to mature and evolve domain practices (for example, immature requirements engineering), cross-domain information management capabilities (for example, traceability and configuration management), and model-based design (for example, formal verification and synthesis capabilities integrated with design models). The iFEST ARTEMIS research project aims to remedy the highly fragmented landscape of incompatible tools for embedded systems, with particular emphasis on HW/SW codesign and lifecycle aspects. iFEST is developing a tool integration framework, realizing it as part of a number of platforms and tool-chains, and will evaluate the capabilities of these tool-chains in real industrial development settings. Standardization of the framework is an explicit goal for iFEST. We discuss the challenges facing tool integration and the iFEST approach to overcoming them.


Jérôme Hugues, ISAE, Toulouse, Tuesday, April 19th, 2011
Time and Location: 4-5pm (540 Cory)

AADL for Cyber-Physical Systems: Semantics and beyond, validate what's next

The SAE Architecture Analysis and Design Language (AADL) is a committee-designed standard originally promoted to serve the space and avionics domains. It has since reached a much broader audience and is used in many domains related to Cyber-Physical Systems. AADL is an ADL promoted in the context of Model-Driven Engineering, which has now gained significant momentum in industry. Models are a valuable asset that should be used and preserved down to the construction of the final system; modeling time and effort should be reduced to focus directly on the system and its realization. Yet validation & verification may require many different analysis models, each demanding a strong theoretical background to master. The SAE AADL has been defined to match concepts understood by any engineer (interfaces, software or hardware components, packages, generics). On top of these concepts, typical behavioral elements (scheduling and dispatch, communication mechanisms) have been added using both formal and informal descriptions, always bound to theoretical frameworks for V&V. In parallel, AADL allows one to attach user-defined properties or languages for specific analyses. This enables the application of many different techniques to AADL models, including schedulability, safety, security, fault propagation, model checking and resource dimensioning, as well as code generation.

In this talk, we give an overview of AADL and discuss how to use its features to analyze a CPS case study in depth.


Herbert Tanner, University of Delaware, Friday, April 15th, 2011
Time and Location: 3-4pm (540 Cory)

From Individual to Cooperative Robotic Behavior

Under this general title we discuss different planning and control problems in robotic systems, at progressively more abstract modeling levels. We start with individual and cooperative robot navigation, move on to mobile sensor network coordination, and conclude with behavior planning in interacting robotic hybrid systems. Although the technical approaches and tools at each abstraction level are different, the overarching idea is that these methods can be made to converge into a "correct-by-design" framework for cooperative control, from conceptual high level planning to low level implementation.


Kanna Rajan, MBARI, Tuesday, April 5th, 2011
Time and Location: 4-5pm (540 Cory)

Autonomy from Outer to Inner Space: Automated Inference for Sampling and Control for Marine Robotics

Ocean sciences the world over are at a cusp, with a move from the expeditionary to the observatory mode of doing science. Recent policy decisions in the United States are pushing the technology for persistent observation and sampling, which hitherto had been either economically unrealistic or unrealizable due to technical constraints. With the advent of ocean observatories, a number of key technologies have proven promising for sustained ocean presence. In this context, robots will need to be contextually aware and respond rapidly to evolving phenomena, especially in coastal waters, given the diversity of atmospheric, oceanographic and land-sea interactions, not to mention the societal impact these have on coastal communities. They will need to respond by exhibiting scientific opportunism while being aware of their own limitations in the harsh oceanic environment. Current robotic platforms, however, have inherent limitations: pre-defined sequences of commands determine what actions the robot will perform and when, irrespective of context. As a consequence, not only can such robots not recover from unforeseen failure conditions, they are also unable to significantly leverage their substantial onboard assets to enable scientific discovery.

To mitigate such shortcomings, we are developing deliberative techniques to dynamically command Autonomous Underwater Vehicles (AUVs). Our effort aims to use a blend of generative and deliberative Artificial Intelligence planning and execution techniques to shed goals, introspectively analyze onboard resources and recover from failures. In addition, we are working on machine learning techniques to adaptively trigger science instruments that will contextually sample the seas, driven by scientific intent. The end goal is unstructured exploration of subsea environments, which are a rich trove of problems for autonomous systems. Our approach spans domains and is not unduly specific to the ocean: the developed system is being used for a terrestrial personal robot at a Silicon Valley startup and is being tested on a planetary rover testbed by the European Space Agency. Our work is a continuum of efforts from research at NASA to command deep space probes and Mars rovers, the lessons of which we have factored into the oceanic domain. In this talk I will articulate the challenges of working in this hostile underwater domain, lay out the differences, and motivate our architecture for goal-driven autonomy on AUVs.


Nora Ayanian, University of Pennsylvania, Tuesday, March 29th, 2011
Time and Location: 2:30-3:30pm (540 Cory)

Automatic Synthesis of Multirobot Feedback Control Policies

Using a group of robots in place of a single complex robot to accomplish a task has many benefits, including simplified system repair, less down time, and lower cost. Combining heterogeneous groups of these multi-robot systems allows addressing multiple subtasks in parallel, reducing the time it takes to address many problems, such as search and rescue, reconnaissance, and mine detection. These missions demand different roles for robots, necessitating a strategy for coordinated autonomy while respecting any constraints the environment may impose. I am interested in synthesizing controllers for heterogeneous multirobot systems, a problem that is particularly challenging because of inter-robot constraints such as communication maintenance and collision avoidance, the need to coordinate robots within groups, and the dynamics of individual robots.

I will present globally convergent feedback policies for navigating groups of heterogeneous robots in known constrained environments. Provably correct by construction, our approach automatically and concurrently solves both the planning and control problems by decomposing the space into cells and sequentially composing local feedback controllers. The approach is useful for navigation and for creating and maintaining formations while maintaining desired communication and avoiding collisions. I will also extend this methodology to large groups of robots by using abstractions to manage complexity. This provides a framework with which navigation of multiple groups in environments with obstacles is possible, and permits scaling to many groups of robots. Finally, we show that this automatic controller synthesis enables the design of feedback policies from user-specified high-level task specifications.
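The cell-decomposition-and-sequential-composition idea can be caricatured for a single robot on a grid. This hypothetical sketch replaces the continuous feedback controllers with discrete "move to successor cell" steps, but keeps the correct-by-construction flavor: if the goal is reachable, executing the composed policy provably reaches it.

```python
from collections import deque

def policy(free, goal):
    """BFS from the goal over free cells: each cell gets a successor, giving a
    sequential composition of local 'controllers' that funnel into the goal."""
    succ = {goal: goal}
    frontier = deque([goal])
    while frontier:
        x, y = cell = frontier.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in free and nb not in succ:
                succ[nb] = cell          # local controller: move toward `cell`
                frontier.append(nb)
    return succ

# 4x4 world with a small obstacle wall
free = {(x, y) for x in range(4) for y in range(4)} - {(1, 1), (1, 2)}
plan = policy(free, goal=(3, 3))

cell = (0, 0)
path = [cell]
while cell != (3, 3):
    cell = plan[cell]                    # execute the current cell's controller
    path.append(cell)
print(path)
```

Because the successor map is a tree rooted at the goal, every execution terminates at the goal; the multirobot case adds inter-robot constraints on top of this basic structure.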


Manfred Broy, Technical University of Munich, Tuesday, March 29th, 2011
Time and Location: 4-5pm (540 Cory)

Multi-Functional Systems: Towards a Theory for Requirements Specification and Architecture Design

This lecture introduces a theory for the identification, modeling, and formalization of two complementary views onto software and software-intensive systems, called the problem view and the solution view. The problem view addresses requirements engineering for describing the overall system functionality from the users' point of view, aiming at the specification of the functional requirements of multi-functional systems in terms of their functions as well as their mutual dependencies. This view leads to a function or service hierarchy. The solution view essentially addresses the design phase, decomposing systems into logical architectures formed by networks of interactive components specified by their interface behavior.

Both views are closely related and are helpful for the structured modeling of multi-functional systems during their development. We show how the two complementary views work and fit together as major milestones in the early phases of software and systems development. We, in particular, base our approach on the FOCUS theory for describing interface behavior and the structuring of systems into components. We give a theoretical treatment of both views by extending the FOCUS model and its interface theory accordingly.
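In the spirit of FOCUS-style interface behavior, here is a minimal hypothetical sketch: components as functions on finite streams, with pipeline composition. Real FOCUS works with infinite timed streams and a full refinement calculus, which this toy omits.

```python
def delay(init):
    """A strongly causal component: the output is the input delayed one tick."""
    return lambda xs: [init] + xs[:-1]

def lift(f):
    """A stateless component applying f message-wise to its input stream."""
    return lambda xs: [f(x) for x in xs]

def compose(c1, c2):
    """Pipeline composition of interface behaviors: c2 consumes c1's output."""
    return lambda xs: c2(c1(xs))

# A tiny "architecture": delay the stream, then increment each message.
system = compose(delay(0), lift(lambda x: x + 1))
print(system([5, 7, 9]))  # [1, 6, 8]
```

The point of the stream-function view is that a component's specification and its implementation have the same mathematical type, so refinement steps stay within one framework.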


Aaron Ames, Texas A&M, Thursday, March 17th, 2011
Time and Location: 4-5pm (521 Cory)

From Human Data to Bipedal Robotic Walking and Beyond

Humans have the amazing ability to walk with deceptive ease, navigating everything from daily environments to uneven and uncertain terrain with efficiency and robustness. If these same abilities can be imbued into robotic devices, the potential applications are far-reaching: from legged robots for space exploration to the next generation of prosthetic devices.

The purpose of this talk is to present the process of achieving human-like bipedal robotic walking by looking to humans, and specifically human walking data, to design formal models and controllers. The fundamental principle behind this process is that regardless of the complexity present in human walking—hundreds of degrees of freedom coupled with highly nonlinear dynamics and forcing—the essential information needed to understand walking is encoded in simple functions of the kinematics, or “outputs,” of the human, and this fundamental principle can be applied to obtain both models and controllers. At the level of models, we find that all humans display the same temporal ordering of events, or contact points, throughout a walking gait; this information uniquely determines a mathematical hybrid system model for a given bipedal robot. At the level of controllers, we find that humans display simple behavior for certain canonical “output” functions; by designing controllers that achieve the same output behavior in robots, we are able to achieve surprisingly human-like dynamic walking. The method used to achieve this walking allows for extensions beyond robotic walking; it can be used to quantify how “human-like” a walking gait is, and has potential applications to the design and simulation of controllers for prosthetic devices.


Emo Todorov, University of Washington, Tuesday, March 1st, 2011
Time and Location: 4-5pm (540 Cory)

Dynamic Intelligence through Online Optimization

Optimal control is appealing because one can in principle specify the task-level goals, and let numerical optimization figure out the movement details. In practice however such optimization is extremely challenging. Short of guessing the form of the solution and encoding the guess in terms of "features", the only way to get around the curse of dimensionality is to rely on local methods. These methods are most effective when initialized at the current state and applied online, as in Model Predictive Control or MPC. MPC has traditionally been applied to systems with slow and smooth dynamics such as chemical processes. Its applications in robotics are rare, because the requirement for real-time optimization is difficult to meet.

In this talk I will describe our ongoing efforts to make MPC a reality in robotics. These efforts fall in three categories: developing better optimization methods - especially methods that can deal with contact dynamics; developing a fast and accurate physics engine tailored to control rather than simulation; and applying the methodology to a range of simulated and real robots. One of these applications has been a hit on YouTube.
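To convey the flavor of the receding-horizon loop (a deliberately tiny sketch, unrelated to the speaker's solvers or physics engine): a double integrator is steered toward the origin by re-optimizing at every step over a handful of candidate controls. The dynamics, cost weights, and candidate set are all arbitrary assumptions.

```python
def simulate(state, u, dt=0.1):
    """One step of a 1-D double integrator under acceleration u."""
    p, v = state
    return (p + v * dt, v + u * dt)

def horizon_cost(state, u, steps=10):
    """Predicted cost of holding control u over a short horizon."""
    cost = 0.0
    for _ in range(steps):
        state = simulate(state, u)
        p, v = state
        cost += p * p + 0.1 * v * v + 0.01 * u * u
    return cost

def mpc_step(state, candidates=(-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0)):
    """Receding horizon: pick the best candidate, apply only its first move."""
    best = min(candidates, key=lambda u: horizon_cost(state, u))
    return simulate(state, best)

state = (1.0, 0.0)             # start 1 m from the target, at rest
for _ in range(100):
    state = mpc_step(state)    # re-optimize online at every step
print(state)                   # final (position, velocity), near the origin
```

Real MPC replaces the crude candidate search with gradient-based trajectory optimization; the hard part in robotics, as the talk notes, is doing that within a control period.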

I will also discuss optimal control as a theory of sensorimotor function. This is a well-developed theory but the focus here will be new. It is remarkable that small nervous systems can generate complex, fast and accurate movements that are not inferior to the movements of primates in any obvious way. What then is the difference between large and small brains? Large brains obviously produce more behaviors, however that observation alone does not point to a qualitative difference in terms of neural function (and indeed most neuroscientists studying small brains would like to believe that there is no such difference). I will argue that there is a qualitative difference: large brains contain the optimization machinery which gives rise to new behaviors, while small brains have been optimized by evolution and do not have this machinery. Once optimization machinery is part of the system, it can be used not only for offline learning but also for online fine-tuning (in conjunction with an internal model, as in MPC). This dual use blurs the distinction between learning/planning and execution, in general agreement with recent physiological evidence which suggests that planning and execution are done by the same neural circuits.


David Broman, Linköping University, Thursday, February 17th, 2011
Time and Location: 4-5pm (540 Cory)

Modeling Kernel Language (MKL) - A formal and extensible approach to equation-based modeling languages

Performing computational experiments on mathematical models instead of building and testing physical prototypes can drastically reduce the development cost of complex systems such as automobiles, aircraft, and power plants. In the past three decades, a category of equation-based modeling languages has appeared that is based on acausal and object-oriented modeling principles, enabling good reuse of models. Examples of these languages are Modelica, VHDL-AMS, and gPROMS. However, the modeling languages within this category have grown large and complex, and their semantics is defined only informally, typically in natural language. The lack of a formal semantics makes these languages hard to interpret unambiguously and to reason about. In this talk, a new research language called the Modeling Kernel Language (MKL) is presented. By introducing the concept of higher-order acausal models (HOAMs), we show that it is possible to create expressive modeling libraries in a manner analogous to Modelica, but using a small and simple language concept. In contrast to the current state-of-the-art modeling languages, the semantics of how to use the models, including meta operations on models, are also specified in MKL libraries. This enables extensible formal executable specifications where important language features are expressed through libraries rather than by adding completely new language constructs. MKL is a statically typed language based on a typed lambda calculus. We define the core of the language formally using operational semantics and prove type safety. An MKL interpreter is implemented and verified in comparison with a Modelica environment.
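To give a feel for higher-order acausal models (a hypothetical toy, not MKL syntax): equations are undirected residuals over named variables, and models are first-class values that a combinator can flatten into one equation system. A real tool would hand the flattened system to a numeric solver, which this sketch skips by checking a known solution.

```python
def resistor(R, v, i):
    """Acausal component: residual of v - R*i = 0; v and i are variable
    *names*, with no assigned computation direction."""
    return [lambda env, R=R, v=v, i=i: env[v] - R * env[i]]

def source(V, v):
    return [lambda env, V=V, v=v: env[v] - V]

def connect(*models):
    """Higher-order model: flattening is the union of all residuals."""
    return [eq for m in models for eq in m]

# A series circuit built by composing first-class models.
kvl = [lambda env: env["v"] - env["v1"] - env["v2"]]
system = connect(source(10.0, "v"), resistor(2.0, "v1", "i"),
                 resistor(3.0, "v2", "i"), kvl)

sol = {"v": 10.0, "v1": 4.0, "v2": 6.0, "i": 2.0}
print([eq(sol) for eq in system])  # every residual evaluates to 0.0
```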


Vijay Kumar, University of Pennsylvania, Friday, February 11th, 2011
Time and Location: 3-4pm (540 Cory)

Autonomous 3-D Flight and Cooperative Control of Multiple Micro Aerial Vehicles

There are now tens of thousands of Unmanned Aerial Vehicles (UAVs) in operation but most UAVs are not autonomous. There are very few micro UAVs capable of navigating indoor or urban environments. I will describe our recent work on control and planning for multi-rotor micro UAVs in complex environments. I will also describe approaches to cooperative control enabling such tasks as persistent surveillance, cooperative manipulation and transport of large payloads.


Lothar Thiele, Swiss Federal Institute of Technology Zurich, Tuesday, February 8th, 2011
Time and Location: 4-5pm (540 Cory)

Embedded Multiprocessors - Performance Analysis and Design

During the system level design process of an embedded system, a designer is typically faced with questions such as whether the timing properties of a certain system design will meet the design requirements, what architectural element will act as a bottleneck, or what the on-chip memory requirements will be. Consequently, one of the major challenges in the design process is to analyze specific characteristics of a system design, such as end-to-end delays, buffer requirements, or throughput, at an early design stage, so that important design decisions can be made before much time is invested in detailed implementations. This analysis is generally referred to as system level performance analysis. If the results of such an analysis can give guarantees on the overall system behavior, it can also be applied after the implementation phase in order to verify critical system properties.

One of the major requirements for models, methods and tools is support for modular, component-based design. This covers the composition of the underlying hardware platform as well as the software design. Because of the important role of resource interaction, these components need to capture not only functional properties but also resource interactions.

The talk will cover the following aspects of system level performance analysis of distributed embedded systems:

  • Approaches to system-level performance analysis. Requirements in terms of accuracy, scalability, composability and modularity.
  • Modular Performance Analysis (MPA): basic principles, methods and tool support.
  • Examples that show the applicability: An environment to map applications onto multiprocessor platforms including specification, simulation, performance evaluation and mapping of distributed algorithms; analysis of memory access and I/O interaction on shared buses in multi-core systems; worst-case temperature analysis of embedded multiprocessor systems.
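In the real-time-calculus setting underlying MPA, a delay bound is the maximal horizontal distance between an arrival curve and a service curve. The sketch below (illustrative parameters, not from the talk) sweeps this numerically for a token-bucket arrival curve against a rate-latency service curve, where the analytic bound is T + burst/R.

```python
def arrival(delta, rate, burst):
    """Token-bucket upper arrival curve (right-limit at 0 includes the burst)."""
    return burst + rate * delta

def delay_bound(rate, burst, R, T, horizon=50.0, step=0.1):
    """Max horizontal distance between the arrival curve and the rate-latency
    service curve beta(d) = R * max(0, d - T): service catches up with the
    demand alpha(t) after alpha(t)/R + T - t more time units."""
    assert rate < R, "long-term arrival rate must not exceed the service rate"
    worst, t = 0.0, 0.0
    while t <= horizon:
        worst = max(worst, arrival(t, rate, burst) / R + T - t)
        t += step
    return worst

print(delay_bound(rate=1.0, burst=2.0, R=4.0, T=0.5))  # analytic: T + burst/R = 1.0
```

MPA composes such curves through chains of processing and communication resources; this single-hop bound is the basic building block.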

Dana Nau, University of Maryland, Thursday, January 13th, 2011
Time and Location: 4-5pm (540 Cory)

Building and Utilizing Predictive Models in Non-Zero-Sum Games

Suppose you are interacting with one or more agents who are unfamiliar to you. How can you decide the best way to behave? Suppose further that the environment is a "noisy" one in which agents' observations of each other's behavior are not always correct. How should this affect your behavior? I will discuss the above questions in the context of the Iterated Prisoner's Dilemma and several other non-zero-sum games. I'll describe algorithms that build predictive models of agents based on observations of their behavior. These models can be used to filter out noise, and to construct strategies that optimize expected utility. Experimental evaluations of these algorithms show that they work quite well. For example, DBS, an agent based on one of the algorithms, placed third out of 165 contestants in an international competition of the Iterated Prisoner's Dilemma with Noise. Only two agents scored higher than DBS, and both of them used a "master-and-slaves" strategy in which a large number of "slave" agents deliberately conspired to raise the score of a single "master" agent.
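The noisy-IPD setting is easy to reproduce in miniature. The "forgiving" strategy below is a crude stand-in for the talk's predictive models (DBS itself is far more sophisticated): it treats an isolated observed defection as probable noise and retaliates only on sustained defection.

```python
import random

COOP, DEFECT = "C", "D"
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def noisy(move, eps, rng):
    """With probability eps the executed move differs from the intended one."""
    if rng.random() < eps:
        return DEFECT if move == COOP else COOP
    return move

def play(strat_a, strat_b, rounds=200, eps=0.1, seed=1):
    """Iterated PD in which both players' moves are corrupted by noise."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma = noisy(strat_a(hist_a, hist_b), eps, rng)
        mb = noisy(strat_b(hist_b, hist_a), eps, rng)
        pa, pb = PAYOFF[(ma, mb)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(ma)
        hist_b.append(mb)
    return score_a, score_b

def tit_for_tat(own, other):
    return other[-1] if other else COOP

def forgiving(own, other):
    """Toy predictive filter: one defection is probably noise, two are not."""
    if len(other) >= 2 and other[-1] == DEFECT and other[-2] == DEFECT:
        return DEFECT
    return COOP

print(play(tit_for_tat, tit_for_tat))  # noise tends to lock TFT into echo cycles
print(play(forgiving, forgiving))      # filtering noise preserves cooperation
```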

