Presentations from Previous Semesters

Fall 2009 Seminars
Christopher Brooks, 11 Aug 2010
Last updated: 16 Dec 2011

Extending Ptolemy to Support Software Model Variant and Configuration Management
Charles Shelton, Robert Bosch Research and Technology Center, December 15, 2009.
Time and location: 4-5pm (540 Cory)

Robert Bosch is a worldwide company that supplies components to every major automobile manufacturer. Bosch provides engine control systems, braking and steering systems, transmission control systems, and many other embedded software control systems for many different car models from many different manufacturers. To manage this complexity, Bosch uses product line architectures for most of its software systems. These software systems are first designed using a modeling tool called ASCET which uses an actor-oriented modeling paradigm similar to Ptolemy. Because of the multitude of combinations of models and manufacturers, variant and configuration management for all of these software models is a significant issue. Within the Bosch Research and Technology Center (RTC) we are using Ptolemy to explore potential solutions for managing this complexity. We will present an overview of past, present, and future challenges, including the work we have done in collaboration with the UC Berkeley Ptolemy research group. In addition, we are currently working with a student team in the Carnegie Mellon Master of Software Engineering (MSE) program to develop extensions to Ptolemy that provide support for storing and retrieving models from a relational database. The goal of this work is to build a basic infrastructure to support product line configuration and management in Ptolemy.

Active Traffic Management using Aurora Road Network Modeler
Alex A. Kurzhanskiy, University of California at Berkeley, December 08, 2009.
Time and location: 4-5pm (540 Cory)

Active Traffic Management (ATM) is the ability to dynamically manage recurrent and nonrecurrent congestion based on prevailing traffic conditions. Focusing on trip reliability, it maximizes the effectiveness and efficiency of freeway corridors. ATM relies on fast and trustworthy traffic simulation that can help assess a large number of control strategies for a road network, given various scenarios, in a matter of minutes. Effective traffic density estimation is crucial for the successful deployment of feedback congestion control algorithms. Aurora Road Network Modeler (RNM) is an open-source macro-simulation tool set for operational planning and management of freeway corridors, developed at Berkeley within the TOPL project. Aurora RNM employs the Cell Transmission Model (CTM) for road networks, extended to support multiple vehicle classes and signalized arterials. It allows dynamic filtering of measurement data coming from traffic sensors for the estimation of traffic density. In this capacity, it can be used to detect faulty sensors. The virtual sensor infrastructure of Aurora RNM serves as an interface to real-world measurement devices, as well as a simulation of such devices. In the seminar we shall also talk about the current state of traffic management in California, and discuss what can and needs to be done to improve it. Aurora RNM homepage, TOPL project homepage
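The Cell Transmission Model at the core of Aurora RNM is easy to sketch: a freeway is divided into cells, and the flow between adjacent cells is the minimum of upstream demand and downstream supply. The following is a minimal single-class illustration; all parameter values are invented for the example, not Aurora's defaults.

```python
V_FREE = 60.0     # free-flow speed (mph)
W_CONG = 15.0     # congestion wave speed (mph)
RHO_JAM = 160.0   # jam density (veh/mile)
Q_MAX = 1800.0    # capacity (veh/hour)
DX = 0.5          # cell length (miles)
DT = DX / V_FREE  # time step chosen to satisfy the CFL condition (hours)

def demand(rho):
    """Flow a cell wants to send downstream."""
    return min(V_FREE * rho, Q_MAX)

def supply(rho):
    """Flow a cell can accept from upstream."""
    return min(Q_MAX, W_CONG * (RHO_JAM - rho))

def ctm_step(rho, inflow):
    """Advance all cell densities by one time step DT."""
    flows = [inflow]                      # flow entering the first cell
    for i in range(len(rho) - 1):
        flows.append(min(demand(rho[i]), supply(rho[i + 1])))
    flows.append(demand(rho[-1]))         # free outflow at the boundary
    return [r + (DT / DX) * (flows[i] - flows[i + 1])
            for i, r in enumerate(rho)]

rho = [20.0, 20.0, 150.0, 20.0]           # a congested cell in the middle
rho = ctm_step(rho, inflow=1000.0)
```

Because each interface flow is the min of demand and supply, the update conserves vehicles and keeps densities between zero and jam density.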

Distributed Coverage Control with On-Line Learning
Mac Schwager, Computer Science and Artificial Intelligence Lab, MIT, December 1, 2009.
Time and location: 4-5pm (540 Cory)

One of the fundamental problems of multi-robot control is how to deploy a group of robots optimally over an environment using only local sensory and communication information. This problem is called coverage control, and it is integral to distributed sensing, surveillance, data collection, and servicing tasks with multiple robots. In this talk I will present a unified design and analysis strategy for coverage control. Controllers are derived from the gradient of a general cost function, which can be specialized to represent a variety of coverage scenarios including probabilistic scenarios, geometric Voronoi-based approaches, and artificial potential field-based approaches. I will also discuss coverage controllers that incorporate on-line learning using stable parameter adaptation with rigorous stability and performance guarantees. I will show that learning performance can be enhanced in a multi-robot context by using a consensus algorithm to propagate information throughout the robot network. Results from experiments with groups of ground and aerial robots will be presented.
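The Voronoi-based family mentioned above can be illustrated with a discrete Lloyd-style iteration under a uniform density: assign sample points to their nearest robot (a Voronoi partition) and move each robot to the centroid of its cell. This is only a sketch of the idea; the controllers in the talk are continuous-time gradient laws with parameter adaptation.

```python
def coverage_cost(robots, points):
    """Sum over sample points of squared distance to the nearest robot."""
    return sum(min((px - rx) ** 2 + (py - ry) ** 2
                   for rx, ry in robots) for px, py in points)

def lloyd_step(robots, points):
    """Move each robot to the centroid of its Voronoi cell."""
    cells = [[] for _ in robots]
    for px, py in points:
        nearest = min(range(len(robots)),
                      key=lambda i: (px - robots[i][0]) ** 2 +
                                    (py - robots[i][1]) ** 2)
        cells[nearest].append((px, py))
    return [(sum(p[0] for p in cell) / len(cell),
             sum(p[1] for p in cell) / len(cell)) if cell else r
            for r, cell in zip(robots, cells)]

# Discretize the unit square and run a few iterations.
points = [(i / 19.0, j / 19.0) for i in range(20) for j in range(20)]
robots = [(0.1, 0.1), (0.15, 0.2), (0.8, 0.9)]
costs = [coverage_cost(robots, points)]
for _ in range(5):
    robots = lloyd_step(robots, points)
    costs.append(coverage_cost(robots, points))
```

Each step can only decrease the coverage cost, which is the discrete analogue of the gradient-descent guarantee in the continuous setting.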

Mobile Floating Sensor Network Placement using the Saint-Venant 1D Equation
Andrew Tinka, University of California Berkeley, November 24, 2009.
Time and location: 4-5pm (540 Cory)

Sensor packages that move within the system they measure pose new challenges for researchers. Sensors which float freely in a fluid environment, known in the hydrodynamics community as "Lagrangian drifters", are a growing tool for emergency response, environmental monitoring and hydrodynamic modeling. The placement problem requires finding a release policy that leads to a favorable spatial configuration of drifters within the domain of interest. The Saint-Venant equations, or Shallow Water equations, are a partial differential equation model of fluid flow that are well suited to rivers, estuaries, and other domains where drifters may be deployed. Approaching the Saint-Venant equations with different assumptions leads to varying degrees of computational complexity and model simplifications. Linearizations about uniform or non-uniform flows, which are of practical interest in the channel control community, are presented. Numerical simulation results will be presented, as well as preliminary results from a study conducted at the Hydraulic Research Unit in Stillwater, Oklahoma as part of the Department of Homeland Security's Rapid Repair of Levee Breaches program. The Berkeley floating sensor platform, which contains an embedded computer, GPS receiver, and a GSM cell phone module, will be presented.
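For reference, one common form of the 1D Saint-Venant (shallow water) equations mentioned above is the following; the notation is standard rather than taken from the talk.

```latex
% Water depth h(x,t) and velocity v(x,t); g is gravity, S_0 the bed slope,
% and S_f the friction slope.
\frac{\partial h}{\partial t} + \frac{\partial (hv)}{\partial x} = 0,
\qquad
\frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial x}
  + g\,\frac{\partial h}{\partial x} = g\,(S_0 - S_f).
```

The linearizations discussed in the talk are obtained by perturbing these equations about a uniform or non-uniform steady flow.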

High-Level Tasks to Correct Low-Level Robot Control
Hadas Kress-Gazit, Cornell University, November 17, 2009.
Time and location: 4-5pm (540 Cory)

Robots today can mop the floor, assist surgeons and explore space; however, there is no robot that can be trusted to drive autonomously in a real city. Robots either perform simple, hard-coded tasks fully autonomously, or they operate under close human supervision. While most of the sensing and actuation technology required for high-level operation exists, what is lacking is the ability to plan at a high level while providing guarantees for the safety and correctness of a robot's autonomous behavior. In this talk I will present a formal approach to creating robot controllers that ensure the robot satisfies a given high-level task. I will describe a framework in which a user specifies a complex and reactive task in Structured English. This task is then automatically translated, using temporal logic and tools from the formal methods world, into a hybrid controller. This controller is guaranteed to control the robot such that its motion and actions satisfy the intended task in a variety of different environments.
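To make the translation step concrete, a Structured English requirement such as "Visit room 1 and room 2 infinitely often, and always avoid obstacles" corresponds to a linear temporal logic formula along the following lines (a standard LTL rendering, not necessarily the framework's exact output):

```latex
\varphi \;=\; \Box\,\Diamond\, r_1 \;\wedge\; \Box\,\Diamond\, r_2
        \;\wedge\; \Box\,\neg\,\mathit{obstacle}
```

Here $\Box$ ("always") and $\Diamond$ ("eventually") combine so that $\Box\Diamond$ reads "infinitely often"; the synthesis tools then produce a controller whose executions satisfy $\varphi$ by construction.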

Mobile Device Insights - Virtualization and Device Interoperability
Jörg Brakensiek, Nokia Research Center, November 10, 2009.
Time and location: 4-5pm (540 Cory)

Mobile devices are becoming increasingly powerful, computer-like devices. In my talk I will highlight two aspects of this development which I have recently worked on in my research. Virtualization is a well-known technology in the server and desktop domain. The increasing openness of mobile device platforms, combined with strong requirements on security and content protection, is paving the way for mobile virtualization technologies. In the talk, I will highlight the main challenges and promising approaches for mobile virtualization. Secondly, mobile devices are our companions anywhere, at any time, providing seamless access to content and services. But there are circumstances where the use of mobile devices is limited or even prohibited, such as while driving a car. In such cases the mobile device would benefit from functionality provided by the car. During the talk I will discuss the different aspects of this interoperability challenge and describe the chosen approach. The talk will include a demonstration of our latest research results on how a mobile device can effectively utilize a car's display and control elements, allowing full remote control of the mobile device.

Understanding the Genome of Data Centers
Jie Liu, Microsoft Research, November 03, 2009.
Time and location: 4-5pm (540 Cory)

To meet the growing demand for online services, data centers consume billions of kWh of electricity every year, and that number is expected to double in the next 5 years. A typical data center is operated conservatively, and as a result almost half of its energy consumption goes into cooling, power distribution, and idle servers. In this talk, I summarize a number of research efforts in the Data Center Genome project, which takes a data-driven approach to data center energy management, leveraging networked sensing and control technologies. We have designed and deployed wireless environmental sensors to monitor heat distribution in server rooms and software-based services to estimate server power consumption. We tackled the challenges of reliable wireless data collection, time series compression and analysis, and static/dynamic resource management. The findings are used to advance the way equipment is provisioned, loads are distributed, and systems are operated in data centers.

Robust Control via Sign Definite Decomposition
Shankar P. Bhattacharyya, Department of Electrical Engineering, Texas A&M University, October 29, 2009.
Time and location: 4-5pm (400 Cory)

A sign definite decomposition technique is described that allows one to determine the positivity of a polynomial function over a hyperbox. This is used to develop an extension of Kharitonov's Theorem to determine the Hurwitz stability of a polynomial with coefficients that depend polynomially on interval parameters. This result can be used in an iterative and modular fashion to construct stabilizing sets for the fixed order multivariable control design problem. The result is illustrated by design examples.
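The classical result that the talk extends is easy to state computationally: an interval polynomial (each coefficient ranging over an interval) is Hurwitz stable if and only if its four Kharitonov "vertex" polynomials are. The sketch below pairs that test with a plain Routh-Hurwitz check; it covers only the classical fixed-interval case, not the polynomial coefficient dependence treated in the talk.

```python
def routh_stable(asc):
    """Routh-Hurwitz test; asc = ascending coefficients a0..an, an > 0."""
    c = asc[::-1]                      # descending: leading coefficient first
    n = len(c) - 1
    if c[0] <= 0:
        return False
    row_a = c[0::2]
    row_b = c[1::2] + [0.0] * (len(row_a) - len(c[1::2]))
    first_col = [row_a[0]]
    for _ in range(n):
        if row_b[0] == 0:
            return False               # zero pivot: not strictly Hurwitz
        first_col.append(row_b[0])
        new = [(row_b[0] * row_a[i + 1] - row_a[0] * row_b[i + 1]) / row_b[0]
               for i in range(len(row_a) - 1)] + [0.0]
        row_a, row_b = row_b, new
    return all(x > 0 for x in first_col)

def interval_hurwitz(lo, hi):
    """Kharitonov: check the four vertex polynomials of a0 in [lo0,hi0], ..."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    return all(
        routh_stable([hi[k] if p[k % 4] else lo[k] for k in range(len(lo))])
        for p in patterns)
```

For example, `interval_hurwitz([1, 1, 1], [2, 2, 1])` certifies stability of every polynomial s^2 + a1*s + a0 with a0, a1 in [1, 2]. Sign definite decomposition plays an analogous role when the coefficients are themselves polynomial functions of the interval parameters.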

Mathematical Equations as Executable Models of Mechanical Systems
Walid Taha, Rice University, October 22, 2009.
Time and location: 4-5pm (540 Cory)

Increasingly, hardware and software systems are being developed for applications where they must interact directly with a physical environment. This trend significantly complicates testing computational systems and necessitates modeling and simulation of physical environments. While numerous tools provide differing types of assistance in simulating physical systems, there are surprising gaps in the support that these tools provide. Focusing on mechanics as an important class of physical environment, we address two such gaps, namely, the poor integration between different existing tools and the performance limitations of mainstream symbolic computing engines. We combine our solutions to these problems to create a domain-specific language that embodies a novel approach to modeling mechanical systems that is natural to engineers. The new language, called Acumen, enables describing mechanical systems as mathematical equations. These equations are transformed by a fast, well-defined sequence of steps into executable code. Key design features of Acumen include support for acausal modeling, point-free (or implicit time) notation, efficient symbolic differentiation, and families of equations. Our experience suggests that Acumen provides a promising example of balancing the needs of the engineering problem solving process and the available computational methods.
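The idea of "equations in, executable simulation out" can be sketched in plain Python (Acumen's actual syntax and transformation pipeline differ): write a damped spring-mass system directly from its governing equation m*x'' + c*x' + k*x = 0 and integrate it step by step. All parameter values are illustrative.

```python
M, C, K = 1.0, 0.5, 1.0   # mass, damping, stiffness (illustrative values)
DT = 0.01                  # integration step

def simulate(x0, v0, steps):
    """Integrate the equation of motion with semi-implicit Euler."""
    x, v = x0, v0
    for _ in range(steps):
        a = -(C * v + K * x) / M   # acceleration from the equation above
        v += DT * a                # update velocity first (semi-implicit)
        x += DT * v                # then position
    return x, v

x, v = simulate(x0=1.0, v0=0.0, steps=1000)   # ten seconds of motion
energy = 0.5 * M * v * v + 0.5 * K * x * x    # should have decayed
```

A language like Acumen lets the user stop at the equation itself, leaving the choice and sequencing of the numerical steps to a well-defined compilation process.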

Data-Driven Modeling of Dynamical Systems: Optimal Excitation Signals and Structured Systems
Bo Wahlberg, Automatic Control Lab and ACCESS, KTH, Stockholm, Sweden , October 13, 2009.
Time and location: 4-5pm (540 Cory)

The quality of an estimated model should be related to the specifications of the intended application. A classical approach is to study the "size" of the asymptotic covariance matrix (the inverse of the Fisher information matrix) of the corresponding parameter vector estimate. In many cases it is possible to design and implement external excitation signals, e.g. pilot signals in communication systems or input signals in control applications. The objective of this seminar is to present some recent advances in optimal experiment design for system identification with a certain application in mind. The idea is to minimize experimental costs (e.g. the energy of the excitation signal), while guaranteeing that the estimated model, with a certain probability, satisfies the specifications of the application. This results in a convex optimization problem, where the optimal solution should reveal system properties important for the application while hiding irrelevant dynamics. A simple Finite Impulse Response (FIR) example will be used to illustrate the basic ideas. We will also study how this approach can be used for L_2 gain estimation, motivated by a stability analysis application using the small gain theorem. The second part of the seminar will consider how structural information can be used to improve the quality of the estimated model, but also some fundamental limitations related to identification of structured models. We will focus on a simple FIR cascade system example to highlight some fundamental issues. This seminar is based on joint work with Håkan Hjalmarsson, KTH.
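For the FIR case the quantities involved are explicit; the notation below is standard rather than taken from the talk.

```latex
% FIR model y(t) = \varphi^T(t)\,\theta_0 + e(t), with regressor
% \varphi(t) = [\,u(t-1), \dots, u(t-n)\,]^T and noise variance \lambda.
\sqrt{N}\,\bigl(\hat\theta_N - \theta_0\bigr)
  \;\xrightarrow{d}\; \mathcal{N}(0, P),
\qquad
P \;=\; \lambda\,\bigl(\mathbf{E}\,[\varphi(t)\varphi^T(t)]\bigr)^{-1}.
```

The information matrix is the Toeplitz matrix of the input autocovariances, hence linear in the input spectrum; minimizing input energy subject to a constraint on P therefore becomes a convex program in the spectrum of u.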

Simulating Print Service Provider Using Ptolemy II
Jun Zeng, Hewlett-Packard Laboratories, October 6, 2009.
Time and location: 4-5pm (540 Cory)

The digital transformation is having a profound impact on commercial print. It creates new print demands made possible by "every-page-is-different"; it holds the promise of enabling more efficient, dynamic, reconfigurable workflow; and it has inspired new business practices such as print-on-demand. Modeling and simulation can help to understand the potential of digital print in a quantitative way, and help to exploit the digital opportunities at both the strategic and operational levels. In this presentation we will describe our ongoing work applying Ptolemy II to print factory simulation, present preliminary results, and outline the path forward.

Robust Distributed Task Planning for Networked Agents
Han-Lim Choi, Massachusetts Institute of Technology, September 29, 2009.
Time and location: 4-5pm (540 Cory)

This talk discusses methodologies to perform robust distributed task planning for a heterogeneous team of agents performing cooperative missions such as coordinated search, acquisition, and track missions. We present the consensus-based bundle algorithm (CBBA) which is a decentralized cooperative iterative auction algorithm for assigning tasks to agents. CBBA uses two phases to achieve a conflict-free task assignment. The first phase consists of each agent generating a single ordered bundle of tasks by greedily selecting tasks. The second phase then resolves inconsistent or conflicting assignments with the objective of improving the global reward through a bidding process. A key feature of CBBA is that its consensus protocol aims at agreement on the winning bids and corresponding winning agents (i.e., consensus in the spaces of decision variables and objective function). This enables CBBA to create conflict-free solutions that are relatively robust to inconsistencies in the current situational awareness. Recent research has extended CBBA to handle more realistic multi-UAV operational complications such as logical couplings in missions, heterogeneity of teams, uncertainty in a dynamic environment, and obstacle regions in flight space. We also present experimental results on the RAVEN flight test facility.
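The two phases can be sketched as follows. This is a much-simplified, centralized stand-in: real CBBA is decentralized, bids marginal scores along each agent's bundle path, and reaches agreement through a consensus protocol rather than the global conflict resolution used here. The score values are invented.

```python
def greedy_bundles(scores, bundle_size):
    """Phase 1: each agent greedily builds an ordered bundle of tasks."""
    return [sorted(range(len(row)), key=lambda t: -row[t])[:bundle_size]
            for row in scores]

def resolve_conflicts(scores, bundles):
    """Phase 2: each contested task goes to its highest bidder."""
    assignment = {}
    for task in range(len(scores[0])):
        bidders = [a for a, bundle in enumerate(bundles) if task in bundle]
        if bidders:
            assignment[task] = max(bidders, key=lambda a: scores[a][task])
    return assignment

scores = [[10.0, 2.0, 3.0],    # agent 0's reward for tasks 0..2
          [4.0, 8.0, 6.0]]     # agent 1's reward for tasks 0..2
bundles = greedy_bundles(scores, bundle_size=2)
assignment = resolve_conflicts(scores, bundles)   # conflict-free by design
```

Here both agents bid on task 2, and the higher bid (agent 1's) wins; mapping each task to a single winner is what makes the final assignment conflict-free even when agents start from inconsistent information.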

User-generated 3-D Content: Personalizing Immersive Connected Experiences
Yimin Zhang, Intel Labs China, September 25, 2009.
Time and location: 11am-noon (540 Cory)

ICE (Immersive Connected Experiences) represents a future trend of bringing the richness of visual computing to future internet applications (such as social networking, virtual worlds, etc.). In this talk, Intel's ICE vision and research directions will be introduced briefly. Then a more in-depth introduction will be given to the latest research results on user-generated 3-D content technology based on advanced computer vision algorithms developed at Intel Labs China, including 3D object modeling, 3D avatar modeling/controlling, and 3D mirror world navigation/generation. We hope this talk will give the audience an overall understanding of the future trend of ICE, what kinds of usages are enabled by the latest research results, and what further research challenges we are facing.

The Relation of Spike Timing to Large-Scale LFP Patterns
Ryan Canolty, University of California, Berkeley, September 22, 2009.
Time and location: 4-5pm (540 Cory)

Two key questions drive the research discussed in this talk. The first targets computation in a local cortical area: how does a population of interconnected neurons coordinate its spiking activity when engaged in a particular functional operation? The second question is focused on long-range communication: how do widely-distributed brain regions rapidly form the transient functional networks needed to support complex perception, cognition, and action? One hypothesis is that a hierarchy of neuronal oscillations plays a key role in this coordination and multi-scale integration. The BMI paradigm, employing massively parallel microelectrode recordings from multiple brain areas over several months, is ideal for addressing these fundamental questions. Today's talk highlights ongoing work investigating the relation of single neuron spike timing to the pattern of local field potentials (LFPs) in both local and distant cortical areas, and discusses how this spike/LFP mutual information may be used to predict spike timing and possibly improve BMI performance.

Concurrency and Scalability versus Fragmentation and Compaction with Compact-fit
Hannes Payer, University of Salzburg, September 15, 2009.
Time and location: 4-5pm (540 Cory)

We study, formally and experimentally, the trade-off in temporal and spatial performance when managing contiguous pieces of memory using the real-time memory management system Compact-fit (CF). The key property of CF is that temporal and spatial performance can be bounded, related, and predicted in constant time through the notion of partial and incremental compaction. Partial compaction determines the maximally tolerated degree of memory fragmentation. Incremental compaction, introduced here, determines the maximal amount of memory involved in any, logically atomic portion of a compaction operation. We evaluate experimentally different CF configurations and show which configurations scale under high memory allocation and deallocation load on multiprocessors and memory-constrained uniprocessor systems.

Avoiding Unbounded Priority Inversion in Barrier Protocols Using Gang Priority Management
Harald Roeck, University of Salzburg, September 15, 2009.
Time and location: 4-5pm (540 Cory)

Large real-time software systems such as real-time Java virtual machines often use barrier protocols, which work for a dynamically varying number of threads without using centralized locking. Such barrier protocols, however, still suffer from priority inversion similar to centralized locking. We introduce gang priority management as a generic solution for avoiding unbounded priority inversion in barrier protocols. Our approach is either kernel-assisted (for efficiency) or library-based (for portability) but involves cooperation from the protocol designer (for generality). We implemented gang priority management in the Linux kernel and rewrote the garbage collection safe-point barrier protocol in IBM's WebSphere Real Time Java Virtual Machine to exploit it. We run experiments on an 8-way SMP machine in a multi-user and multi-process environment, and show that by avoiding unbounded priority inversion, the maximum latency to reach a barrier point is reduced by a factor of 5.3 and the application jitter is reduced by a factor of 1.5.

The Design, Modeling, and Control of Mechatronic Systems with Emphasis on Betterment of Quality of Human Life
Kyoungchul Kong, University of California, Berkeley, September 8, 2009.
Time and location: 4-5pm (540 Cory)

Mechatronic technologies are steadily penetrating our daily lives. We are surrounded by mechatronic products and interact with them in many ways. In particular, mechatronic devices may potentially improve the quality of life of elderly people and patients with impairments. In this talk, several key technologies for effectively assisting humans are introduced. These technologies include 1) sensing technologies for identifying the intent of humans, 2) decision-making algorithms to determine the right amount of assistance, 3) actuation technologies to provide precise assistive forces, and 4) control algorithms for effectively assisting humans. As a sensing unit for assistance and rehabilitation, a sensor-embedded shoe that measures the ground contact forces is introduced in this presentation. Signal processing algorithms for the rehabilitation of patients with gait disorders are discussed as well. The design of controllers for assistive and rehabilitation systems is challenging due to the human factor in the control loop. In this talk, control algorithms are designed based on inspiration from a fictitious gain in the feedback loop in the human body and from aquatic therapy. The two proposed methods are effective for the purposes of assistance and rehabilitation, respectively. A robust control approach is applied in the design of the controllers for improved robustness to uncertainties in the human body properties.

Large Monitoring Systems: Data Analysis, Design and Deployment
Ram Rajagopal, University of California, Berkeley, September 3, 2009.
Time and location: 4-5pm (400 Cory)

The emergence of pervasive sensing, high bandwidth communications and inexpensive data storage and computation systems makes it possible to drastically change how we design, monitor and regulate very large-scale physical and human networks. Performance gains in the way we operate these networks translate into large savings. There are many critical challenges to create functional monitoring systems, such as data reliability, computational efficiency and proper system design, including choices of sensors, communication protocols, and analysis approaches. In this talk I present a framework to design a monitoring system, deploy it, maintain it and process the incoming heterogeneous sources of information, resulting in new applications. The framework is being applied to urban traffic monitoring and road infrastructure sensing. We have developed various state of the art statistical algorithms, computed performance guarantees and studied some of the fundamental limits of the proposed ideas. I illustrate the methodology using experimental deployments we have built and that are currently in use. I also present some open questions and directions for future research.

Reasoning about Online Algorithms with Weighted Automata
Orna Kupferman, Hebrew University, August 25, 2009.
Time and location: 4-5pm (540 Cory)

We describe an automata-theoretic approach for the competitive analysis of online algorithms. Our approach is based on weighted automata, which assign to each input word a positive cost in R. By relating the "unbounded look ahead" of optimal offline algorithms with nondeterminism, and relating the "no look ahead" of online algorithms with determinism, we are able to solve problems about the competitive ratio of online algorithms, and the memory they require, by reducing them to questions about determinization and approximated determinization of weighted automata. Joint work with Benjamin Aminof and Robby Lampert.
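The framing can be illustrated with a toy (min,+) weighted automaton: the cost of a word is the minimum total weight over all runs, so nondeterminism plays the role of the offline algorithm's unbounded look-ahead. The ski-rental-style automaton below (rent a day for 1, or buy once for 4) is invented for illustration.

```python
import math

def word_cost(delta, initial, word):
    """Min total cost over all runs of `word`; delta[(q, a)] -> [(q', w)]."""
    costs = {initial: 0.0}                # cheapest cost to reach each state
    for a in word:
        nxt = {}
        for q, c in costs.items():
            for q2, w in delta.get((q, a), []):
                if c + w < nxt.get(q2, math.inf):
                    nxt[q2] = c + w
        costs = nxt
    return min(costs.values()) if costs else math.inf

# Each day ('d'), either keep renting (cost 1) or buy skis (one-time cost 4).
delta = {
    ('rent', 'd'): [('rent', 1.0), ('bought', 4.0)],
    ('bought', 'd'): [('bought', 0.0)],
}
offline = word_cost(delta, 'rent', 'd' * 10)   # optimal offline cost
```

For two days the cheapest run keeps renting (cost 2); for ten days it buys immediately (cost 4). A deterministic automaton over the same structure would have to commit without seeing the rest of the word, which is exactly the online algorithm's handicap that the competitive ratio measures.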
