Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)

Fall 2017

The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Mondays from 4:10 to 5:00 p.m. in 250 Sutardja Dai Hall. DREAMS has joined forces with the Control Theory Seminar and the CITRIS People and Robots (CPAR) Seminar.

Seminar topics are announced to the DREAMS mailing list, which includes the chessworkshop workgroup, which in turn includes the chesslocal workgroup.

Information on the seminar series may be useful to potential speakers. If you have any questions about DREAMS, or would like to subscribe to the mailing list, please contact Markus N. Rabe.

Seminars from previous semesters can be found here.

Schedule

Leila Takayama August 28, 2017
Julie Adams October 02, 2017
Hannah Stuart October 09, 2017
Eric Krotkov October 19, 2017
Mac Schwager October 30, 2017
Christoph Kirsch November 03, 2017
Stefano Carpin November 06, 2017
Oren Salzman November 13, 2017 UPCOMING
David Hsu November 16, 2017 UPCOMING
Rishabh Singh November 20, 2017 UPCOMING
Brian Gerkey November 27, 2017 UPCOMING
David Camarillo December 04, 2017 UPCOMING

On Being Re-Embodied as a Robot

Aug 28, 2017, 4-5pm, 250 Sutardja Dai Hall, Leila Takayama, UC Santa Cruz.

Slides

Abstract

Have you ever wondered what it’s like to be a robot? While others are wildly speculating about what the future of robots will look like, we actually already know quite a bit about what it’s like to live and work around robots. We also know a lot about what it’s like to telecommute to work every day via telepresence robot. Coming from a human-robot interaction perspective, I’ll be sharing some of those experiences and lessons with you. Over the past several years, I’ve collaborated with remote colleagues via robotic telepresence systems that enabled them to drive themselves around the office, join in those impromptu hallway meetings, pounce on us when we didn’t respond to emails, and ultimately build stronger working relationships. I’ll present the research lessons learned from several years of fielding prototype telepresence robots in multiple companies and running quantitative user studies in the lab to figure out how to better support remote collaboration.

Bio:

NA


The road to adaptive robot teammates

Oct 02, 2017, 4-5pm, 250 Sutardja Dai Hall, Julie Adams, Oregon State University.

Slides

Abstract

Future human-robot teams need to function as effectively as, and preferably better than, their equivalent human-only teams. Developing robot teammates that can contribute effectively in order to elevate the entire team requires robots to understand and adapt to their human counterparts. This adaptive capability is particularly important in high-risk, uncertain, and dynamic environments that expose humans to dangerous situations, such as first response to a large manmade disaster. Human performance can be significantly degraded in these more demanding scenarios; robots, however, are not susceptible to the same performance degradations. Further, as robot capabilities improve, robots will be able to reason over team modifications, such as interaction methods or task assignments, in an optimized manner. Ultimately, robot teammates need to adapt to changes in their human teammates’ performance in order to ensure the team succeeds. This presentation will provide a road map and our progress toward developing such capabilities: in particular, how human performance can be modeled and objectively measured given the domain constraints, in what ways human-robot teaming can be adapted, and how measures of human performance can be analyzed to identify overload and underload states in real time.

Bio:

NA


Robust Robotic Hand Design for Remote Ocean Exploration

Oct 09, 2017, 4-5pm, 250 Sutardja Dai Hall, Hannah Stuart, Stanford University.

Slides

Abstract

Capable mobile manipulation will be a cornerstone for the future of marine robotics. Compliant underactuated hands are a useful solution for manipulation in unstructured environments, allowing a range of grasp types and affording physical robustness without the complexity of a fully-actuated design. However, these devices often trade off dexterity and simplicity. In this talk, I will address the design of a hand that is versatile and strong, while also being light and resilient. Ocean One is a submersible humanoid robot intended to bring intuitive telepresence to delicate subsea environments. Its adaptive, multi-finger, tendon-driven hands can perform both precision pinches and strong wrap grasps with a single actuator. This design was field-tested as part of Ocean One’s maiden voyage, during which it acquired a vase from the La Lune shipwreck site at a depth of 91m in the Mediterranean Sea. The hands use fingers with preloaded nonlinear flexure joints and a spring-loaded transmission for controllable grasp force distribution, such that the hand can be relatively soft for handling delicate objects and stiff for tasks requiring strength. We also explore the addition of gentle suction flow to the fingertips as a way to enhance grasp acquisition and stability and to sense contact. As Ocean One's hands are controlled from a haptic console, evaluation of performance is intended to guide teleoperator decisions.

Bio:

NA


Research and Development at the Toyota Research Institute

Oct 19, 2017, 4-5pm, 250 Sutardja Dai Hall, Eric Krotkov, Toyota.

Slides

Abstract

Toyota Research Institute (TRI), established in 2015, has four initial mandates: 1) enhance the safety of automobiles, 2) increase access to cars for those who otherwise cannot drive, 3) translate Toyota’s expertise in creating products for outdoor mobility into products for indoor mobility, and 4) accelerate scientific discovery by applying techniques from artificial intelligence and machine learning. TRI is based in the United States, with offices in Los Altos, Calif., Cambridge, Mass., and Ann Arbor, Mich. In this talk, Eric Krotkov will provide a high-level overview of the research and development activities at TRI. He will go on to present recent approaches and results in robotics, with a focus on manipulation.

Bio:

NA


Ants Don't Have WiFi: Enabling Robotic Agents to Collaborate and Compete without a Communication Network

Oct 30, 2017, 4-5pm, 250 Sutardja Dai Hall, Mac Schwager, Stanford.

Slides

Abstract

In the animal world there is no WiFi—agents collaborate and compete by sensing and predicting the actions of teammates, rivals, predators, and prey. Likewise, in the engineered world, many of the most promising applications for autonomous robots require them to interact with other agents in the world by sensing and predicting their actions. Autonomous driving in traffic, collision avoidance for UAVs, and human-robot teaming are key examples where a wireless network either cannot exist, or will not exist for some time. In competitive scenarios, such as racing or pursuit-evasion, agents would not want to communicate even if they could. In this talk I will describe several recent examples from my lab of algorithms enabling multiple robotic agents to interact, both collaboratively and competitively, without a communication network. I will discuss a communication-free multi-robot manipulation algorithm by which many simple robots cooperate to transport a payload too large for any one of them to move alone. I will describe a highly scalable collision avoidance strategy, and a related pursuit-evasion strategy, that only requires agents to sense the positions of nearby neighbors. Finally, I will present a game theoretic receding horizon control algorithm for autonomous drone racing, in which drones sense each other's position with a monocular camera. I will show results from hardware experiments with ground robots, scale autonomous cars, and quadrotor UAVs collaborating and competing in the scenarios above.
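
To illustrate the flavor of communication-free coordination in the simplest possible terms (a generic potential-field sketch, not the algorithms presented in the talk), the Python snippet below has each agent steer toward its own goal while adding a repulsive term computed only from the sensed positions of nearby neighbors; all function names, parameters, and values are hypothetical.

```python
import numpy as np

def step(positions, goals, dt=0.1, sense_radius=2.0, repulse_gain=1.5, max_speed=1.0):
    """One synchronous update: each agent uses only sensed neighbor positions.

    positions, goals: (N, 2) arrays. No messages are exchanged between agents.
    """
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        v = goals[i] - p                       # attraction toward the agent's own goal
        for j, q in enumerate(positions):
            if i == j:
                continue
            offset = p - q
            dist = np.linalg.norm(offset)
            if 0 < dist < sense_radius:        # neighbor is close enough to be sensed
                # repulsion grows as the neighbor gets closer
                v += repulse_gain * offset / dist * (sense_radius - dist)
        speed = np.linalg.norm(v)
        if speed > max_speed:                  # saturate the commanded velocity
            v *= max_speed / speed
        new_positions[i] = p + dt * v
    return new_positions

# Example: four agents, each heading to the next corner of a square.
pos = np.array([[0., 0.], [4., 0.], [4., 4.], [0., 4.]])
goals = np.roll(pos, -1, axis=0)
for _ in range(200):
    pos = step(pos, goals)
print(np.round(pos, 2))
```

The point of the sketch is that no agent ever exchanges a message; everything it needs is obtained by sensing neighbor positions.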

Bio:

Mac Schwager has been an assistant professor of Aeronautics and Astronautics at Stanford University since 2015. He obtained his BS degree from Stanford, and his MS and PhD degrees from MIT. He was a postdoctoral researcher in the GRASP lab at the University of Pennsylvania, and in CSAIL at MIT. Prior to joining Stanford, he was an assistant professor at Boston University from 2012 to 2015. He received the SAB best paper award in 2008, was a finalist for the ICRA best paper award in 2008, 2011, and 2016, and received the TRO best paper award in 2016. His research interests are in distributed algorithms for control, perception, and learning in groups of robots and animals.


Selfie and the Basics

Nov 03, 2017, 3-4pm, 540 Cory Hall, Christoph Kirsch, University of Salzburg.

Slides

Abstract

Imagine a world in which virtually everyone at least intuitively understands the fundamental principles of information and computation. In such a world computing would be as natural to people as using a calculator or making plans for the weekend. Computer science, however, is still a young field compared to others and lacks maturity, despite the enormous demand created by information technology. To address the problem we would like to encourage everyone in the computer science community to go back to their favorite topic and identify the absolute basics that they feel are essential for understanding the topic. We present here our experience in trying to do just that with programming languages and runtime systems as our favorite topic. We argue that understanding the construction of their semantics and the self-referentiality involved in that is essential for understanding computer science. We have developed selfie, a tiny self-compiling C compiler, self-executing MIPS emulator, and self-hosting MIPS hypervisor all implemented in a single, self-contained file using a tiny subset of C. Selfie has become the foundation of our classes on the design and implementation of programming languages and runtime systems. Teaching selfie has also helped us identify some of the absolute basics that we feel are essential for understanding computer science in general.

Bio:

Christoph Kirsch is Professor at the Department of Computer Sciences of the University of Salzburg, Austria. He received his Dr.Ing. degree from Saarland University, Saarbrücken, Germany, in 1999 while at the Max Planck Institute for Computer Science. From 1999 to 2004 he worked as Postdoctoral Researcher at the Department of Electrical Engineering and Computer Sciences of the University of California, Berkeley. He later returned to Berkeley as Visiting Scholar (2008-2013) and Visiting Professor (2014) at the Department of Civil and Environmental Engineering as part of a collaborative research effort in Cyber-Physical Systems. His research interests are in concurrent programming, memory management, virtualization, and automated theorem proving. Dr. Kirsch co-invented embedded programming languages and systems such as Giotto, HTL, and the Embedded Machine, and more recently co-designed high-performance, multicore-scalable concurrent data structures and memory management systems. He co-founded the International Conference on Embedded Software (EMSOFT) and served as ACM SIGBED chair from 2011 until 2013 and ACM TODAES associate editor from 2011 until 2014. He is currently associate editor of IEEE TCAD.


Risk-aware multi-objective planning for mobile robotics

Nov 06, 2017, 4-5pm, 250 Sutardja Dai Hall, Stefano Carpin, UC Merced.

Slides

Abstract

Robotics is experiencing a period of explosive growth in academia and industry. As robots are assigned more and more complex tasks to be performed in a variety of situations, it becomes essential to be able to pursue multiple objectives at once while coping with uncertainty and possible failures. In this talk I will present some of our recent results in risk-aware multi-objective planning leveraging the theory of constrained Markov Decision Processes. I will show how this approach can be used to tackle a variety of problems, how to manage large state spaces, and how to close the loop between theory and applications.
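
For readers unfamiliar with the framework named above, a constrained Markov Decision Process keeps the usual expected-reward objective but adds budget constraints on auxiliary expected costs. In generic notation (not taken from the talk), the planning problem is:

```latex
\max_{\pi}\; \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c_i(s_t, a_t)\right] \le d_i,
\qquad i = 1, \dots, k,
```

where r is the mission reward, the c_i are cost functions (for example, risk of failure or resource usage), and the d_i are their budgets; risk-aware multi-objective planning then amounts to choosing a policy that maximizes reward while keeping every constrained cost within its budget.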

Bio:

NA


The provable virtue of laziness in motion planning

Nov 13, 2017, 4-5pm, 250 Sutardja Dai Hall, Oren Salzman, Carnegie Mellon University.

Slides

Abstract

Laziness is defined as "the quality of being unwilling to work". It is a common approach used in many algorithms (and by many graduate students) where work, or computation, is delayed until absolutely necessary. In the context of motion planning, this idea has frequently been used to reduce the computational cost of testing whether a robot collides with obstacles, an operation that governs the running time of many motion-planning algorithms. In this talk I will describe and analyze several algorithms that use this simple yet effective idea to dramatically improve over the state of the art. A by-product of lazily performing collision detection is a shift of computational weight in motion-planning algorithms from collision detection to nearest-neighbor search or graph search. This induces new challenges, which I will also address in my talk: Can we employ application-specific nearest-neighbor data structures tailored for lazy motion-planning algorithms? Do we need to be completely lazy (with respect to collision detection), or should we balance laziness with, say, graph operations?
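
To make the idea concrete, here is a minimal Python sketch of lazy edge evaluation on a roadmap graph, in the spirit of the talk but not reproducing its specific algorithms: shortest paths are computed as if every edge were collision-free, and only edges on the current candidate path are actually checked. The function names and the use of networkx are illustrative choices.

```python
import networkx as nx

def lazy_shortest_path(graph, source, target, edge_is_collision_free):
    """Lazy shortest path: defer collision checks until an edge lies on the best path.

    graph: weighted nx.Graph whose edges are assumed valid until checked.
    edge_is_collision_free(u, v): the expensive check, called as late as possible.
    """
    checked = {}                                   # cache of edge -> bool results
    while True:
        try:
            # Optimistic search: pretend all unchecked edges are valid.
            path = nx.shortest_path(graph, source, target, weight="weight")
        except nx.NetworkXNoPath:
            return None                            # no collision-free path remains
        blocked = None
        for u, v in zip(path, path[1:]):           # check only edges on this path
            key = frozenset((u, v))
            if key not in checked:
                checked[key] = edge_is_collision_free(u, v)
            if not checked[key]:
                blocked = (u, v)
                break
        if blocked is None:
            return path                            # every edge on the path is valid
        graph.remove_edge(*blocked)                # discard the invalid edge and replan
```

On dense roadmaps this loop typically evaluates only a small fraction of the edges, which is where the savings in collision-detection time come from; the flip side, as noted above, is that the graph search is now run repeatedly.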

Bio:

NA


Robot-Human Adaptation and Mutual Adaptation

Nov 16, 2017, 4-5pm, 250 Sutardja Dai Hall, David Hsu, National University of Singapore.

Slides

Abstract

Early robots often occupied tightly controlled environments, e.g., factory floors, designed to segregate robots and humans for safety. In the near future, robots will "live" with humans, providing a variety of services at homes, in workplaces, or on the road. To become effective and trustworthy collaborators, robots must adapt to human preferences and, more interestingly, to changing human preferences, as humans adapt as well. I will discuss our recent work, covering mathematical models that leverage estimation of human intention for robot adaptation, planning algorithms that connect robot perception with decision making, and learning algorithms that enable robots to adapt to human preferences without a prior model. The discussion, I hope, will spur greater interest in principled approaches that integrate perception, planning, and learning for fluid human-robot collaboration.
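
As a toy instance of the "estimation of human intention" mentioned above (a generic Bayes-filter sketch, not the models from the talk), the Python snippet below maintains a belief over a discrete set of candidate human goals and updates it from observed actions; the goal names and likelihood model are made up for illustration.

```python
def update_intention_belief(belief, observation, likelihood):
    """One Bayesian update of a belief over discrete human intentions.

    belief: dict mapping intention -> prior probability.
    likelihood(observation, intention): P(observation | intention).
    """
    posterior = {g: belief[g] * likelihood(observation, g) for g in belief}
    total = sum(posterior.values())
    if total == 0:
        return belief                      # observation uninformative; keep the prior
    return {g: p / total for g, p in posterior.items()}

# Toy example: two candidate goals; the human steps toward the kitchen.
belief = {"kitchen": 0.5, "desk": 0.5}
def likelihood(obs, goal):
    # hypothetical model: actions that move toward a goal are more likely under it
    return 0.8 if obs == f"moves_toward_{goal}" else 0.2
belief = update_intention_belief(belief, "moves_toward_kitchen", likelihood)
print(belief)   # belief shifts toward "kitchen"
```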

Bio:

David Hsu is a Professor in the Department of Computer Science at the National University of Singapore, and the Deputy Director of the Advanced Robotic Center. His research spans robotics and AI. Prof. Hsu received his Ph.D. in computer science from Stanford University, under the supervision of Jean-Claude Latombe, and was a member of the Robotics Laboratory. After leaving Stanford, he worked at Compaq Computer Corp.'s Cambridge Research Lab and The University of North Carolina at Chapel Hill. At the National University of Singapore, he held the Sung Kah Kay Assistant Professorship and was a Faculty Fellow of the Singapore-MIT Alliance. He has acted as chair and editor on a number of robotics and AI conferences and scientific journals.


Neural Program Synthesis

Nov 20, 2017, 4-5pm, 250 Sutardja Dai Hall, Rishabh Singh, MSR Redmond.

Slides

Abstract

The key to attaining general artificial intelligence is to develop architectures that are capable of learning complex algorithmic behaviors modeled as programs. The ability to learn programs allows these architectures to learn to compose high-level abstractions with complex control-flow, which can lead to many potential benefits: i) enable neural architectures to perform more complex tasks, ii) learn interpretable representations (programs which can be analyzed, debugged, or modified), and iii) better generalization to new inputs (like algorithms). In this talk, I will present some of our recent work in developing neural architectures for learning complex regular-expression based data transformation programs from input-output examples, and will also briefly discuss some other applications such as program fuzzing, repair, and optimization that can benefit from learning neural program representations.
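
As a rough sketch of the architecture class described above (a generic sequence-to-sequence illustration in PyTorch, not the models from the talk), the snippet below encodes the characters of input-output examples with a recurrent encoder and decodes a sequence of program tokens; the class name, vocabulary sizes, and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class IOToProgram(nn.Module):
    """Encode input-output example tokens, decode program tokens (teacher forcing)."""
    def __init__(self, char_vocab=128, prog_vocab=64, hidden=256):
        super().__init__()
        self.char_embed = nn.Embedding(char_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.prog_embed = nn.Embedding(prog_vocab, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, prog_vocab)

    def forward(self, io_tokens, prog_tokens):
        # io_tokens: (batch, io_len) concatenated input/output example characters
        # prog_tokens: (batch, prog_len) partial program used for teacher forcing
        _, h = self.encoder(self.char_embed(io_tokens))      # summarize the examples
        dec_out, _ = self.decoder(self.prog_embed(prog_tokens), h)
        return self.out(dec_out)                             # logits over next program token

# Forward pass on random data, just to show the shapes.
model = IOToProgram()
io = torch.randint(0, 128, (2, 30))
prog = torch.randint(0, 64, (2, 10))
print(model(io, prog).shape)   # torch.Size([2, 10, 64])
```

In practice such a model would be trained with a cross-entropy loss over a corpus of (examples, program) pairs and decoded with search at test time; the snippet only runs a forward pass to show the shapes.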

Bio:

Rishabh Singh is a researcher in the Cognition group at Microsoft Research, Redmond. His research interests span the areas of programming languages, formal methods, and deep learning. His recent work has focused on developing new neural architectures for learning programs. He obtained his PhD in Computer Science from MIT in 2014, where he was a Microsoft Research PhD fellow and was awarded MIT’s George M. Sprowls Award for Best PhD Dissertation. He obtained his BTech in Computer Science from IIT Kharagpur in 2008, where he was awarded the Institute Silver Medal and the Bigyan Sinha Memorial Award.


NA

Nov 27, 2017, 4-5pm, 250 Sutardja Dai Hall, Brian Gerkey, Open Source Robotics Foundation.

Slides

Abstract

Bio:

NA


NA

Dec 04, 2017, 4-5pm, 250 Sutardja Dai Hall, David Camarillo, Stanford.

Slides

Abstract

Bio:

NA


People involved in this group

Administrators: Christopher Brooks (cxh, cxh@eecs.berkeley.edu), Dorsa Sadigh (dsadigh, dsadigh@berkeley.edu), Markus Rabe (rabe), Name hidden by user (marys@eecs.berkeley.edu)

Members: Christopher Brooks (cxh, cxh@eecs.berkeley.edu), Daniel Bundala (bundala), Dorsa Sadigh (dsadigh, dsadigh@berkeley.edu), Markus Rabe (rabe), Name hidden by user (marys@eecs.berkeley.edu)