"Learning Controllers for Multi-Robot Teams"

Thursday, Mar. 14th @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/95959557868

Speaker: Gaurav S. Sukhatme

We have recently demonstrated the possibility of learning controllers that are zero-shot transferable to groups of real quadrotors via large-scale, multi-agent, end-to-end reinforcement learning. We train policies parameterized by neural networks that can control individual drones in a group in a fully decentralized manner. Our policies, trained in simulated environments with realistic quadrotor physics, demonstrate advanced flocking behaviors, perform aggressive maneuvers in tight formations while avoiding collisions with each other, break and re-establish formations to avoid collisions with moving obstacles, and efficiently coordinate in pursuit-evasion tasks. The model learned in simulation transfers to highly resource-constrained physical quadrotors performing station-keeping and goal-swapping behaviors. Motivated by these results and the observation that neural control of memory-constrained, agile robots requires small yet highly performant models, the talk will conclude with some thoughts on coaxing learned models onto devices with modest computational capabilities.

Bio:

Gaurav S. Sukhatme is Professor of Computer Science and Electrical and Computer Engineering at the University of Southern California (USC) and an Amazon Scholar. He is the Director of the USC School of Advanced Computing and the Executive Vice Dean of the USC Viterbi School of Engineering. He holds the Donald M. Aldstadt Chair in Advanced Computing at USC. He was Chairman of the USC Computer Science Department from 2012-17. He received his undergraduate education in computer science and engineering at IIT Bombay and his M.S. and Ph.D. degrees in computer science from USC. Sukhatme is co-director of the USC Robotics Research Laboratory and director of the USC Robotic Embedded Systems Laboratory. His research interests are in networked robots, learning robots, and field robotics. He has published extensively in these and related areas. Sukhatme has served as PI on numerous NSF, DARPA, and NASA grants. He was a Co-PI on the Center for Embedded Networked Sensing (CENS), an NSF Science and Technology Center. He is a Fellow of the AAAI and the IEEE, and a recipient of the NSF CAREER award, the Okawa Foundation research award, and an Amazon research award. He is one of the founders of the Robotics: Science and Systems conference. He was the program chair of the 2008 IEEE International Conference on Robotics and Automation and the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. He is currently the Editor-in-Chief of Autonomous Robots (Springer Nature) and has served in the past as Associate Editor of the IEEE Transactions on Robotics and Automation, the IEEE Transactions on Mobile Computing, and on the editorial board of IEEE Pervasive Computing.

"Computational Symmetry and Learning for Efficient Generalizable Algorithms in Robotics"

Thursday, Mar. 7th @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/95959557868

Speaker: Maani Ghaffari

Future mobile robots require efficient, generalizable algorithms to operate in challenging and unknown environments without human intervention while collaborating with humans. Today, despite the rapid progress in robotics and autonomy, no robot can deliver human-level performance in everyday tasks and missions such as search and rescue, exploration, and environmental monitoring and conservation. In this talk, I will put forward a vision for enabling the efficiency and generalization requirements of real-world robotics via computational symmetry and learning. I will walk you through structures that arise from combining symmetry, geometry, and learning in various foundational problems in robotics and showcase their performance in experiments ranging from perception to control. In the end, I will share my thoughts on promising future directions and opportunities based on lessons learned in the field and on campus.

Bio

Maani Ghaffari received the Ph.D. degree from the Centre for Autonomous Systems (CAS), University of Technology Sydney, NSW, Australia, in 2017. He is currently an Assistant Professor at the Department of Naval Architecture and Marine Engineering and the Department of Robotics, University of Michigan, Ann Arbor, MI, USA, where he directs the Computational Autonomy and Robotics Laboratory (CURLY). His work on sparse, globally optimal kinodynamic motion planning on Lie groups was a Best Paper Award finalist at the 2023 Robotics: Science and Systems conference. He is a recipient of a 2021 Amazon Research Award. His research interests lie in the theory and applications of robotics and autonomous systems.

"From Model-Based Whole-Body Control to Humanoid Legged Manipulation using ML"

Thursday, Feb. 22nd @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/95959557868

Speaker: Luis Sentis

Whole-Body Controllers (WBCs) have provided unique capabilities for human-centered robots such as humanoids and exoskeletons, including transparent multi-contact interaction and dynamically and kinematically consistent trajectory tracking for safe, agile, and accurate interactions. In combination with predictive trajectory optimization methods, WBCs have enabled outstanding agile behaviors. However, to operate in truly unstructured environments, machine learning techniques are needed to provide sophisticated manipulation and mobility policies for highly redundant robots (e.g., dual-arm robots with anthropomorphic hands or full humanoid robots) that take as input the large state space given by exteroceptive sensors such as cameras and tactile sensors. In this seminar I will discuss the origins of WBCs and current successes in perceptive mobile dual-arm manipulation using machine learning techniques. Experiments using humanoid robots will be shown.

Bio

Luis Sentis is a professor of aerospace engineering and engineering mechanics at the University of Texas at Austin. With a Ph.D. in Electrical Engineering from Stanford, he leads the Human Centered Robotics Laboratory, focusing on the design and control of robots that collaborate with humans. His work, funded by ONR, NASA, NSF, USSF, and others, has played a key role in shaping the current generation of humanoid robots. His upcoming role as chair-elect of Good Systems highlights his dedication to developing ethical, human-centered robotics.

"Deployable Robot Learning Systems"

Thursday, Feb. 15th @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/95959557868

Speaker: Zipeng Fu

The field of robotics has recently witnessed a significant influx of learning-based methodologies, revolutionizing areas such as manipulation, navigation, locomotion, and drone technology. This talk aims to delve into the forefront of robot learning systems, particularly focusing on their scalability and deployability to open-world problems, through two main paradigms of learning-based methods for robotics: reinforcement learning and imitation learning.

Bio:

Zipeng Fu is a CS PhD student at the Stanford AI Lab, advised by Chelsea Finn. His research focuses on deployable robot systems and learning in the unstructured open world. His representative work includes Mobile ALOHA, Robot Parkour Learning, and RMA, which received Best Systems Paper Award finalist recognition at CoRL 2022 and 2023. His research is supported by a Stanford Graduate Fellowship as a Pierre and Christine Lamond Fellow. Previously, he was a student researcher at Google DeepMind. He completed his master's at CMU and bachelor's at UCLA.

Homepage: zipengfu.github.io/

"Breaking the Curse of Dimensionality in POMDPs with Sampling-based Online Planning"

Thursday, Feb. 8th @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/95959557868

Speaker: Zachary Sunberg

Partially observable Markov decision processes (POMDPs) are flexible enough to represent many types of probabilistic uncertainty, making them suitable for real-world decision and control problems. However, POMDPs are notoriously difficult to solve. Even for discrete state and observation spaces, decision problems related to finite-horizon POMDPs are PSPACE-complete. Worse, real-world problems often have continuous (i.e., uncountably infinite) state, action, and observation spaces. In this talk, I will demonstrate that, surprisingly, there are online algorithms that can find arbitrarily close-to-optimal policies with no direct theoretical complexity dependence on the size of the state or observation spaces. Specifically, these algorithms use particle filtering and the sparse sampling approach, which previously provided similar guarantees for MDPs. Although the theoretical results are much too loose to be used directly for safety guarantees, I will demonstrate how the approach can be used to construct algorithms that are very scalable in practice, for example planning with learned models and high-dimensional image observations, and constructing safe and efficient plans in POMDPs with hundreds of dimensions using shielding. I will also discuss challenges in extending this approach further to multi-agent settings.
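
To make the flavor of these methods concrete, here is a minimal sketch (in Python) of particle-belief sparse sampling for online POMDP planning. It is a generic illustration under assumed interfaces, not the speaker's specific algorithms: the generative model `step(s, a) -> (s', o, r)`, the observation likelihood `obs_weight(o, s')`, and the action set are hypothetical placeholders supplied by the user. Note that the cost depends on the number of particles, the planning depth, and the observation branching factor, not on the size of the state or observation spaces.

```python
import random

def estimate_value(particles, depth, step, obs_weight, actions, n_obs=3, gamma=0.95):
    """Estimate the optimal value of a belief represented by a set of state particles."""
    if depth == 0 or not particles:
        return 0.0
    best = float("-inf")
    for a in actions:
        # Simulate each particle forward under action a with the generative model.
        sims = [step(s, a) for s in particles]          # list of (s', o, r)
        reward = sum(r for _, _, r in sims) / len(sims)
        future = 0.0
        for _ in range(n_obs):                          # sparse observation branching
            _, o, _ = random.choice(sims)
            # Re-weight particles by observation likelihood (simple resampling).
            weights = [obs_weight(o, sp) for sp, _, _ in sims]
            if sum(weights) == 0:
                continue
            next_particles = random.choices([sp for sp, _, _ in sims],
                                            weights=weights, k=len(particles))
            future += estimate_value(next_particles, depth - 1, step,
                                     obs_weight, actions, n_obs, gamma)
        best = max(best, reward + gamma * future / n_obs)
    return best
```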

Bio

Zachary Sunberg is an Assistant Professor in the Ann and H.J. Smead Aerospace Engineering Sciences Department and director of the Autonomous Decision and Control Lab. He earned Bachelor's and Master's degrees in Aerospace Engineering from Texas A&M University and a PhD in Aeronautics and Astronautics at Stanford University in the Stanford Intelligent Systems Lab. Before joining the University of Colorado faculty, he served as a postdoctoral scholar at the University of California, Berkeley in the Hybrid Systems Lab. His research is focused on safe and efficient operation of autonomous vehicles and systems on the ground, in the air, and in space. A particular emphasis is on handling uncertainty using the partially observable Markov decision process and game formalisms.

"Signal to Symbols (via Skills)"

Thursday, Jan. 25th @ 11am

FAH 3002 and Zoom: https://ucsd.zoom.us/j/95959557868

Speaker: George Konidaris

While AI has achieved expert-level performance on many individual tasks, progress remains stalled on designing a single agent capable of reaching adequate performance on a wide range of tasks. A major obstacle is that general-purpose agents (most generally, robots) must operate using sensorimotor spaces complex enough to support the solution to all possible tasks they may be given, which by the same token drastically hinder their effectiveness for any one specific task.

I propose that a key, and understudied, requirement for general intelligence is the ability of an agent to autonomously formulate streamlined, task-specific representations, of the sort that single-task agents are typically assumed to be given. I will describe my research on this question, which has established a formal link between the skills (abstract actions) available to a robot and the symbols (abstract representations) it should use to plan with them. I will present an example of a robot autonomously learning a (sound and complete) abstract representation directly from sensorimotor data, and then using it to plan. I will also discuss ongoing work on making the resulting abstractions practical and portable across tasks.

Bio:

George Konidaris is an Associate Professor of Computer Science at Brown and the Chief Roboticist of Realtime Robotics, a startup commercializing his work on hardware-accelerated motion planning. He holds a BScHons from the University of the Witwatersrand, an MSc from the University of Edinburgh, and a PhD from the University of Massachusetts Amherst. Prior to joining Brown, he held a faculty position at Duke and was a postdoctoral researcher at MIT. George is the recent recipient of an NSF CAREER award, young faculty awards from DARPA and the AFOSR, and the IJCAI-JAIR Best Paper Prize.

"Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World"

Thursday, Jan. 18th @ 11am

FAH 3002 and Zoom: https://ucsd.zoom.us/j/95057481896

Speaker:  Tanmay Gupta

Can we train real-world robotic policies without RL, human supervision, or even real-world training? Could exploration emerge from imitating shortest-path planners? Can powerful visual encoders help bridge the sim2real gap? How efficient is the training process? How does the performance scale with the number and diversity of training episodes? What are the crucial architecture design choices? In this talk, we will answer these questions and more!  

Bio: 

Tanmay Gupta is a research scientist in the PRIOR team at the Allen Institute for Artificial Intelligence (AI2). Tanmay received his PhD from UIUC, where he was advised by Prof. Derek Hoiem and closely collaborated with Prof. Alex Schwing. Tanmay has received the CVPR 2023 Best Paper Award for his work on Visual Programming; Outstanding Reviewer Awards at ICCV 2023, CVPR 2021, and ECCV 2020; and served as an Area Chair for CVPR 2024, CVPR 2023, and NeurIPS 2023.

"Discussion of Next US Robotics Roadmap"

Thursday, Jan. 11th @ 11am

FAH 3002 / Zoom: https://ucsd.zoom.us/j/95057481896

Moderator: Henrik Christensen 

The US National Robotics Roadmap is published every four years. The 2024 version of the roadmap is in preparation and is to be published in March/April. In this session, we will present the background of the roadmap and have an open discussion: what are interesting opportunities for robotics? What makes them hard, and what R&D issues must be addressed? We will partly review the ad-hoc meetings but will also have an open discussion about your perspective on these questions. This is your chance to provide input to the next roadmap document.
 

"Rapidly Adaptive Locomotion Algorithms for Bipeds and Beyond"

Thursday, Nov. 30th @ 11am

FAH 3002 or Zoom (LINK)

Speaker:  Christian Hubicki

Creating machines that move as agilely and adaptively as animals in our world has been a persistent control challenge for roboticists. Effective bipedal control must reconcile the complex multibody dynamics of robots while quickly deciding how to respond and adapt to changing environments and scenarios. We present control algorithms for capturing the agility, efficiency, robustness, and adaptability of biological locomotors and showcase results on dynamic mobile robots from bipeds to UAVs to all-terrain UGVs. Specifically, our group uses a variety of optimization techniques to generate and adapt behaviors on the fly (>100 Hz). Further, our portfolio of techniques points toward a unifying risk-based control framework that rapidly and autonomously reprioritizes locomotion during robot operation. We believe that this fast and emergent prioritization is critical for real-world-reliable mobile robots -- allowing robots to, literally and metaphorically, think on their feet.

Bio:

Christian Hubicki is an Assistant Professor of Mechanical Engineering at the FAMU-FSU College of Engineering.  His group’s research at the Optimal Robotics Laboratory specializes in legged robotics, applied optimal control, biomechanical modeling, and fast algorithms for adaptive robot behaviors. He earned both his bachelor's and master’s degrees in mechanical engineering from Bucknell University, earned his dual-degree PhD in Robotics and Mechanical Engineering at Oregon State University, and completed his postdoctoral work in the Mechanical Engineering and Physics departments at the Georgia Institute of Technology. His research awards include a Best Technical Paper Finalist at ICRA 2016, Best Paper Winner in 2019 from IEEE Robotics and Automation Magazine, Outstanding Locomotion Paper Winner at ICRA 2022, and a Young Faculty Researcher Grant from the Toyota Research Institute in 2021. His work has been featured at the National Academy of Engineering’s Gilbreth Lecture Series, the TEDx lecture series, and in media outlets from the Science Channel to CBS’s “The Late Show with Stephen Colbert.”
 

"Unlocking Agility, Safety, and Resilience for Legged Navigation: A Task and Motion Planning Approach"

Wednesday, Nov. 15th @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/94698958840

Speaker:  Ye Zhao

While legged robots have made remarkable progress in dynamic balancing and mobility, there remains substantial room for improvement in terms of navigation and decision-making capabilities. One major challenge stems from the difficulty of designing safe, resilient, and real-time planning and decision-making frameworks for these complex legged machines navigating unstructured environments. Symbolic planning and distributed trajectory optimization offer promising yet underexplored solutions. This talk will introduce three perspectives on enhancing safety and resilience in task and motion planning (TAMP) for agile legged locomotion. First, we'll discuss hierarchically integrated TAMP for dynamic locomotion in environments susceptible to perturbations, focusing on robust recovery behaviors. Next, we'll cover our recent work on safe and socially acceptable legged navigation planning in environments that are partially observable and crowded with humans. Lastly, we'll delve into distributed contact-aware trajectory optimization methods achieving dynamic consensus for agile locomotion behaviors.

Bio: 

Ye Zhao is an Assistant Professor at The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology. He received his Ph.D. degree in Mechanical Engineering from The University of Texas at Austin in 2016. After that, he was a Postdoctoral Fellow at Agile Robotics Lab, Harvard University. At Georgia Tech, he leads the Laboratory for Intelligent Decision and Autonomous Robots. His research interest focuses on planning and decision-making algorithms of highly dynamic and contact-rich robots. He received the George W. Woodruff School Faculty Research Award at Georgia Tech in 2023, NSF CAREER Award in 2022, and ONR Young Investigator Program Award in 2023. He serves as an Associate Editor of IEEE Transactions on Robotics, IEEE Robotics and Automation Letters, and IEEE Control Systems Letters. His co-authored work has received multiple paper awards, including the 2021 ICRA Best Automation Paper Award Finalist and the 2016 IEEE-RAS Whole-Body Control Best Paper Award Finalist.

"Towards Robust and Autonomous Locomotion in Cluttered Terrain Using Insect-Scale Robots"

Thursday, Nov. 9th @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/94698958840

Speaker:  Dr. Kaushik Jayaram

Animals such as mice, cockroaches, and spiders have the remarkable ability to maneuver through challenging, cluttered natural terrain and have been an inspiration for adaptable legged robotic systems. Recent biological research further indicates that body reorientation along pathways of minimal energy is a key factor influencing such locomotion. We propose to extend this idea by hypothesizing that the body compliance of soft-bodied animals and robots might be an alternate yet effective locomotion strategy for squeezing through cluttered obstacles. We present some early results related to the above using the Compliant Legged Autonomous Robotic Insect (CLARI), our novel, insect-scale, origami-based quadrupedal robot. While the distributed compliance of such soft-legged robots enables them to explore complex environments, their gait design, control, and motion planning are often challenging due to a large number of unactuated/underactuated degrees of freedom. Towards addressing this issue, we present a geometric motion planning framework for autonomous, closed kinematic chain articulated systems that is computationally effective and has promising potential for onboard and real-time gait generation.

Bio: 

Dr. Kaushik Jayaram is presently an Assistant Professor in Robotics at the Paul M. Rady Department of Mechanical Engineering at the University of Colorado Boulder. Previously, he was a post-doctoral scholar in Prof. Rob Wood's Microrobotics Lab at Harvard University. He obtained his doctoral degree in Integrative Biology in 2015 from the University of California, Berkeley, mentored by Prof. Bob Full, and his undergraduate degree in Mechanical Engineering from the Indian Institute of Technology Bombay in 2009, with interdisciplinary research experiences at the University of Bielefeld, Germany, and Ecole Polytechnique Federale de Lausanne, Switzerland. Dr. Jayaram's research combines biology and robotics to uncover the principles of robustness that make animals successful at locomotion in natural environments and, in turn, to inspire the design of the next generation of novel robots for effective real-world operation. His work has been published in a number of prestigious journals and has gained significant popular media attention. Besides academic research, Dr. Jayaram's group is actively involved in several outreach activities that strive toward achieving diversity, equity, and inclusivity in STEM.

"Learning and Control for Safety, Efficiency, and Resiliency of Embodied AI"

Thursday, Nov. 2nd @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/94698958840

Speaker:  Fei Miao 

With the rapid evolution of sensing, communication, and computation, integrating learning and control presents significant Embodied AI opportunities. However, current decision-making frameworks lack a comprehensive understanding of the tridirectional relationship among communication, learning, and control, posing challenges for multi-agent systems in complex environments. In the first part of the talk, we focus on learning and control with communication capabilities. We design an uncertainty quantification method for collaborative perception in connected autonomous vehicles (CAVs). Our findings demonstrate that communication among multiple agents can enhance object detection accuracy and reduce uncertainty. Building upon this, we develop a safe and scalable deep multi-agent reinforcement learning (MARL) framework that leverages shared information among agents to improve system safety and efficiency. We validate the benefits of communication in MARL, particularly in the context of CAVs in challenging mixed-traffic scenarios. We incentivize agents to communicate and coordinate with a novel reward reallocation scheme based on the Shapley value for MARL. Additionally, we present our theoretical analysis of robust MARL methods under state uncertainties, such as uncertainty quantification in the perception modules or worst-case adversarial state perturbations. In the second part of the talk, we briefly outline our research contributions on robust MARL and data-driven robust optimization for sustainable mobility. We also highlight our research results concerning CPS security. Through our findings, we aim to advance Embodied AI and CPS for safety, efficiency, and resiliency in dynamic environments.
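
As background on one ingredient mentioned above, the snippet below computes exact Shapley values for a small set of agents given a coalition value function. It is only a textbook illustration of Shapley-value credit assignment with a hypothetical three-agent reward table, not the reward reallocation scheme or MARL framework presented in the talk.

```python
from itertools import combinations
from math import factorial

def shapley_values(agents, coalition_value):
    """Exact Shapley value of each agent given a coalition value function.

    `coalition_value` maps a frozenset of agents to a scalar team reward.
    Exact computation is exponential in the number of agents, so this only
    illustrates the credit-assignment idea on a toy example.
    """
    n = len(agents)
    phi = {a: 0.0 for a in agents}
    for a in agents:
        others = [b for b in agents if b != a]
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[a] += weight * (coalition_value(s | {a}) - coalition_value(s))
    return phi

# Hypothetical example: three vehicles whose joint reward is superadditive.
values = {frozenset(): 0, frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 2,
          frozenset("AB"): 3, frozenset("AC"): 4, frozenset("BC"): 4,
          frozenset("ABC"): 7}
print(shapley_values(["A", "B", "C"], lambda s: values[frozenset(s)]))
```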

Bio:

Fei Miao is the Pratt & Whitney Associate Professor in the Department of Computer Science and Engineering and a courtesy faculty member of the Department of Electrical & Computer Engineering at the University of Connecticut, which she joined in 2017. She is affiliated with the Institute of Advanced Systems Engineering and the Eversource Energy Center. She was a postdoctoral researcher at the GRASP Lab and the PRECISE Lab of the University of Pennsylvania from 2016 to 2017. She received a Ph.D. degree and the Best Doctoral Dissertation Award in Electrical and Systems Engineering, with a dual M.S. degree in Statistics, from the University of Pennsylvania in 2016. She received the B.S. degree in Automation from Shanghai Jiao Tong University in 2010. Her research focuses on multi-agent reinforcement learning, robust optimization, uncertainty quantification, and game theory to address safety, efficiency, robustness, and security challenges of Embodied AI and CPS, for systems such as connected autonomous vehicles, sustainable and intelligent transportation systems, and smart cities. Dr. Miao is a recipient of the NSF CAREER award and several other NSF awards. She received the Best Paper Award at the 12th ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS) in 2021, was a Best Paper Award finalist at the 6th ICCPS in 2015, and received the Best Paper Award at the 2023 AAAI DACC workshop.

"Control Principles for Robot Learning"

Thursday, October 26th @ 11am

FAH 3002 / Zoom https://ucsd.zoom.us/j/94698958840

Speaker:  Todd Murphey

Embodied learning systems rely on motion synthesis to enable efficient and flexible learning during continuous online deployment. Motion motivated by learning needs can be found throughout natural systems, yet there is surprisingly little known about synthesizing motion to support learning for robotic systems. Learning goals create a distinct set of control-oriented challenges, including how to choose measures as objectives, synthesize real-time control based on these objectives, impose physics-oriented constraints on learning, and produce analyses that guarantee performance and safety with limited knowledge. In this talk, I will discuss learning tasks that robots encounter, measures for information content of observations, and algorithms for generating action plans. Examples from biology and robotics will be used throughout the talk, and I will conclude with future challenges.
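
As a concrete example of one common measure of the information content of observations (not necessarily the measure used in the talk), the sketch below ranks candidate sensing actions by expected entropy reduction (mutual information) over a discrete hidden state; the prior, sensor models, and actions are hypothetical.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(prior, likelihood):
    """likelihood[o, x] = p(observation o | state x) for one candidate action."""
    joint = likelihood * prior                # p(o, x)
    p_obs = joint.sum(axis=1)                 # p(o)
    gain = entropy(prior)
    for o in range(likelihood.shape[0]):
        if p_obs[o] > 0:
            posterior = joint[o] / p_obs[o]   # p(x | o)
            gain -= p_obs[o] * entropy(posterior)
    return gain

prior = np.array([0.5, 0.3, 0.2])             # belief over 3 hidden states
look_left = np.array([[0.9, 0.1, 0.1],        # informative about state 0
                      [0.1, 0.9, 0.9]])
look_right = np.array([[0.5, 0.5, 0.5],       # nearly uninformative
                       [0.5, 0.5, 0.5]])
print(expected_info_gain(prior, look_left), expected_info_gain(prior, look_right))
```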

Bio:

Todd Murphey is a Professor of Mechanical Engineering in the McCormick School of Engineering and of Physical Therapy and Human Movement Sciences in the Feinberg School of Medicine, both at Northwestern University. He received his Ph.D. in Control and Dynamical Systems from the California Institute of Technology. His laboratory is part of the Center for Robotics and Biosystems, and his research interests include robotics, control, human-machine interaction, and emergent behavior in dynamical systems. He received the National Science Foundation CAREER award, was a member of the 2014-2015 DARPA/IDA Defense Science Study Group, and is a current member of the United States Department of the Air Force Scientific Advisory Board. 

"CRI-Seminar 12 October - 3-Minute Madness"

Thursday, 12 October @ 11am

FAH 3002 - Zoom https://ucsd.zoom.us/j/94698958840

We will hold a "3-minute madness" presentation of example research from faculty members of the Contextual Robotics Institute. Every faculty member will have 3 minutes to present their research group.

So far we have:

  • Henrik Christensen - Cognitive Robotics & Autonomous Vehicles
  • Nikolay Atanasov - Existential Robotics Lab
  • Sean Gao - Automation Algorithms Group
  • Sylvia Herbert - Reachability for Safety Analysis and Control
  • Mike Tolley - Bioinspired Robotics and Design Lab
  • Eduard Arzt - Bioinspired gripping for robotics and handling

Presentations will be in FAH 3002 and on Zoom https://ucsd.zoom.us/j/94698958840

"Digitizing Touch Sense: Unveiling the Perceptual Essence of Tactile Textures"

Wednesday, June 21st @ 2pm

Franklin Antonio Hall Room 4002 (in person only)

Speaker:  Yasemin Vardar

Imagine you could feel your pet's fur on a Zoom call, the fabric of the clothes you are considering purchasing online, or tissues in medical images. We are all familiar with the impact of digitization of audio and visual information in our daily lives - every time we take videos or pictures on our phones. Yet, there is no such equivalent for our sense of touch. This talk will encompass my scientific efforts in digitizing naturalistic tactile information for the last decade. I will explain the methodologies and interfaces we have been developing with my team and collaborators for capturing, encoding, and recreating the perceptually salient features of tactile textures for active bare-finger interactions. I will also discuss current challenges, future research paths, and potential applications in tactile digitization. 

Bio:

Yasemin Vardar is an Assistant Professor in the Department of Cognitive Robotics at the Delft University of Technology (Netherlands), where she directs the Haptic Interface Technology Lab. She was previously a postdoctoral researcher at the Max Planck Institute for Intelligent Systems (Germany); she earned her Ph.D. in mechanical engineering from Koç University (Turkey) in 2018. Her research interests focus on understanding human touch and developing haptic interface technologies. Her awards include the 2021 NWO VENI Grant, the 2018 Eurohaptics Best Ph.D. Thesis Award, and the Best Poster Presentation Award at IEEE WHC. She is currently a co-chair of the Technical Committee on Haptics.

"A Robot Just for You: — Personalized Human-Robot Interaction and the Future of Work and Care"

Thursday, June 8th @ 11am

FAH 3002 - Zoom https://ucsd.zoom.us/j/92099821298 

Speaker:  Maja Matarić

As robots become part of our lives, we demand that they understand us, predict our needs and wants, and adapt to us as we change our moods and minds, learn, grow, and age. The nexus created by recent major advances in machine learning for machine perception, navigation, and natural language processing has enabled human-robot interaction in real-world contexts, just as the need for human services continues to grow, from elder care to nursing to education and training. This talk will discuss our research that brings robotics together with machine learning for user modeling, multimodal behavioral signal processing, and affective computing in order to enable robots to understand, interact, and adapt to users’ specific and ever-changing needs. The talk will cover methods and challenges of using multi-modal interaction data and expressive robot behavior to monitor, coach, motivate, and support a wide variety of user populations and use cases. We will cover insights from work with users across the age span (infants, children, adults, elderly), ability span (typically developing, autism, stroke, Alzheimer’s), contexts (schools, therapy centers, homes), and deployment durations (up to 6 months), as well as commercial implications.

Bio:

Maja Matarić is the Chan Soon-Shiong Distinguished Professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California (USC), and founding director of the USC Robotics and Autonomous Systems Center. Her PhD and MS are from MIT and her BS is from the University of Kansas. She is a Fellow of AAAS, IEEE, AAAI, and ACM, a recipient of the Presidential Award for Excellence in Science, Mathematics & Engineering Mentoring, the Anita Borg Institute Women of Vision Award for Innovation, the NSF CAREER award, the MIT TR35 Innovation award, and the IEEE RAS Early Career Award, and authored "The Robotics Primer" (MIT Press). She leads the USC Viterbi K-12 STEM Center and actively mentors K-12 students, women, and other underrepresented groups toward pursuing STEM careers. Her university leadership includes serving as vice president of research (2020-21) and as engineering vice dean of research (2006-19). A pioneer of the field of socially assistive robotics, her research is developing human-machine interaction methods for personalized support in convalescence, rehabilitation, training, and education for autism spectrum disorders, stroke, dementia, anxiety, and other major health and wellness challenges.

"Perception-Action Synergy in Unknown or Uncertain Environments"

Thursday, May 18th @ 11am

FAH 3002 - Zoom https://ucsd.zoom.us/j/92099821298

Speaker: Jing Xiao, WPI

Many robotic applications require a robot to operate in an environment with unknowns or uncertainty, at least initially, before it gathers enough information about the environment. In such a case, a robot must rely on sensing and perception to feel its way around. Moreover, perception and motion need to be coupled synergistically in real time, such that perception guides motion, while motion enables better perception. In this talk, I will introduce our research on combining perception and motion of a robot to achieve autonomous manipulation of deformable linear objects, contact-rich complex assembly, and semantic object search in unknown or uncertain environments, under RGB-D or force/torque sensing. I will also introduce human interaction that facilitates robotic assembly and constrained manipulation by conveniently providing unknown information to the robot.

Bio:

Jing Xiao received her Ph.D. degree in Computer, Information, and Control Engineering from the University of Michigan, Ann Arbor, Michigan. She is the Deans’ Excellence Professor, William B. Smith Distinguished Fellow in Robotics Engineering, Professor and Head of the Robotics Engineering Department, Worcester Polytechnic Institute (WPI). She joined WPI as the Director of the Robotics Engineering Program in 2018 from the University of North Carolina at Charlotte, where she received the College of Computing Outstanding Faculty Research Award in 2015. She led the Robotics Engineering Program to become the Robotics Engineering Department in July 2020. Jing Xiao is an IEEE Fellow. Her research spans robotics, haptics, and intelligent systems. She has co-authored a monograph and published extensively in major robotics journals, conferences, and books. Jing Xiao is a recipient of the 2022 IEEE Robotics and Automation Society George Saridis Leadership Award in Robotics and Automation.

"Next-Generation Robot Perception: Hierarchical Representations, Certifiable Algorithms, and Self-Supervised Learning"

Thursday, May 11th, 11:00am to 12:00pm

Location: SME Bldg, 2nd Floor, ASML Conference Center, Room 248 / Zoom - https://ucsd.zoom.us/j/92099821298

Speaker:  Luca Carlone, Massachusetts Institute of Technology, Aero-Astro and LIDS

Spatial perception -- the robot's ability to sense and understand the surrounding environment -- is a key enabler for robot navigation, manipulation, and human-robot interaction. Recent advances in perception algorithms and systems have enabled robots to create large-scale geometric maps of unknown environments and detect objects of interest. Despite these advances, a large gap still separates robot and human perception: Humans are able to quickly form a holistic representation of the scene that encompasses both geometric and semantic aspects, are robust to a broad range of perceptual conditions, and are able to learn without low-level supervision. This talk discusses recent efforts to bridge these gaps. First, we show that scalable metric-semantic scene understanding requires hierarchical representations; these hierarchical representations, or 3D scene graphs, are key to efficient storage and inference, and enable real-time perception algorithms. Second, we discuss progress in the design of certifiable algorithms for robust estimation; in particular we discuss the notion of "estimation contracts", which provide first-of-a-kind performance guarantees for estimation problems arising in robot perception. Finally, we observe that certification and self-supervision are twin challenges, and the design of certifiable perception algorithms enables a natural self-supervised learning scheme; we apply this insight to 3D object pose estimation and present self-supervised algorithms that perform on par with state-of-the-art, fully supervised methods, while not requiring manual 3D annotations.
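
For readers unfamiliar with the term, the toy sketch below shows the general shape of a hierarchical scene-graph representation: nodes at different abstraction layers (building, room, object) carrying coarse geometry and semantics, linked by parent/child edges and supporting simple semantic queries. It is a hypothetical, minimal data structure for illustration only, not the representation, software, or API discussed in the talk.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str
    layer: str                      # e.g. "building", "room", "object"
    centroid: tuple                 # coarse geometry (x, y, z)
    semantics: str = ""             # e.g. semantic class label
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

def find(node, cls):
    """List all descendants (and the node itself) with a given semantic class."""
    hits = [node] if node.semantics == cls else []
    for c in node.children:
        hits.extend(find(c, cls))
    return hits

building = SceneNode("building_1", "building", (0.0, 0.0, 0.0))
kitchen = building.add(SceneNode("kitchen", "room", (3.0, 1.0, 0.0)))
kitchen.add(SceneNode("mug_7", "object", (3.2, 1.1, 0.9), semantics="mug"))
print([n.name for n in find(building, "mug")])
```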

Bio:

Luca Carlone is the Leonardo Career Development Associate Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the Best Student Paper Award at IROS 2021, the Best Paper Award in Robot Vision at ICRA 2020, a 2020 Honorable Mention from the IEEE Robotics and Automation Letters, a Track Best Paper award at the 2021 IEEE Aerospace Conference, the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, and the Best Paper Award at WAFR 2016 and the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and he was a best paper finalist at RSS 2015, RSS 2021, and WACV 2023. He is also a recipient of the AIAA Aeronautics and Astronautics Advising Award (2022), the NSF CAREER Award (2021), the RSS Early Career Award (2020), the Google Daydream award (2019), the Amazon Research Award (2020, 2022), and the MIT AeroAstro Vickie Kerrebrock Faculty Award (2020). He is an IEEE senior member and an AIAA associate fellow. At MIT, he teaches "Robotics: Science and Systems," the introduction to robotics for MIT undergraduates, and he created the graduate-level course "Visual Navigation for Autonomous Vehicles," which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.

"Review of the Mid-Cycle Update to the National Robotics Roadmap"

Thursday, April 27th @ 11am

Franklin Antonio Hall 3002 - Zoom https://ucsd.zoom.us/j/92099821298

Speaker: Henrik I. Christensen

The US National Robotics Roadmap is published every four years as a strategy for opportunities and challenges in robotics. Due to significant changes post-COVID, a new US administration, and global mega-trends, a revision of the strategy was recently published. In this presentation we will discuss the process, societal mega-trends, how they impact robotics, and research challenges in robotics. We will also have a brainstorm about fundamental challenges in robotics.

Roadmap Document: https://cra.org/ccc/wp-content/uploads/sites/2/2023/04/Robotics-Mid-Cycle-White-Paper.pdf

"Learning Contact and Pressure for Grasping"

Thursday, Jan. 12th @ 11am PST

CSE 4258 and Zoom - https://ucsd.zoom.us/j/94448213760

Speaker: James Hays, Georgia Tech 

Human hands are amazing. They can grasp and manipulate complex objects. They can apply pressure with strength or with subtlety. We'd like robots to be able to perform manipulation like humans can, and we'd like robots to understand how humans are using their hands in shared environments. In this talk I'll present a line of research that measures the contact and pressure of human hands by using thermal imagery and pressure sensors, then tries to learn to predict these attributes from various modalities, and finally tries to transfer some of the findings to robots.

Bio:

James is an Associate Professor of Computing at the Georgia Institute of Technology. From 2017 to 2022 he was also a Principal Scientist at Argo AI. Previously, James was the Manning Assistant Professor of Computer Science at Brown University. James received his Ph.D. from Carnegie Mellon University and was a postdoc at the Massachusetts Institute of Technology. His research interests span computer vision, robotics, and machine learning. His research often involves finding new data sources to exploit (e.g., geotagged imagery, thermal imagery) or creating new datasets where none existed (e.g., human sketches, HD maps). James is the recipient of the NSF CAREER award and a Sloan Fellowship.

"Robot Imagination: Affordance-Based Reasoning about Unknown Objects"

Monday, Dec. 5th @ 11am PST

ATK 4004 / Zoom - https://ucsd.zoom.us/j/95963442435

Speaker:   Gregory S. Chirikjian
                  Dept. of Mechanical Engineering
                  National University of Singapore

Today’s robots are very brittle in their intelligence. This follows from a legacy of industrial robotics where robots pick and place known parts repetitively. For humanoid robots to function as servants in the home and in hospitals they will need to demonstrate higher intelligence, and must be able to function in ways that go beyond the stiff prescribed programming of their industrial counterparts. A new approach to service robotics is discussed here. The affordances of common objects such as chairs, cups, etc., are defined in advance. When a new object is encountered, it is scanned and a virtual version is put into a simulation wherein the robot "imagines" how the object can be used. In this way, robots can reason about objects that they have not encountered before, and for which they have had no training. Videos of physical demonstrations will illustrate this paradigm, which the presenter has developed with his students Hongtao Wu, Meng Xin, Sipu Ruan, and others.

Bio:

Gregory S. Chirikjian received undergraduate degrees from Johns Hopkins University in 1988 and a Ph.D. degree from the California Institute of Technology, Pasadena, in 1992. From 1992 until 2021, he served on the faculty of the Department of Mechanical Engineering at Johns Hopkins University, attaining the rank of full professor in 2001. Additionally, from 2004-2007, he served as department chair. In January 2019, he moved to the National University of Singapore, where he serves as Head of the Mechanical Engineering Department and has hired 14 new professors so far. Chirikjian's research interests include robotics, applications of group theory in a variety of engineering disciplines, applied mathematics, and the mechanics of biological macromolecules. He is a 1993 National Science Foundation Young Investigator, a 1994 Presidential Faculty Fellow, and a 1996 recipient of the ASME Pi Tau Sigma Gold Medal. In 2008, Chirikjian became a fellow of the ASME, and in 2010, he became a fellow of the IEEE. From 2014-15, he served as a program director for the US National Robotics Initiative, which included responsibilities in the Robust Intelligence cluster in the Information and Intelligent Systems Division of CISE at NSF. Chirikjian is the author of more than 250 journal and conference papers and the primary author of three books, including Engineering Applications of Noncommutative Harmonic Analysis (2001) and Stochastic Models, Information Theory, and Lie Groups, Vols. 1+2 (2009, 2011). In 2016, an expanded edition of his 2001 book was published as a Dover book under a new title, Harmonic Analysis for Engineers and Applied Scientists.

"Understanding the Utility of Haptic Feedback in Telerobotic Devices"

Thursday, Dec.1st @ 11am PST

CSE 2154 / Zoom - https://ucsd.zoom.us/j/92050012028

Speaker:  Jeremy D. Brown

The human body is capable of dexterous manipulation in many different environments. Some environments, however, are challenging to access because of distance, scale, and limitations of the body itself. In many of these situations, access can be effectively restored via a telerobot. Dexterous manipulation through a telerobot is possible only if the telerobot can accurately relay any sensory feedback resulting from its interactions in the environment to the operator. In this talk, I will discuss recent work from our lab focused on the application of haptic feedback in various telerobotic applications. I will begin by describing findings from recent investigations comparing different haptic feedback and autonomous control approaches for upper-extremity prosthetic limbs, as well as the cognitive load of haptic feedback in these prosthetic devices. I will then discuss recent discoveries on the potential benefits of haptic feedback in robot-assisted minimally invasive surgery (RAMIS) training. Finally, I will discuss current efforts in our lab to measure haptic perception through novel telerobotic interfaces.

Bio:

Jeremy D. Brown is the John C. Malone Assistant Professor in the Department of Mechanical Engineering at Johns Hopkins University, where he directs the Haptics and Medical Robotics (HAMR) Laboratory. He is also a member of the Laboratory for Computational Sensing and Robotics (LCSR) and the Malone Center for Engineering in Healthcare. Prior to joining Hopkins, Jeremy was a Postdoctoral Research Fellow at the University of Pennsylvania in the Haptics Research Group, which is part of Penn's General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory. He received his M.S. and Ph.D. degrees in Mechanical Engineering at the University of Michigan. He also holds B.S. degrees in Applied Physics and Mechanical Engineering from Morehouse College and the University of Michigan, respectively, as a graduate of the Atlanta University Center's Dual Degree Engineering Program. Brown's team uses methods from human perception, motor control, neurophysiology, and biomechanics to study the human perception of touch, especially as it relates to applications of human-robot interaction and collaboration. Brown's work has appeared in a number of peer-reviewed journals, including the Journal of Neuroengineering and Rehabilitation, IEEE Transactions on Haptics, IEEE Transactions on Biomedical Engineering, IEEE Transactions on Neural Systems and Rehabilitation Engineering, and IEEE Transactions on Medical Robotics and Bionics.

"Using Data for Increased Realism and Immersion with Haptic Modeling and Devices"

Thursday, November 17th @ 11am PST

CSE 2154 / Zoom https://ucsd.zoom.us/j/92050012028

Speaker:  Heather Culbertson

As technology advances, more of our daily lives are spent online and in front of screens. However, the digital interactions remain unsatisfying and limited, representing the human as having only two sensory inputs: visual and auditory. We have learned to adapt to using digital devices, communicating through keyboards, mice, and touchscreens, but these input methods are unnatural and provide limited information to the user. In this talk I will discuss our methods for creating more realistic and immersive virtual interactions using our unique approach that integrates design, mechatronics, and neuroscience. Our approach combines machine learning and data recorded from real-world interactions with objects with the goal of creating virtual objects that are indistinguishable from real life. This talk will cover the data-processing, algorithms, and hardware needed to model and render virtual objects and interactions for a variety of scenarios. I will also discuss current challenges and future directions in the field of haptics.

Bio:

Heather Culbertson is a Gabilan Assistant Professor of Computer Science at the University of Southern California. Her research focuses on the design and control of haptic devices and rendering systems, human-robot interaction, and virtual reality. Particularly she is interested in creating haptic interactions that are natural and realistically mimic the touch sensations experienced during interactions with the physical world. Previously, she was a research scientist in the Department of Mechanical Engineering at Stanford University. She received her PhD in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania in 2015. She is currently serving as Co-Chair for the IEEE Haptics Symposium. Her awards include the NSF CAREER Award, IEEE Technical Committee on Haptics Early Career Award, and Best Paper at UIST.

"Challenges and Opportunities for Autonomous Robots in the Wild"

Thursday, November 10th @ 11am PST

Zoom: https://ucsd.zoom.us/j/92050012028 (and viewing CSE 2154)

Speaker:  Chris Heckman

When we in the robotics research community think of what we'd like autonomous agents to tackle in the future, we often target "dull, dirty, and dangerous" tasks. However, despite a sustained boom in robotics research over the last decade, the number of places we've seen robotics in use for these tasks has been uninspiring. Successful commercialization of autonomous robots has required significant human scaffolding through teleoperation and incredible amounts of capital to achieve, and despite this, deployments are still limited by brittle systems and hand-engineered components. The reality seems to be that these tasks are not nearly as dull as they might seem on the surface, and instead require ingenuity for success some small but critical fraction of the time. In this talk, I focus on my recent investigation into where the limits of autonomy are for the highly sought-after application to subterranean emergency response operations. This application was motivated by the DARPA Subterranean Challenge, which just last year concluded with the CU Boulder team "MARBLE" taking third place and winning a $500,000 prize. In this talk, I will give an overview of the genesis of our solution over three years of effort, especially with respect to mobility, autonomy, perception, and communications. I'll also discuss the implications for present-day robotic autonomy and where we go from here.

Bio:

Chris Heckman is an Assistant Professor in the Department of Computer Science at the University of Colorado at Boulder and the Jacques I. Pankove Faculty Fellow in the College of Engineering & Applied Science. He is also a Visiting Academic with Amazon Scout, where he is addressing last-mile delivery by developing autonomous robots on sidewalks. He earned his BS in Mechanical Engineering from UC Berkeley in 2008 and his PhD in Theoretical and Applied Mechanics from Cornell University in 2012, where he was an NSF Graduate Research Fellow. He had postdoctoral appointments at the Naval Research Laboratory in Washington, DC as an NRC Research Associate, and at CU Boulder as a Research Scientist, before joining the faculty there in 2016.

"Origami-Inspired Design for Compliant and Reconfigurable Robots"

Thursday, November 3rd @ 11am PDT

CSE 2154 / Zoom https://ucsd.zoom.us/j/92050012028

Speaker:  Cynthia Sung

Recent years have seen a large interest in soft robotic systems, which provide new opportunities for machines that are flexible, adaptable, safe, and robust. In this talk, I will share efforts from my group to use origami-inspired design approaches to create compliant robots capable of executing a variety of shape-changing and dynamical tasks. I will show how the kinematics and compliance of a mechanism can be designed to produce a particular mechanical response, how we can leverage these designs for better performance and simpler control, and how we approach these problems computationally to design new robots with capabilities such as hopping, swimming, and flight.

Bio:

Cynthia Sung is the Gabel Family Term Assistant Professor in the Department of Mechanical Engineering and Applied Mechanics (MEAM) and a member of the General Robotics, Automation, Sensing & Perception (GRASP) lab at the University of Pennsylvania. She received a Ph.D. in Electrical Engineering and Computer Science from MIT in 2016 and a B.S. in Mechanical Engineering from Rice University in 2011. Her research interests are computational methods for design automation of robotic systems, with a particular focus on origami-inspired and compliant robots. She is the recipient of a 2023 ONR Young Investigator award, a 2020 Johnson & Johnson Women in STEM2D Scholars Award, and a 2019 NSF CAREER award.

Website: sung.seas.upenn.edu

 

"From Physical Reasoning to Image-based Reactive Manipulation"

Thursday, October 27th @ 11am PDT

Zoom: https://ucsd.zoom.us/j/92050012028 (and viewing CSE 2154)

Speaker:  Marc Toussaint

Our work on Task and Motion Planning (TAMP) in recent years has provided some insights into how complex manipulation problems could in principle be tackled. However, it also raises classical questions: 1) How to reactively control and execute robotic manipulation? 2) How to realize physical reasoning and manipulation planning based on perception rather than a known mesh-based scene? 3) How to leverage learning from a model-based solver? I will discuss our team's recent research on these questions.

Bio:

Marc Toussaint has been Professor for Intelligent Systems at TU Berlin since March 2020 and was a Max Planck Fellow at the MPI for Intelligent Systems from 2018-21. In 2017/18 he spent a year as a visiting scholar at MIT and, before that, some months with Amazon Robotics; from 2012 he was Professor for Machine Learning and Robotics at the University of Stuttgart. In his view, a key to understanding and creating strongly generalizing intelligence is the integration of learning and reasoning. His research combines AI planning, optimization, and machine learning to tackle fundamental problems in robotics. His work was awarded best paper at R:SS'18 and ICMLA'07, and was runner-up at R:SS'12 and UAI'08.

"Visual Recognition Beyond Appearances, and its Robotic Applications"

Thursday, October 13th @ 11am PDT

CSE 2154 / Zoom https://ucsd.zoom.us/j/92050012028

Speaker:  Yezhou (YZ) Yang

The goal of Computer Vision, as coined by Marr, is to develop algorithms to answer What is Where at When from visual appearance. The talk will explore the importance of studying underlying entities and relations beyond visual appearance, following an Active Perception paradigm. The talk will give an overview of a broad set of efforts, ranging from 1) reasoning beyond appearance for visual question answering, image/video captioning tasks, and their evaluation, through 2) counterfactual reasoning about implicit physical properties, to 3) their roles in a robotic visual learning framework via a robotic indoor object search task.

Bio:

Yezhou (YZ) Yang is an Associate Professor and a Fulton Entrepreneurial Professor in the School of Computing and Augmented Intelligence (SCAI) at Arizona State University. He founded and directs the ASU Active Perception Group, and serves as the topic lead (situation awareness) at the Institute of Automated Mobility, Arizona Commerce Authority. He also serves as an area lead at Advanced Communications Technologies (ACT, a Science and Technology Center under the New Economy Initiative, Arizona). His work includes exploring visual primitives and representation learning in visual (and language) understanding, grounding them by natural language and high-level reasoning over the primitives for intelligent systems, secure/robust AI, and fair V&L model evaluation. Yang is a recipient of the Qualcomm Innovation Fellowship 2011, the NSF CAREER award 2018, and the Amazon AWS Machine Learning Research Award 2019. He received his Ph.D. from the University of Maryland at College Park and his B.E. from Zhejiang University, China. He is a co-founder of ARGOS Vision.

"Robots That Aren't Afraid to Harness Contact"

Thursday, October 6th @ 11am PDT

CSE 2154 / Zoom https://ucsd.zoom.us/j/92050012028

Speaker:  Dr. Hannah Stuart

For robots to perform helpful manual tasks, they must be able to physically interact with the real world. The ability of robots to grasp and manipulate often depends on the strength and reliability of contact conditions, e.g., friction. In this talk, I will introduce how my lab is developing tools for "messy" or adversarial contact conditions -- granular/rocky media, fluids, human interaction -- to support the design of more capable systems. Developing models of contact enables parametric studies that can powerfully inform the mechanical design of robots. Coupled with prototyping and experimental exploration, we generate new systems that better embody desired capabilities. 

In particular, we are creating grippers, skins, tactile sensors, and wearables for the hands -- focusing on the point of contact. In this talk, I will draw upon recent examples including how we are (1) harnessing fluid flow in soft grippers to improve and monitor grasp state in unique ways and (2) modeling granular interaction forces to support new single- and multi-agent capabilities in loose terrains.

Biosketch

Dr. Hannah Stuart is the Don M. Cunningham Assistant Professor in the Department of Mechanical Engineering at the University of California at Berkeley. She received her BS in Mechanical Engineering at the George Washington University in 2011, and her MS and PhD in Mechanical Engineering at Stanford University in 2013 and 2018, respectively. Her research focuses on understanding the mechanics of physical interaction in order to better design systems for dexterous manipulation. Applications range from remote robotics to assistive orthotics to tactile sensor and skin design. Recent awards include the NASA Early Career Faculty grant and Johnson & Johnson Women in STEM2D grant.

"Robot Learning: Towards Real World Applications"

Thursday, May 19th @ 11AM PDT (Virtual)

Zoom:  https://ucsd.zoom.us/j/96737331505

Speaker:  Stefan Schaal, Google X, Intrinsic

Machine learning for robotics, particularly in the context of deep learning and reinforcement learning, has demonstrated remarkable results in recent years. From the viewpoint of reliability and robustness of performance, however, there is a significant gap between improving a robotic skill, i.e., just getting better, and actually achieving reliability, which is often expressed in success rates clearly above 99%. From experience in the past, reliability requires exploiting the best of all the ingredients that are needed in a robotic skill, e.g., including control, perception (vision, force, tactile, etc.), and machine learning. This talk will touch on various related topics that we have been pursuing in recent years, including interesting challenge domains for robot manipulation, exploiting impedance control for contact-rich manipulation, deep learning for various perception tasks, and meta reinforcement learning to learn new manipulation skills in fractions of an hour with close to 100% success rate.

Bio: 

Stefan Schaal is a German-American computer scientist specializing in robotics, machine learning, autonomous systems, and computational neuroscience. Stefan held positions at MIT, Georgia Tech, and ATR before joining USC as a faculty member. He was also a founding director of the Max Planck Institute for Intelligent Systems in Tübingen. Since 2018 he has led a robotics research team at Google X. Stefan Schaal's interests focus on autonomous perception-action-learning systems, in particular anthropomorphic robotic systems. He works on topics of machine learning for control, control theory, computational neuroscience for neuromotor control, experimental robotics, reinforcement learning, artificial intelligence, and nonlinear dynamical systems.

"Motion Planning Around Obstacles with Convex Optimization"

Thursday, May 12th @ 11am PDT

This is a hybrid seminar with in-person presentation in EBU-1 Qualcomm Large Conference Room (first floor) and

Zoom Link: https://ucsd.zoom.us/j/96958677518

Speaker:  Russ Tedrake

In this talk, I'll describe a new approach to planning that strongly leverages both continuous and discrete/combinatorial optimization. The framework is fairly general, but I will focus on a particular application of the framework to planning continuous curves around obstacles. Traditionally, these sorts of motion planning problems have either been solved by trajectory optimization approaches, which suffer from local minima in the presence of obstacles, or by sampling-based motion planning algorithms, which can struggle with derivative constraints and in very high dimensions. In the proposed framework, called Graph of Convex Sets (GCS), we can recast the trajectory optimization problem over a parametric class of continuous curves into a problem combining convex optimization formulations for graph search and for motion planning. The result is a non-convex optimization problem whose convex relaxation is very tight, to the point that we can very often solve very complex motion planning problems to global optimality using the convex relaxation plus a cheap rounding strategy. I will describe numerical experiments of GCS applied to a quadrotor flying through buildings and robotic arms moving through confined spaces. On a seven-degree-of-freedom manipulator, GCS can outperform widely-used sampling-based planners by finding higher-quality trajectories in less time, and in 14 dimensions (or more) it can solve problems to global optimality which are hard to approach with sampling-based techniques.

Joint work with Tobia Marcucci, Mark Petersen, David von Wrangel, and Pablo Parrilo
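
To make the flavor of the convex piece concrete, here is a minimal, hypothetical sketch (not the speakers' GCS code): once a sequence of overlapping convex regions has been fixed, choosing waypoints that stay inside those regions while minimizing path length is a single convex program. GCS goes further by also optimizing which regions to traverse, via the tight convex relaxation described above. The boxes, start, and goal below are made-up placeholders.

```python
import cvxpy as cp
import numpy as np

# Hypothetical overlapping axis-aligned boxes forming a collision-free corridor in 2-D.
boxes = [
    (np.array([0.0, 0.0]), np.array([2.0, 1.5])),
    (np.array([1.5, 0.5]), np.array([3.0, 3.0])),
    (np.array([2.5, 2.0]), np.array([5.0, 4.0])),
]
start, goal = np.array([0.5, 0.5]), np.array([4.5, 3.5])

K = len(boxes)
# Waypoints x[0..K]; segment i runs from x[i] to x[i+1] and must stay inside box i.
x = [cp.Variable(2) for _ in range(K + 1)]
constraints = [x[0] == start, x[K] == goal]
for i, (lo, hi) in enumerate(boxes):
    # Both endpoints of segment i lie in box i, so the whole segment does (convexity).
    constraints += [x[i] >= lo, x[i] <= hi, x[i + 1] >= lo, x[i + 1] <= hi]

# Minimize total path length -- a small second-order cone program.
objective = cp.Minimize(sum(cp.norm(x[i + 1] - x[i]) for i in range(K)))
cp.Problem(objective, constraints).solve()
print([pt.value.round(2) for pt in x])
```

The hard part of the full problem is the discrete choice of which regions to use; the contribution described in the talk is that the relaxation of that mixed problem is often tight enough to recover the globally optimal trajectory.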

Bio:

Russ is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, the Director of the Center for Robotics at the Computer Science and Artificial Intelligence Lab, and the leader of Team MIT's entry in the DARPA Robotics Challenge. Russ is also the Vice President of Robotics Research at the Toyota Research Institute. He is a recipient of the 2021 Jamieson Teaching Award, the NSF CAREER Award, the MIT Jerome Saltzer Award for undergraduate teaching, the DARPA Young Faculty Award in Mathematics, the 2012 Ruth and Joel Spira Teaching Award, and was named a Microsoft Research New Faculty Fellow.

Russ received his B.S.E. in Computer Engineering from the University of Michigan, Ann Arbor, in 1999, and his Ph.D. in Electrical Engineering and Computer Science from MIT in 2004, working with Sebastian Seung. After graduation, he joined the MIT Brain and Cognitive Sciences Department as a Postdoctoral Associate. During his education, he has also spent time at Microsoft, Microsoft Research, and the Santa Fe Institute.

"Beyond Recommender Engines and Image Recognition: Applications of AI/ML in the Department of Defense (DOD)"

Thursday, May 5th @ 11am

We will be hybrid with presentations in Atkinson Hall Room 4004 and also have a Zoom option: https://ucsd.zoom.us/j/94186731965

Speakers:  Rey Nicolas and Dr. MacAllister, General Atomics Aeronautical Systems

This is a presentation on key AI/ML technologies that General Atomics Aeronautical (GA-ASI) is developing to solve tough DoD problems utilizing current and next-generation GA-ASI Unmanned Aerial Vehicles (UAVs).

Rey Nicolas, Director of Software, Autonomy and AI Solutions, leads General Atomics Aeronautical System’s Autonomy and AI teams developing next-generation Autonomous Command and Control (C2) systems, Autonomous Processing, Exploitation, Dissemination (PED) systems, and AI edge processing for flight and sensor autonomy, air-to-air combat, and air-to-ground Intelligence, Surveillance, and Reconnaissance (ISR) missions. Rey is also an Executive Committee Leader for SAE Standard Works that is developing AI in Aviation Safety Standards. Previously, Rey worked for Intel Corporation and Motorola/Google leading AI R&D and product development for multiple product lines in smart and connected home, autonomous driving, and healthcare applications. Rey has published more than 10 research publications in Deep Learning. Rey holds a bachelor’s degree in Mechanical Engineering from the University of California San Diego, a Master’s in Computer Science from the University of Illinois, and an MBA from San Diego State University.

Dr. MacAllister is an AI/ML Solutions Architect and Sr. AI/ML Manager for General Atomics Aeronautical Systems. Her work focuses on prototyping novel machine learning algorithms, developing machine learning algorithms using sparse, heterogeneous or imbalanced data sets, and exploratory data analytics. Currently, Dr. MacAllister is leading the exploration of novel air-to-air combat strategies through faster-than-real-time simulation and Reinforcement Learning agents for the General Atomics Advanced Programs division. Throughout her career she has provided key contributions in a number of interdisciplinary areas such as prognostic health management, human performance augmentation, advanced sensing, and artificial intelligence aids for future warfare strategies. This work has received several awards and commendations from internal technical review bodies. Dr. MacAllister has published over two dozen peer-reviewed technical papers, conference proceedings, and journal articles across an array of emerging technology concepts. In addition, throughout her career, she has worked on a number of critical defense programs such as the F-35 Joint Strike Fighter, DARPA Air Combat Evolution (ACE) and the General Atomics Predator family of systems. She also serves on the organizing committee of the Interservice/Industry Training, Simulation & Education Conference (I/ITSEC). Dr. MacAllister holds a bachelor's degree from Iowa State University (ISU) in Mechanical Engineering. She also holds a Master's and PhD from ISU in Mechanical Engineering and Human-Computer Interaction.

"Towards Safe, Data-Driven Autonomy"

Thursday, April 28th @ 11am PDT (Virtual)

Zoom:  https://ucsd.zoom.us/j/98641522776

Speaker:  Dr. Marco Pavone, Stanford University

AI-powered autonomous vehicles that can learn, reason, and interact with people are no longer science fiction. Self-driving cars, unmanned aerial vehicles, and autonomous spacecraft, among others, are continually increasing in capability and seeing incremental deployment in more and more domains. However, fundamental research questions still need to be addressed in order to achieve full and widespread vehicle autonomy. In this talk, I will discuss our work on addressing key open problems in the field of vehicle autonomy, particularly in pursuit of safe, data-driven autonomy stacks. Specifically, I will discuss (1) robust human prediction models for both simulation and real-time decision making, (2) AI safety frameworks for autonomous systems, and (3) novel, highly integrated autonomy architectures that are amenable to end-to-end training while retaining a modular, interpretable structure. The discussion will be grounded in autonomous driving and aerospace robotics applications.

Bio:

Dr. Marco Pavone is an Associate Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory and Co-Director of the Center for Automotive Research at Stanford. He is currently on a partial leave of absence at NVIDIA serving as Director of Autonomous Vehicle Research. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on self-driving cars, autonomous aerospace vehicles, and future mobility systems. He is a recipient of a number of awards, including a Presidential Early Career Award for Scientists and Engineers from President Barack Obama, an Office of Naval Research Young Investigator Award, a National Science Foundation Early Career (CAREER) Award, a NASA Early Career Faculty Award, and an Early-Career Spotlight Award from the Robotics: Science and Systems Foundation. He was identified by the American Society for Engineering Education (ASEE) as one of America's 20 most highly promising investigators under the age of 40.

"On the Question of Representations in Robotics"

Thursday, April 21st @ 11am PDT (Virtual)

Zoom:  https://ucsd.zoom.us/j/92110938804

Speaker:  Jeannette Bohg, Professor for Robotics, Stanford University

General-purpose autonomous robots cannot be preprogrammed for all the tasks they will be required to do. Just like humans, they should be able to acquire new skills and adapt existing ones to novel tasks and contexts. One approach is to develop general-purpose, task-independent representations of the semantic entities needed in such tasks. Representations like this promise effectiveness through compressing a lot of information into a single symbol, which can then generalise to all possible situations and contexts.

However, in my work I have yet to encounter general-purpose, task-independent representations that enable robust decision-making on a robot. Even if working with learned representations, the learning process, objectives and model architectures are at least in part task-specific.

In this talk, I will discuss three of my research projects that each employ different representation learning techniques to enable physical interaction with an environment. First, I will discuss our work that uses self-supervision to learn a compact and multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of deep reinforcement learning. I present experiments on a peg insertion task where the learned representation generalises over different geometries, configurations, and clearances, while enabling the policy to be robust to external perturbations. Second, I will discuss our recent work on learning to manipulate deformable objects, where we represent the visual data using learned keypoint representations. Specifically, we show how these keypoints enable learning from human video demonstrations in an active learning setting. And third, I will present work on using an environment representation that is learned through self-supervision for closed-loop navigation. Specifically, I will show how we can use Neural Radiance Fields to enable closed-loop navigation of a robot equipped only with an RGB camera.

The commonality in all these works is that they employ learned representations trained through self-supervision. I will close the talk by discussing the lessons learned, the opportunities I see, and open questions about what the desirable characteristics of representations in robotics may be.

Bio:

Jeannette Bohg is a Professor for Robotics at Stanford University, where she directs the Interactive Perception and Robot Learning Lab. In general, her research explores two questions: What are the underlying principles of robust sensorimotor coordination in humans, and how can we implement them on robots? Research on this topic necessarily sits at the intersection of Robotics, Machine Learning and Computer Vision. Her lab is specifically interested in Robotic Grasping and Manipulation.

"Online Mobile Robot Motion Planning and Trajectory Control Under Uncertainty in Partially-known Environments"

Thursday 7th April 2022 @ 11AM PST in Atkinson Hall, Room 5004 and on Zoom (i.e. in person with a zoom backup)

Zoom Meeting:  https://ucsd.zoom.us/j/97819733919

Speaker:  Dr. Konstantinos Karydis, UCR 

Mobile robot motion planning and trajectory tracking control under uncertainty is a challenging yet rewarding foundational robotics research problem with extensive applications across domains including intelligence, surveillance and reconnaissance (ISR), remote sensing, and precision agriculture.  One important challenge is operation in unknown or partially-known environments where planning decisions and control input adaptation need to be made at run-time.  In this talk we consider how to address two specific cases:  1) How to achieve resolution-complete field coverage considering the non-holonomic mobility constraints in commonly-used vehicles (e.g., wheeled robots) without prior information about the environment?  2) How to develop resilient, risk-aware and collision-inclusive planning and control algorithms to enable (collision-resilient) mobile robots to deliberately choose when to collide with locally-sensed obstacles to improve some performance metrics (e.g., total time to reach a goal). 

To this end, we have proposed a hierarchical, hex-decomposition-based coverage planning algorithm for unknown, obstacle-cluttered environments. The proposed approach ensures resolution-complete coverage, can be tuned to achieve fast exploration, and plans smooth paths for Dubins vehicles to follow at constant velocity in real-time. Our approach can successfully trade off between coverage and exploration speed, and can outperform existing online coverage algorithms in terms of total covered area or exploration speed according to how it is tuned. Further, we have introduced new sampling- and search-based online collision-inclusive motion planning algorithms for impact-resilient robots that can explicitly handle the risk of colliding with the environment and can switch between collision avoidance and collision exploitation. Central to the planners’ capabilities is a novel joint optimization function that evaluates the effect of possible collisions using a reflection model. This way, the planner can make deliberate decisions to collide with the environment if such a collision is expected to help the robot make progress toward its goal. To make the algorithm online, we present state expansion pruning techniques that can significantly reduce the search space while ensuring completeness.
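
As a purely illustrative sketch (this is not Dr. Karydis's planner or cost function), the snippet below shows one way a search-based planner might score an expansion that deliberately collides: predict the post-impact velocity with a simple specular reflection model and compare an estimated time-to-goal for the "avoid" and "collide" options. All quantities, the rollout, and the parameter values are hypothetical.

```python
import numpy as np

def reflect(velocity, normal, restitution=0.7):
    """Specular reflection of the pre-impact velocity about the obstacle surface normal."""
    n = normal / np.linalg.norm(normal)
    return restitution * (velocity - 2.0 * np.dot(velocity, n) * n)

def time_to_goal(pos, goal, v_max=1.0):
    """Crude lower bound on the remaining travel time, ignoring obstacles."""
    return np.linalg.norm(goal - pos) / v_max

def expansion_cost(pos, vel, goal, obstacle_normal=None, dt=0.5, collision_risk=0.5):
    """Lower is better. A predicted collision is not pruned outright; the planner rolls
    the reflected velocity forward for dt and adds a fixed risk penalty."""
    if obstacle_normal is None:                       # collision-free expansion
        return time_to_goal(pos + vel * dt, goal)
    vel_after = reflect(vel, obstacle_normal)         # post-impact velocity
    return time_to_goal(pos + vel_after * dt, goal) + collision_risk

# Compare avoiding vs. deliberately colliding with a wall at the current node.
pos, vel, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 2.0])
print(expansion_cost(pos, vel, goal))                                        # avoid
print(expansion_cost(pos, vel, goal, obstacle_normal=np.array([0.0, -1.0]))) # collide
```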

Bio:

Dr. Karydis is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of California, Riverside (UCR).  Before joining UCR, he worked as a Post-Doctoral Researcher in Robotics in GRASP Lab, which is part of the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania (Penn).  His work was supported by Dr. Vijay Kumar, Professor and Nemirovsky Family Dean of Penn Engineering.  He completed his doctoral studies in the Department of Mechanical Engineering at the University of Delaware, under the guidance of Prof. Herbert Tanner and Prof. Ioannis Poulakakis.

Dr. Karydis’s research program addresses foundational robotics research problems underlying applications in emergency response, environmental sensing, precision agriculture, and (more recently) pediatric rehabilitation.  His research seeks to enable existing and new robot embodiments (and teams thereof) to operate in efficient and resilient manners autonomously and/or in cooperation with humans despite the presence of uncertainty associated with action, perception, and the operating environment.

"Animal and Robot Locomotion in Complex 3-D Terrain"

Thursday, March 31 @ 11am PST (Virtual)

Join Zoom Meeting
https://ucsd.zoom.us/j/94266860659

Speaker: Chen Li

Faculty Host: Nick Gravish

Mobile robots must move through complex environments in many applications. A highly successful approach is to create geometric maps of the environment and plan and follow a collision-free, safe trajectory to avoid sparse large obstacles, as exemplified by self-driving cars. However, many crucial applications require robots to traverse complex 3-D terrain with cluttered obstacles as large as themselves, such as search and rescue in rubble, environmental monitoring in forests and mountains, and sample collection in extraterrestrial rocks. Although many animals traverse such complex 3-D terrain with ease, bio-inspired robots are still poor at doing so. This is largely due to a lack of fundamental understanding of how to use and control physical interaction with such terrain to move through it. Here, I review my lab’s progress towards filling this gap by integrating animal experiments, robot experiments, and physics modeling.

In the first half of the talk, I will discuss work on locomotor transitions of insects and multi-legged robots. Previous studies of multi-legged locomotion largely focused on how to generate near-steady-state walking and running on relatively flat surfaces and how to stabilize it when perturbed. By contrast, multi-legged locomotion in complex 3-D terrain occurs by stochastically transitioning across stereotyped locomotor modes. A potential energy landscape approach revealed surprising general principles for a diversity of locomotor challenges encountered in such terrain—why these modes occur, and how to transition across them.

In the second half, I will discuss work on limbless locomotion of snakes and snake robots. Previous studies of limbless locomotion largely focused on how to move on relatively flat surfaces with sparse vertical structures as anchor points for stability or push points for propulsion. By contrast, limbless locomotion in 3-D terrain benefits from coordinated lateral and vertical body bending, which not only helps conform to the terrain for stability, but also provides access to many more push points for propulsion.

For both directions, we are currently working on how to sense physical interaction during locomotion and use feedback control to enable autonomous locomotion across complex 3-D terrain.

Bio:

Chen Li is an Assistant Professor in the Department of Mechanical Engineering and a faculty member of the Laboratory for Computational Sensing and Robotics at Johns Hopkins University. He earned B.S. and PhD degrees in physics from Peking University and Georgia Tech, respectively, and performed postdoctoral research in Integrative Biology and Robotics at UC Berkeley as a Miller Fellow. Dr. Li’s research aims at creating the new field of terradynamics, analogous to aero- and hydrodynamics, at the interface of biology, robotics, and physics, and using terradynamics to understand animal locomotion and advance robot locomotion in the real world.

Dr. Li has won several early career awards, including a Burroughs Wellcome Fund Career Award at the Scientific Interface, a Beckman Young Investigator Award, and an Army Research Office Young Investigator Award, and was selected as a Kavli Frontiers of Science Fellow by the National Academy of Sciences. He won a Best PhD Thesis award at Georgia Tech and several best student/highlight/best paper awards (Society for Integrative & Comparative Biology, Bioinspiration & Biomimetics, Advanced Robotics, IROS).

To learn more, visit Terradynamics Lab at: https://li.me.jhu.edu/.

"Towards Generalizable Autonomy: Duality of Discovery & Bias"

Monday, March 14 @ 11am PST

CSE 1242 &

Zoom: https://ucsd.zoom.us/j/97569479664

Password: 064751

Faculty Host: Henrik Christensen

Generalization in embodied intelligence, such as robotics, requires interactive learning across families of tasks, which is essential for discovering efficient representation and inference mechanisms. Current systems need a lot of hand-holding to learn even a single cognitive concept or dexterous skill, say “open a door”, let alone generalize to new windows and cupboards! This is far from our vision of everyday robots, which would require a broader concept of generalization and continual updating of representations.
My research vision is to build the Algorithmic Foundations for Generalizable Autonomy, which enables robots to acquire skills in cognitive planning & dexterous interaction, and, in turn, seamlessly interact & collaborate with humans. This study of the science of embodied AI opens three key questions: (a) Representational biases & Causal inference for interactive decision making, (b) Perceptual representations learned by and for interaction, (c) Systems and abstractions for scalable learning.

This talk will focus on decision-making, uncovering the many facets of inductive biases in off-policy reinforcement learning in robotics. I will introduce C-Learning to trade off speed and reliability instead of vanilla Q-Learning. Then I will talk about the discovery of latent causal structure to improve sample efficiency. Moving on from skills, I will describe task graphs for hierarchically structured manipulation tasks. I will present how to scale structured learning in robot manipulation with RoboTurk, and finally prescribe a practical algorithm for deployment with safety constraints. Taking a step back, I will end with notions of structure in Embodied AI for both perception and decision making.
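
For readers unfamiliar with the vanilla Q-Learning baseline that the abstract contrasts with C-Learning, the following minimal tabular sketch shows the off-policy update it refers to. This is the textbook algorithm, not the speaker's method; the chain environment and all hyperparameters are toy placeholders.

```python
import numpy as np

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1, horizon=100, seed=0):
    """step(state, action) -> (next_state, reward, done) is an assumed environment interface."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy behaviour policy (off-policy w.r.t. the greedy target policy)
            a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
            s_next, r, done = step(s, a)
            # off-policy TD target uses the greedy value of the next state
            Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s_next].max()) - Q[s, a])
            s = s_next
            if done:
                break
    return Q

def chain_step(s, a, n=5):
    """Toy 1-D chain: action 1 moves right, action 0 moves left; reward at the right end."""
    s_next = min(s + 1, n - 1) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == n - 1), s_next == n - 1

Q = q_learning(n_states=5, n_actions=2, step=chain_step)
print(Q.argmax(axis=1))  # greedy policy; expect mostly action 1 (move right)
```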

Biosketch:

Animesh Garg is a CIFAR Chair Assistant Professor of Computer Science at the University of Toronto, and a Faculty Member at the Vector Institute and the Robotics Institute. Animesh earned a Ph.D. from UC Berkeley and was a postdoc at the Stanford AI Lab. His work aims to build Generalizable Autonomy, which involves a confluence of representations and algorithms for reinforcement learning, control, and perception. In particular, he currently studies three aspects: learning structured inductive biases in sequential decision making, using data-driven causal discovery, and transfer to real robots, all in the purview of embodied systems.

His work has earned many best-paper recognitions at top-tier venues in machine learning and robotics, such as ICRA, IROS, RSS, the Hamlyn Symposium, and workshops at NeurIPS and ICML.

Homepage: http://animesh.garg.tech/

"Continuum Robots for Assistive Perception and Situational Awareness Augmentation: Applications in Surgery and Collaborative Manufacturing"

CRI Seminar - Thursday, March 3rd @ 11AM PST

Join URL: https://ucsd.zoom.us/j/99055507432

Speaker:  Nabil Simaan, Ph.D.
                 Dept. of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee

Hosted by:  Michael Yip

Emerging surgical paradigms such as natural orifice surgery and minimally invasive surgery in deep surgical sites present new challenges to surgeons and engineers. These new challenges stem from the limitations of the surgeon’s sensing and perception and from incomplete situational awareness. The speaker will present the concept of complementary perception and assistive behaviors as a means for increasing safety and overcoming these barriers. He will discuss how continuum robots and soft implantable devices can be endowed with distal tip force sensing, contact detection, and micron-scale motion resolution that enables in-vivo deployment of cellular-scale imaging modalities such as optical coherence tomography. We will focus on our efforts in modeling, designing and controlling intelligent surgical robots capable of sensing the environment and using the sensed information for task execution assistance and for situational awareness augmentation. Sample motivating applications in the areas of minimally invasive surgery of the upper airways, cochlear implant surgery, trans-urethral resection of bladder tumors, and OCT-guided retinal micro-surgery will be used to elucidate the potential of these robots. Finally, recent applications of continuum robots for collaborative manufacturing for safe physical human-robot interaction will be presented as an extension of the theoretical work done initially for surgical applications into a different application domain.

Bio:

Nabil Simaan (Ph.D. 2002, Technion, Israel) is a Professor of Mechanical Engineering, Computer Science and Otolaryngology at Vanderbilt University, Nashville, TN. He has served as an Editor for IEEE ICRA, an Associate Editor for IEEE TRO, an Associate Editor for ASME JMR, an editorial board member for Robotica, and a co-chair of the IEEE Technical Committee on Surgical Robotics. His research interests include parallel robotics, continuum robotics and the design of new robotic systems for dexterous and image-guided surgical robotics. His recent work focuses on the use of intraoperative sensing for enabling complementary situational awareness in robot-assisted surgery. In 2020 he was elected IEEE Fellow for contributions to dexterous continuum robotics for surgery. In 2021 he was elected ASME Fellow for pioneering contributions to the modeling, design and practice of continuum and soft robots for surgery.

"Rich Babies, Poor Robots: Towards Rich Sensing, Environments and Representations"

Seminar - Thursday, Feb. 24th @ 1-2pm PST 

Location: CSE 1242

Zoom link: https://ucsd.zoom.us/my/nicklashansen 

Speaker:  Abhinav Gupta

In recent years, we have seen a shift in different fields of AI such as computer vision and robotics. From task-driven supervised learning, we are now starting to see a shift towards more human-like learning. Self-supervised learning, embodied AI, and multimodal learning are a few subfields that have emerged from this shift. Yet I will argue that the shift is half-hearted in nature and there is a huge situational gap between babies (human learners) and current robots. Babies use five senses, multiple environments and rich forms of supervision. On the other hand, our AI algorithms still primarily use vision (in the best case), learn for pre-defined tasks, and use categories as supervision. In this talk, I will argue how to bridge this gap. First, I will talk about how to bring tactile sensing into the mainstream. More specifically, I will introduce our magnetic sensing skin called ReSkin. Next I will talk about how to formulate task-agnostic learning to learn from multiple environments. Finally, I will talk about using human functions as supervision to train representations.

Bio:

Abhinav Gupta is an Associate Professor at the Robotics Institute, Carnegie Mellon University and Research Manager at Facebook AI Research (FAIR). His research focuses on scaling up learning by building self-supervised, lifelong and interactive learning systems. Specifically, he is interested in how self-supervised systems can effectively use data to learn visual representation, common sense and representation for actions in robots. Abhinav is a recipient of several awards including IAPR 2020 JK Aggarwal Prize, PAMI 2016 Young Researcher Award, ONR Young Investigator Award, Sloan Research Fellowship, Okawa Foundation Grant, Bosch Young Faculty Fellowship, YPO Fellowship,  IJCAI Early Career Spotlight, ICRA Best Student Paper award, and the ECCV Best Paper Runner-up Award. His research has also been featured in Newsweek, BBC, Wall Street Journal, Wired and Slashdot.

Host:  Xiaolong Wang

"Building Robot Teammates for Dynamic Human Environments

CRI Seminar - Thursday, Feb 17th @ 11AM PST - Zoom Meeting

Join URL: https://ucsd.zoom.us/j/99055507432

Speaker:  Andrea Thomaz, CEO, Diligent Robotics 

Bio:

Andrea Thomaz is the CEO and Co-Founder of Diligent Robotics and a renowned social robotics expert. Her accolades include being recognized by the National Academy of Sciences as a Kavli Fellow, the US President’s Council of Advisors on Science and Tech, MIT Technology Review on its Next Generation of 35 Innovators Under 35 list, Popular Science on its Brilliant 10 list, TEDx as a featured keynote speaker on social robotics and Texas Monthly on its Most Powerful Texans of 2018 list. Andrea’s robots have been featured in the New York Times and on the covers of MIT Technology Review and Popular Science. Her passion for social robotics began during her work at the MIT Media Lab, where she focused on using AI to develop machines that address everyday human needs. Andrea co-founded Diligent Robotics to pursue her vision of creating socially intelligent robot assistants that collaborate with humans by doing their chores so humans can have more time for the work they care most about. She earned her Ph.D. from MIT and B.S. in Electrical and Computer Engineering from UT Austin and is a Robotics Professor at UT Austin and the PI of the Socially Intelligent Machines Lab.

“Charismatic Robots”

CRI Seminar - Thursday, Feb 10th @ 11AM PST - Zoom Meeting

Join URL:  https://ucsd.zoom.us/j/99055507432

Speaker: Heather Knight, OSU

This talk will present work from the CHARISMA Robotics lab at Oregon State University, an acronym for Collaborative Humans and Robotics: Interaction, Sociability, Machine learning and Art, which is directed by Prof. Heather Knight. Inspired by methods and practices from entertainment, this talk will review robot communication, behavior systems, and flexible, iterative approaches to autonomous and human-in-the-loop designs. From robot furniture to robot comedy, from nonverbal expressions to the cultural situation of robots in the workplace or on the sidewalks, CHARISMA contributes to the fields of human-robot interaction and social robotics, regularly deploying robots in naturalistic human settings and entertainment contexts via remote interfaces.

Bio:

Dr. Heather Knight runs the CHARISMA Robotics research group at Oregon State University, which applies methods from entertainment to the development of more effective and charismatic robots. Their research interests include minimal social robots, multi-robot/multi-human social interaction, and entertainment robots. Outside of Oregon State, Knight also runs an annual Robot Film Festival. Past honors include robot comedy on TED.com, a robot flower garden installation at the Smithsonian/Cooper-Hewitt Design Museum, and a British Video Music Award for OK GO's "This Too Shall Pass" music video, featuring a two-floor Rube Goldberg Machine. She has been named to Forbes List's 30 under 30 in Science and AdWeek's top 100 creatives. Prior to her position here, she was a postdoc at Stanford University exploring minimal robots and autonomous car interfaces, completed a PhD in Robotics at Carnegie Mellon University exploring expressive motion for low-degree-of-freedom robots, and received an M.S. and B.S. in Electrical Engineering & Computer Science at the Massachusetts Institute of Technology, where she developed a sensate skin for a robot teddy bear at the MIT Media Lab. Additional past work includes robotics and instrumentation at NASA's Jet Propulsion Laboratory, and sensor design at Aldebaran Robotics.

"Safe Learning in Robotics"

Thursday February 3rd @ 11AM PST - Zoom Meeting

Join URL:  https://ucsd.zoom.us/j/99055507432

Speaker:  Angela Schoellig

To date, robots have been primarily deployed in structured and predictable settings, such as factory floors and warehouses. The next generation of robots -- ranging from self-driving and self-flying vehicles to robot assistants -- is expected to operate alongside humans in complex, unknown and changing environments. This challenges current robot algorithms, which are typically based on our prior knowledge about the robot and its environment. While research has shown that robots are able to learn new skills from experience and adapt to unknown situations, these results have been limited to learning single tasks in simulation or structured lab settings. The next challenge is to enable robot learning in real-world application scenarios. This will require versatile, data-efficient and online learning algorithms that guarantee safety. It will also require answering the fundamental question of how to design learning architectures for interactive agents.

In this talk I will do two things: First, I will give you an overview of our recent survey paper on Safe Learning in Robotics, which you can find here: https://arxiv.org/abs/2108.06266. Second, I will highlight our recent progress in combining learning methods with formal results from control theory. By combining models with data, our algorithms achieve adaptation to changing conditions during long-term operation, data-efficient multi-robot, multi-task transfer learning, and safe reinforcement learning. We demonstrate our algorithms in vision-based off-road driving and drone flight experiments, as well as on mobile manipulators.

Bio:

Angela Schoellig is an Associate Professor at the University of Toronto Institute for Aerospace Studies and a Faculty Member of the Vector Institute. She holds a Canada Research Chair (Tier 2) in Machine Learning for Robotics and Control and a Canada CIFAR Chair in Artificial Intelligence. She is a principal investigator of the NSERC Canadian Robotics Network and the University's Robotics Institute. She conducts research at the intersection of robotics, controls, and machine learning. Her goal is to enhance the performance, safety, and autonomy of robots by enabling them to learn from past experiments and from each other. She is a recipient of the Robotics: Science and Systems Early Career Spotlight Award (2019), a Sloan Research Fellowship (2017), and an Ontario Early Researcher Award (2017). She is one of MIT Technology Review's Innovators Under 35 (2017), a Canada Science Leadership Program Fellow (2014), and one of Robohub's “25 women in robotics you need to know about (2013)”. Her team is the four-time winner of the North-American SAE AutoDrive Challenge. Her PhD at ETH Zurich (2013) was awarded the ETH Medal and the Dimitris N. Chorafas Foundation Award. She holds both an M.Sc. in Engineering Cybernetics from the University of Stuttgart (2008) and an M.Sc. in Engineering Science and Mechanics from the Georgia Institute of Technology (2007).

"Toward the Development of Highly Adaptive Legged Robots"

Thursday, 27 January @ 11AM PST

Join Zoom Meeting: https://ucsd.zoom.us/j/99055507432

Speaker: Quan Nguyen

Deploying legged robots in real-world applications will require fast adaptation to unknown terrain and model uncertainty. Model uncertainty could come from unknown robot dynamics, external disturbances, interaction with other humans or robots, or unknown parameters of contact models or terrain properties. In this talk, I will first present our recent works on adaptive control and adaptive safety-critical control for legged locomotion adapting to substantial model uncertainty. In these results, we focus on the application of legged robots walking on rough terrain while carrying a heavy load. I will then talk about our solution for trajectory optimization that allows legged robots to adapt to a wide variety of challenging terrain. This talk will also discuss the combination of control, trajectory optimization and reinforcement learning toward achieving long-term adaptation in both control actions and trajectory planning for legged robots.

Bio:

Quan Nguyen is an Assistant Professor of Aerospace and Mechanical Engineering at the University of Southern California. Prior to joining USC, he was a Postdoctoral Associate in the Biomimetic Robotics Lab at the Massachusetts Institute of Technology (MIT). He received his Ph.D. from Carnegie Mellon University (CMU) in 2017 with the Best Dissertation Award.

His research interests span different control and optimization approaches for highly dynamic robotics, including nonlinear control, trajectory optimization, real-time optimization-based control, and robust and adaptive control. His work on the bipedal robot ATRIAS walking on stepping stones was featured in IEEE Spectrum, TechCrunch, TechXplore and Digital Trends. His work on the MIT Cheetah 3 robot leaping onto a desk was featured widely in many major media channels, including CNN, BBC, NBC, ABC, etc. Nguyen won the Best Presentation of the Session award at the 2016 American Control Conference (ACC) and was a Best Systems Paper finalist at the 2017 Robotics: Science and Systems (RSS) conference.

Lab website: https://sites.usc.edu/quann/

"Deploying autonomous vehicles for micro-mobility on a university campus"

Thursday 13 January @ 11AM PST

Join Zoom Meeting:  https://ucsd.zoom.us/j/99055507432

Speaker: Henrik Christensen

Abstract: Autonomous vehicles are already deployed on the interstates. Providing robust autonomous systems for urban environments is a different challenge, as the road network is more complex, there are many more types of road users (cars, bikes, pedestrians, …) and the potential interactions are more complex. In an urban environment it is also harder to use pre-computed HD maps, as the world is much more dynamic. We have studied the design of micro-mobility solutions for the UCSD campus. In this presentation we will discuss an overall systems design that tries to eliminate HD maps and replace them with coarse topological maps such as OpenStreetMap, fusing vision and lidar for semantic mapping/localization, and detecting and handling other road users. The system has been deployed for a 6-month period to evaluate robustness across weather, season changes, etc. We will present both the underlying methods and experimental insights.
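
As a purely illustrative sketch (not the campus system described in the talk), the snippet below shows what route planning on a coarse topological map looks like: nodes are intersections, edges are road segments with approximate lengths, and local perception handles everything within a segment. The node names and distances are made up.

```python
import networkx as nx

# Hypothetical campus graph in place of an OpenStreetMap extract.
campus = nx.Graph()
campus.add_edge("library", "gym", length=120.0)
campus.add_edge("gym", "cafeteria", length=80.0)
campus.add_edge("library", "lecture_hall", length=200.0)
campus.add_edge("lecture_hall", "cafeteria", length=60.0)

# Global route at the topological level; vision/lidar-based semantic mapping,
# localization, and road-user handling would operate along each segment.
route = nx.shortest_path(campus, source="library", target="cafeteria", weight="length")
print(route)  # ['library', 'gym', 'cafeteria'] -- 200 m vs. 260 m via lecture_hall
```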

Bio:

Henrik I Christensen is the Qualcomm Chancellor’s Chair of Robot Systems and the director of robotics at UC San Diego. Dr. Christensen does research on a systems approach to robotics. Solutions need a solid theoretical basis, effective algorithms, a good implementation, and evaluation in realistic scenarios. He has made contributions to distributed systems, SLAM, and systems engineering. Henrik is also the main editor of the US National Robotics Roadmap. He serves or has served on a significant number of editorial boards (PAMI, IJRR, JFR, RAS, Aut Sys, …).

"Robots with Physical Intelligence"

Thursday, December 2nd @ 11 AM PST - Zoom

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker: Sangbae Kim

http://meche.mit.edu/people/faculty/SANGBAE@MIT.EDU

 

While industrial robots are effective in repetitive, precise kinematic tasks in factories, the design and control of these robots are not suited for the physically interactive tasks that humans perform easily. These tasks require ‘physical intelligence’ through complex dynamic interactions with environments, whereas conventional robots are designed primarily for position control. In order to develop a robot with ‘physical intelligence’, we first need a new type of machine that allows dynamic interactions. This talk will discuss how the new design paradigm allows dynamic interactive tasks. As an embodiment of such a robot design paradigm, the latest version of the MIT Cheetah robots and force-feedback teleoperation arms will be presented. These robots are equipped with proprioceptive actuators, a new design paradigm for dynamic robots. This new class of actuators will play a crucial role in developing ‘physical intelligence’ and future robot applications such as elderly care, home service, delivery, and services in environments unfavorable for humans.

Bio:

Sangbae Kim is the director of the Biomimetic Robotics Laboratory and a professor of Mechanical Engineering at MIT. His research focuses on bio-inspired robot design achieved by extracting principles from animals. Kim’s achievements include creating the world’s first directional adhesive inspired by gecko lizards and a climbing robot named Stickybot that utilizes the directional adhesive to climb smooth surfaces. TIME Magazine named Stickybot one of the best inventions of 2006. One of Kim’s recent achievements is the development of the MIT Cheetah, a robot capable of stable running outdoors up to 13 mph and autonomous jumping over obstacles at the efficiency of animals. Kim is a recipient of best paper awards from the ICRA (2007), King-Sun Fu Memorial TRO (2008) and IEEE/ASME TMECH (2016). Additionally, he received a DARPA YFA (2013), an NSF CAREER award (2014), and a Ruth and Joel Spira Award for Distinguished Teaching (2015).

"Toward Object Manipulation Without Explicit Models"

Thursday 18 November @ 11AM PST (virtual)

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Dieter Fox 

 

The prevalent approach to object manipulation is based on the availability of explicit 3D object models. By estimating the pose of such object models in a scene, a robot can readily reason about how to pick up an object, place it in a stable position, or avoid collisions. Unfortunately, assuming the availability of object models constrains the settings in which a robot can operate, and noise in estimating a model's pose can result in brittle manipulation performance. In this talk, I will discuss our work on learning to manipulate unknown objects directly from visual (depth) data. Without any explicit 3D object models, these approaches are able to segment unknown object instances, pick up objects in cluttered scenes, and re-arrange them into desired configurations. I will also present recent work on combining pre-trained language and vision models to efficiently teach a robot to perform a variety of manipulation tasks. I'll conclude with our initial work toward learning implicit representations for objects.
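
As a rough point of reference only (the segmentation methods in the talk are learned; this is just a classical clustering baseline on synthetic data), the sketch below groups the points of a made-up tabletop point cloud into object instances after removing the support plane, to illustrate the model-free instance segmentation being described.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Hypothetical depth-derived point cloud: a table plane plus two objects (meters).
table   = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), np.zeros(500)])
object1 = rng.normal(loc=[0.3, 0.3, 0.05], scale=0.02, size=(150, 3))
object2 = rng.normal(loc=[0.7, 0.6, 0.05], scale=0.02, size=(150, 3))
cloud = np.vstack([table, object1, object2])

# Remove points near the dominant plane (here simply z ~ 0), then cluster the
# remaining points into instances; label -1 marks noise.
above_table = cloud[cloud[:, 2] > 0.02]
labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(above_table)
print("instances found:", len(set(labels) - {-1}))  # expect 2 for this toy scene
```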

Bio:

Dieter Fox is Senior Director of Robotics Research at NVIDIA and Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany.  His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE, AAAI, and ACM, and recipient of the 2020 Pioneer in Robotics and Automation Award.  Dieter also received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.

"From Bio-inspiration to Robotic Applications"

Thursday 4 November @ 11AM PST (Virtual Seminar)

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Howie Choset

                      Carnegie Mellon University

                      http://biorobotics.org

 

The animal kingdom is full of both human and non-human animals worthy of investigation, emulation and re-creation. As such, my research group has created a comprehensive research program focusing on biologically-inspired robots, and has applied them to search and rescue, minimally invasive surgery, manufacturing, and recycling. These robots inspire great scientific challenges in mechanism design, control, planning and estimation theory. These research topics are important because once the robot is built (design), it must decide where to go (path planning), determine how to get there (control), and use feedback to close the loop (estimation). A common theme to these research foci is devising ways by which we can reduce multi-dimensional problems to low dimensional ones for planning, analysis, and optimization. In this talk, I will discuss our results in geometric mechanics, Bayesian filtering, scalable multi-agent planning, and application and extension of modern machine learning techniques to support these reductions. This talk will also cover how my students and I commercialized these technologies by founding three companies: Medrobotics, Hebi Robotics, and Bito Robotics. If time permits, I will also discuss my educational activities, especially at the undergraduate level, with a course using LEGO robots, and the role of entrepreneurism in University education.

Bio:

Howie Choset is a Professor of Robotics at Carnegie Mellon University where he serves as the co-director of the Biorobotics Lab and as director of the Robotics Major. He received his undergraduate degrees in Computer Science and Business from the University of Pennsylvania in 1990. Choset received his Master’s and PhD from Caltech in 1991 and 1996. Choset's research group reduces complicated high-dimensional problems found in robotics to low-dimensional simpler ones for design, analysis, and planning. Motivated by applications in confined spaces, Choset has created a comprehensive program in modular, high-DOF, and multi-robot systems, which has led to basic research in mechanism design, path planning, motion planning, and estimation. In addition to publications, this work has led Choset, along with his students, to form several companies including Medrobotics, for surgical systems, Hebi Robotics, for modular robots, and Bito Robotics for autonomous guided vehicles. Recently, Choset’s surgical snake robot cleared the FDA and has been in use in the US and Europe since then. Choset also leads multi-PI projects centered on manufacturing: (1) automating the programming of robots for auto-body painting; (2) the development of mobile manipulators for agile and flexible fixture-free manufacturing of large structures in aerospace, and (3) the creation of a data-robot ecosystem for rapid manufacturing in the commercial electronics industry. Choset co-led the formation of the Advanced Robotics for Manufacturing Institute, which is a $250MM national institute advancing both technology development and education for robotics in manufacturing. Finally, Choset is a founding Editor of the journal Science Robotics and is currently serving on the editorial board of IJRR.

"3D Perception for Autonomous Systems"

Thursday 28 October @ 11AM PST

Speaker: Camillo Jose (CJ) Taylor
Raymond S. Markowitz President's Distinguished Professor
Computer and Information Science Dept and GRASP Laboratory
University of Pennsylvania  

 

Building a 3D representation of the environment has been a critical issue for researchers working on mobile robotics. It is typically an essential component of systems that must navigate and act in the world. In this talk I will describe some of the algorithms we have developed to address this problem in a variety of contexts including: building parsimonious models of indoor spaces, using deep learning to construct low dimensional models of scene structure, and our recent work on building robust real-time 3D SLAM systems to make sense of LIDAR data.

Bio: Dr. Taylor received his A.B. degree in Electrical Computer and Systems Engineering from Harvard College in 1988 and his M.S. and Ph.D. degrees from Yale University in 1990 and 1994 respectively. Dr. Taylor was the Jamaica Scholar in 1984, a member of the Harvard chapter of Phi Beta Kappa and held a Harvard College Scholarship from 1986-1988. From 1994 to 1997 Dr. Taylor was a postdoctoral researcher and lecturer with the Department of Electrical Engineering and Computer Science at the University of California, Berkeley. He joined the faculty of the Computer and Information Science Department at the University of Pennsylvania in September 1997. He received an NSF CAREER award in 1998 and the Lindback Minority Junior Faculty Award in 2001. In 2012 he received a best paper award at the IEEE Workshop on the Applications of Computer Vision. Dr. Taylor's research interests lie primarily in the fields of Computer Vision and Robotics and include: reconstruction of 3D models from images, vision-guided robot navigation and scene understanding. Dr. Taylor has served as an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence. He has also served on numerous conference organizing committees; he is a General Chair of the International Conference on Computer Vision (ICCV) 2021 and was a Program Chair of the 2006 and 2017 editions of the IEEE Conference on Computer Vision and Pattern Recognition and of the 2013 edition of 3DV. In 2012 he was awarded the Christian R. and Mary F. Lindback Foundation Award for Distinguished Teaching at the University of Pennsylvania.

Webpage: https://www.cis.upenn.edu/~cjtaylor/

"Reinventing Human-Robot Interaction for Companion Robots"

Thursday 21 October @ 11AM PST (Virtual Seminar)

 Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Paolo Pirjanian, Ph.D., Founder/CEO of Embodied and former CTO of iRobot

 

Previous solutions to HRI have taken a piecemeal approach to building a social interface and have failed. Many solutions merely copy the command-response conversation pattern, e.g., from Alexa, onto a robot which can be awkward and unnatural. Most social robots attempt to add human qualities but fall short in true social interaction: simply adding “eyes” that do not look at you is uncanny. Using a touch screen “face” as a means of input is a step backwards and poking a character’s “face” encourages inappropriate social behavior. Having a faux body but no means to express body language seems lifeless and lacks embodiment. These piecemeal solutions miss the point. Social interaction does not require perfect anthropomorphic form-factor but it does need a minimum set of affordances to have successful and believable agency, something that we can learn from Pixar, Disney and the like.

At Embodied we have been rethinking and reinventing how human-machine interaction is done - where a user can have a fluid natural conversation with a robot; and where the robot can discern who to address and how to proactively engage and use subtle eye gaze, facial expressions, and body language as part of its response.

In this presentation, Paolo Pirjanian will discuss Embodied’s solution and our first product, Moxie, targeting children as a solution to promote social, emotional and cognitive skills.

Bio:

Paolo Pirjanian received his M.Sc. and Ph.D. in CSE from Aalborg University. He was a Post-Doc at USC and then a research scientist at JPL. From JPL he moved to Evolution Robotics, where he was the CTO and later the CEO before it was acquired by iRobot. He served as the CTO of iRobot and is the founder and CEO of Embodied. He is a NASA scientist turned robotics entrepreneur who has helped create technologies for many products ranging from the Sony AIBO to the iRobot Roomba and most recently Moxie.

"Enabling Grounded Language Communication for Human-Robot Teaming "

Thursday 14 October @ 11AM (Virtual)

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Thomas Howard

 

The ability for robots to effectively understand natural language instructions and convey information about their observations and interactions with the physical world is highly dependent on the sophistication and fidelity of the robot’s representations of language, environment, and actions.  As we progress towards more intelligent systems that perform a wider range of tasks in a greater variety of domains, we need models that can adapt their representations of language and environment to achieve the real-time performance necessitated by the cadence of human-robot interaction within the computational resource constraints of the platform.  In this talk I will review my laboratory’s research on algorithms and models for robot planning, mapping, control, and interaction with a specific focus on language-guided adaptive perception and bi-directional communication with deliberative interactive estimation.    

Bio

Thomas Howard is an assistant professor in the Department of Electrical and Computer Engineering at the University of Rochester.  He also holds secondary appointments in the Department of Biomedical Engineering and Department of Computer Science, is an affiliate of the Goergen Institute of Data Science and directs the University of Rochester’s Robotics and Artificial Intelligence Laboratory. Previously he held appointments as a research scientist and a postdoctoral associate at MIT's Computer Science and Artificial Intelligence Laboratory in the Robust Robotics Group, a research technologist at the Jet Propulsion Laboratory in the Robotic Software Systems Group, and a lecturer in mechanical engineering at Caltech. 

Howard earned a PhD in robotics from the Robotics Institute at Carnegie Mellon University in 2009 in addition to BS degrees in electrical and computer engineering and mechanical engineering from the University of Rochester in 2004. His research interests span artificial intelligence, robotics, and human-robot interaction with a research focus on improving the optimality, efficiency, and fidelity of models for decision making in complex and unstructured environments with applications to robot motion planning, natural language understanding, and human-robot teaming.  Howard was a member of the flight software team for the Mars Science Laboratory, the motion planning lead for the JPL/Caltech DARPA Autonomous Robotic Manipulation team, and a member of Tartan Racing, winner of the 2007 DARPA Urban Challenge.  Howard has earned Best Paper Awards at RSS (2016) and IEEE SMC (2017), two NASA Group Achievement Awards (2012, 2014), was a finalist for the ICRA Best Manipulation Paper Award (2012) and was selected for the NASA Early Career Faculty Award (2019).  Howard’s research at the University of Rochester has been supported by National Science Foundation, Army Research Office, Army Research Laboratory, Department of Defense Congressionally Directed Medical Research Program, National Aeronautics and Space Administration, and the New York State Center of Excellence in Data Science. 

Faculty Host: Nikolay Atanasov - ECE

"Beyond the Label: Robots that Reason about Object Semantics"

Thursday 7th October @ 11AM PST

Zoom:  https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Sonia Chernova

 

Reliable operation in everyday human environments – homes, offices, and businesses – remains elusive for today’s robotic systems. A key challenge is diversity, as no two homes or businesses are exactly alike. However, despite the innumerable unique aspects of any home, there are many commonalities as well, particularly about how objects are placed and used. These commonalities can be captured in semantic representations, and then used to improve the autonomy of robotic systems by, for example, enabling robots to infer missing information in human instructions, efficiently search for objects, or manipulate objects more effectively. In this talk, I will discuss recent advances in semantic reasoning, particularly focusing on semantics of everyday objects, household environments, and the development of robotic systems that intelligently interact with their world.

Bio:

Sonia Chernova is an Associate Professor in the College of Computing at Georgia Tech. She directs the Robot Autonomy and Interactive Learning lab, where her research focuses on the development of intelligent and interactive autonomous systems. Chernova’s contributions span robotics and artificial intelligence, including semantic reasoning, adaptive autonomy, human-robot interaction, and explainable AI. She has authored over 100 scientific papers and is the recipient of the NSF CAREER, ONR Young Investigator, and NASA Early Career Faculty awards. She also leads the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING).

"Learning Where to Trust Unreliable Models for Deformable Object Manipulation"

Thursday, Sept. 30 @ 11AM PST 

Zoom: https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Dmitry Berenson

 

The world outside our labs seldom conforms to the assumptions of our models. This is especially true for dynamics models used in control and motion planning for complex high-DOF systems like deformable objects. We must develop better models, but we must also accept that, no matter how powerful our simulators or how big our datasets, our models will sometimes be wrong. This talk will present our recent work on using unreliable dynamics models for the manipulation of deformable objects, such as rope. Given a dynamics model, our methods learn where that model can be trusted given either batch data or online experience. These approaches allow dynamics models to generalize to control and planning tasks in novel scenarios, while requiring much less data than baseline methods. This data-efficiency is a key requirement for scalable and flexible manipulation capabilities.
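
A minimal sketch of the underlying idea, assuming a toy 1-D system and a simple off-the-shelf classifier (this is not the speaker's method): from observed transitions, fit a model-reliability classifier that flags state-action pairs where the nominal dynamics model should not be trusted, so a planner can restrict itself to the trusted region. All models, thresholds, and data below are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def f_model(x, u):
    """Nominal (unreliable) dynamics: pretends the action always moves the state."""
    return x + u

def f_true(x, u):
    """'Real world': the action has no effect once the state exceeds 1.0
    (e.g. a rope pulled taut), so the nominal model is wrong in that regime."""
    return x + u * (x <= 1.0)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(1000, 1))             # states
U = rng.uniform(-0.2, 0.2, size=(1000, 1))            # actions
err = np.abs(f_model(X, U) - f_true(X, U)).ravel()
trusted = (err < 0.01).astype(int)                     # 1 = model is reliable here

features = np.hstack([X, U])
classifier = LogisticRegression().fit(features, trusted)

# A planner could then avoid (state, action) pairs the classifier marks as untrusted.
print(classifier.predict([[0.5, 0.1], [1.5, 0.1]]))    # likely [1 0] given the data above
```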

Bio: Dmitry Berenson is an Associate Professor in Electrical Engineering and Computer Science and the Robotics Institute at the University of Michigan, where he has been since 2016. Before coming to the University of Michigan, he was an Assistant Professor at WPI (2012-2016). He received a BS in Electrical Engineering from Cornell University in 2005 and received his Ph.D. degree from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He was also a post-doc at UC Berkeley (2011-2012). He has received the IEEE RAS Early Career Award and the NSF CAREER award. His current research focuses on robotic manipulation, robot learning, and motion planning.

"CRI Seminars Speaker Fall 2021"

We will have presentations from:  

Sep 23    H.I. Christensen, UCSD    - IROS Papers
Sep 30    Dmitry Berenson, UMICH    - Collaborative Robots
Oct 7     Sonia Chernova, GT        - Human Collaborative Systems
Oct 14    Tom Howard, UR            - Enabling Grounded Language Models
Oct 21    Paolo Pirjanian           - Embodied Robotics
Oct 28    CJ Taylor, UPENN          - Perceptual Robotics
Nov 4     Howie Choset, CMU         - Robot Mechanisms
Nov 18    Dieter Fox, UW            - Robots & ML
Dec 2     Sangbae Kim, MIT          - Robot Mobility

The seminars are every Thursday @ 11am.  

Zoom info:  https://ucsd.zoom.us/j/94406976474 

 

"Observing, Learning, and Executing Fine-Grained Manipulation Activities: A Systems Perspective"

Thursday, May 27 @ 12:00pm PST
 

Zoom: https://ucsd.zoom.us/j/91267376688
Speaker: Gregory D. Hager

In the domain of image and video analysis, much of the deep learning revolution has been focused on narrow, high-level classification tasks that are defined through carefully curated, retrospective data sets. However, most real-world applications – particularly those involving complex, multi-step manipulation activities – occur “in the wild” where there is a combinatorial “long tail” of unique situations that are never seen during training. These systems demand a richer, fine-grained task representation that is informed by the application context, and which supports quantitative analysis and compositional synthesis. As a result, the challenges inherent in both high-accuracy, fine-grained analysis and performance of perception-based activities are manifold, spanning representation, recognition, and task and motion planning.

In this talk, I’ll summarize our work addressing these challenges. I’ll first describe DASZL, our approach to interpretable, attribute-based activity detection. DASZL operates in both pre-trained and zero-shot settings and has been applied to a variety of applications ranging from surveillance to surgery. I’ll then describe work on machine learning approaches for systems that use prediction models to support perception-based planning and execution of manipulation tasks. I’ll also present recent work on “Good Robot,” a method for end-to-end training of a robot manipulation system which leverages architecture search and fine-grained task rewards to achieve state-of-the-art performance in complex, multi-step manipulation tasks. I’ll close with a brief summary of some directions we are exploring, enabled by these technologies.
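For readers unfamiliar with attribute-based zero-shot recognition, the following toy sketch conveys the flavor of the approach (it is not the DASZL implementation; the attributes, class signatures, and scores are invented):

```python
# Sketch of the general idea behind attribute-based zero-shot recognition:
# an unseen activity class is described by a binary attribute signature, and a
# clip is labeled with the class whose signature best matches the detected
# attribute scores. Illustrative only.
import numpy as np

ATTRIBUTES = ["holding_tool", "two_hands", "repetitive_motion", "object_transfer"]

CLASS_SIGNATURES = {                       # hypothetical unseen classes
    "suturing": np.array([1, 1, 1, 0]),
    "handoff":  np.array([0, 1, 0, 1]),
    "idle":     np.array([0, 0, 0, 0]),
}

def classify(attribute_scores):
    """attribute_scores: per-attribute detector outputs in [0, 1]."""
    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-8
        return float(a @ b) / denom
    return max(CLASS_SIGNATURES, key=lambda c: cosine(attribute_scores, CLASS_SIGNATURES[c]))

print(classify(np.array([0.9, 0.8, 0.7, 0.1])))  # -> "suturing"
```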

Bio: Greg Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University and Founding Director of the Malone Center for Engineering in Healthcare. Professor Hager’s research interests include computer vision, vision-based and collaborative robotics, time-series analysis of image data, and applications of image analysis and robotics in medicine and in manufacturing. He is a member of the CISE Advisory Committee, the Board of Directors of the Computing Research Association, and the governing board of the International Foundation of Robotics Research. He previously served as Chair of the Computing Community Consortium. In 2014, he was awarded a Hans Fischer Fellowship at the Institute for Advanced Study of the Technical University of Munich and in 2017 was named a TUM Ambassador. Professor Hager has served on the editorial boards of IEEE TRO, IEEE PAMI, IJCV, and ACM Transactions on Computing for Healthcare. He is a Fellow of the ACM and IEEE for his contributions to vision-based robotics and a Fellow of AAAS, the MICCAI Society, and AIMBE for his contributions to imaging and his work on the analysis of surgical technical skill. Professor Hager is a co-founder of Clear Guide Medical and Ready Robotics. He is currently an Amazon Scholar.

"Semantic Robot Programming... and Maybe Making the World a Better Place"

May 20, 2021 @ 12:00pm PST

Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Odest Chadwicke Jenkins, Ph.D.

The visions of interconnected heterogeneous autonomous robots in widespread use are a coming reality that will reshape our world. Similar to "app stores" for modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated developers and researchers. In order for people to fluently program autonomous robots, a robot must be able to interpret user instructions that accord with that user’s model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that a critical missing component is the grounding of semantic symbols in a manner that addresses both uncertainty in low-level robot perception and intentionality in high-level reasoning. Such a grounding will enable robots to fluidly work with human collaborators to perform tasks that require extended goal-directed autonomy.

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.
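A toy sketch of the declarative, goal-scene flavor of this paradigm (not the SRP system itself; the scenes and relations below are invented) is to express the program as the set of object relations that changed between the demonstrated initial and goal scenes:

```python
# Illustrative sketch: treat a demonstration as a pair of scene descriptions and
# extract the goal as the set of object relations that changed. Hypothetical data.

initial_scene = {
    ("mug", "on"): "table",
    ("plate", "on"): "counter",
    ("spoon", "in"): "drawer",
}

goal_scene = {
    ("mug", "on"): "shelf",        # user moved the mug
    ("plate", "on"): "counter",    # unchanged
    ("spoon", "in"): "dishwasher", # user moved the spoon
}

def extract_program(initial, goal):
    """Return the declarative goal: relations whose target changed in the demo."""
    return {rel: tgt for rel, tgt in goal.items() if initial.get(rel) != tgt}

print(extract_program(initial_scene, goal_scene))
# {('mug', 'on'): 'shelf', ('spoon', 'in'): 'dishwasher'}
```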

Bio: Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR) and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and the Association for the Advancement of Artificial Intelligence, and Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).

"Designing the Future of Human-Robot Interaction"

May 13 @ 12pm (PST) 
 

Location: CSE 1202
Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Dr. Holly Yanco

Abstract: Robots navigating in difficult and dynamic environments often need assistance from human operators or supervisors, either in the form of teleoperation or interventions when the robot's autonomy is not able to handle the current situation. Even in more controlled environments, such as office buildings and manufacturing floors, robots may need help from people. This talk will discuss methods for controlling both individual robots and groups of robots, in applications ranging from assistive technology to telepresence to exoskeletons. A variety of modalities for interacting with robot systems, including multi-touch and virtual reality, will be presented.

Bio: Dr. Holly Yanco is a Distinguished University Professor, Professor of Computer Science, and Director of the New England Robotics Validation and Experimentation (NERVE) Center at the University of Massachusetts Lowell. Her research interests include human-robot interaction, evaluation metrics and methods for robot systems, and the use of robots in K-12 education to broaden participation in computer science. Yanco's research has been funded by NSF, including a CAREER Award, the Advanced Robotics for Manufacturing (ARM) Institute, ARO, DARPA, DOE-EM, ONR, NASA, NIST, Google, Microsoft, and Verizon. Yanco has a PhD and MS in Computer Science from the Massachusetts Institute of Technology and a BA in Computer Science and Philosophy from Wellesley College. She is an AAAI Fellow.

"Autonomous Mobile Robot Challenges and Opportunities in Domain and Mission Complex Environments"

Thursday, 6 May @ 12pm PST (in person CSE 1202)

Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Bruce Morris

Increased autonomy can have many advantages, including increased safety and reliability, improved reaction time and performance, reduced personnel burden with associated cost savings, and the ability to continue operations in communications-degraded or denied environments. Artificial Intelligence for Small Unit Maneuver (AISUM) envisions a way for future expeditionary tactical maneuver elements to team with intelligent adaptive systems, and it opens up analysis of how such teaming will greatly enhance mission precision, speed, and mass in complex, contested, and congested environments. The ultimate aim of this effort is to reduce risk to missions and to our own force, partners, and civilians. AISUM provides a competitive advantage in the changing physics of the competition space through robotic autonomous systems that operate in dense urban clutter (interior, exterior, subterranean) and in dynamic, spectrum-denied areas, with no prior knowledge of the environment, in support of human-machine teams executing tactical maneuver and swarming tactics.

Bio: Bruce Morris currently serves as the Deputy Director of Future Concepts and Innovation at Naval Special Warfare Command. In this role, he is responsible for leading the SEAL Teams in their strategic and practical application of digital modernization to enable Artificial Intelligence and Autonomous Mobile Robotics in Human-Machine Teaming for asymmetric competitive advantage. The Future Concepts and Innovation Team was founded in 2016, under Dr. Morris’ strategic guidance, to explore and exploit innovation methodology, venture capital, and emerging technologies for U.S. Special Operations and the greater National Security ecosystem.

A native of San Diego and a third-generation Naval Officer, Bruce has dedicated his life to the service of our great nation and to the community of San Diego. Bruce received his commission from the United States Naval Academy, graduating in 1988 with a degree in Mathematics. Immediately following graduation, he reported to his hometown of Coronado, CA for Basic Underwater Demolition/SEAL Training with class 158. Bruce’s service as a Navy SEAL Officer, Information Warfare Officer, and government civilian spans over 30 years and represents a cross-section of leadership roles and the consistent innovative application of science and technology on the battlefield and in garrison. Additionally, he holds an MS in Meteorology and Physical Oceanography and a PhD in Physical Oceanography from the Naval Postgraduate School with a focus on Numerical Modelling and Ocean Dynamics.

Bruce is the recipient of the CAPT Harry T. Jenkins Memorial Award for Community Service. His in-service professional awards include the Bronze Star Medal with Combat Distinguishing Device, the Meritorious Service Medal, the Combat Action Ribbon, and other personal and unit awards, including the Navy Meritorious Civilian Service Award.

"ICRA 2021 Overview"

Thursday 29 April @ 12pm PST

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Henrik Christensen

Looking Farther in Parametric Scene Parsing with Ground and Aerial Imagery

Raghava Modhugu, Harish Rithish Sethuram, Manmohan Chandraker, C.V. Jawahar

http://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2021/Scene_attributes___ICRA_2021.pdf

 

Auto-calibration Method Using Stop Signs for Urban Autonomous Driving Applications

Yunhai Han, Yuhan Liu, David Paz, Henrik Christensen
https://arxiv.org/abs/2010.07441

 

Social Navigation for Mobile Robots in the Emergency Department

Angelique Taylor, Sachiko Matsumoto, Wesley Xiao, and Laurel D. Riek
http://cseweb.ucsd.edu/~lriek/papers/taylor-icra-2021.pdf

 

Temporal Anticipation and Adaptation Methods for Fluent Human-Robot Teaming

Tariq Iqbal and Laurel D. Riek
http://cseweb.ucsd.edu/~lriek/papers/iqbal-riek-icra-2021.pdf

Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning
Q. Feng, N. Atanasov
https://arxiv.org/abs/2101.01844

Coding for Distributed Multi-Agent Reinforcement Learning
B. Wang, J. Xie, N. Atanasov
https://arxiv.org/abs/2101.02308

Non-Monotone Energy-Aware Information Gathering for Heterogeneous Robot Teams
X. Cai, B. Schlotfeldt, K. Khosoussi, N. Atanasov, G. J. Pappas, J. How
https://arxiv.org/abs/2101.11093

Active Bayesian Multi-class Mapping from Range and Semantic Segmentation Observations
A. Asgharivaskasi, N. Atanasov
https://arxiv.org/abs/2101.01831

Learning Barrier Functions with Memory for Robust Safe Navigation
K. Long, C. Qian, J. Cortes, N. Atanasov
https://arxiv.org/abs/2011.01899

Generalization in reinforcement learning by soft data augmentation
Nicklas Hansen, Xiaolong Wang
https://nicklashansen.github.io/SODA/

Bimanual Regrasping for Suture Needles using Reinforcement Learning for Rapid Motion Planning
Z.Y. Chiu, F. Richter, E.K. Funk, R.K. Orosco, M.C. Yip
https://arxiv.org/pdf/2011.04813.pdf

Real-to-Sim Registration of Deformable Soft-Tissue with Position-Based Dynamics for Surgical Robot Autonomy
F. Liu, Z. Li, Y. Han, J. Lu, F. Richter, M.C. Yip
https://arxiv.org/abs/2011.00800

Model-Predictive Control of Blood Suction for Surgical Hemostasis using Differentiable Fluid Simulations
J. Huang*, F. Liu*, F. Richter, M.C. Yip
https://arxiv.org/abs/2102.01436

SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction
J. Lu, A. Jayakumari, F. Richter, Y. Li, M.C. Yip
https://arxiv.org/pdf/2003.03472.pdf

Data-driven Actuator Selection for Artificial Muscle-Powered Robots
T. Henderson, Y. Zhi, A. Liu, M.C. Yip
https://arxiv.org/abs/2104.07168

Optimal Multi-Manipulator Arm Placement for Maximal Dexterity during Robotics Surgery
J. Di, M. Xu, N. Das, M.C. Yip
https://arxiv.org/abs/2104.06348

MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning under Kinodynamic Constraints
L. Li, Y.L. Miao, A.H. Qureshi, M.C. Yip
https://arxiv.org/pdf/2101.06798.pdf

Autonomous Robotic Suction to Clear the Surgical Field for Hemostasis using Image-based Blood Flow Detection
F. Richter, S. Shen, F. Liu, J. Huang, E.K. Funk, R.K. Orosco, M.C. Yip
https://arxiv.org/abs/2010.08441

Scalable Learning of Safety Guarantees for Autonomous Systems using Hamilton-Jacobi Reachability
Sylvia Herbert, Jason J. Choi, Suvansh Qazi, Marsalis Gibson, Koushil Sreenath, Claire J. Tomlin
https://arxiv.org/abs/2101.05916

Planning under non-rational perception of uncertain spatial costs
Aamodh Suresh and Sonia Martinez
https://arxiv.org/pdf/1904.02851.pdf

"Inductive Biases for Robot Learning"

Thursday 22 April @ 12pm PST 

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Michael Lutter

Abstract: The recent advances in robot learning have been largely fueled by model-free deep reinforcement learning algorithms. These black-box methods utilize large datasets and deep networks to discover good behaviors. The existing knowledge of robotics and control is ignored and only the information contained within the data is leveraged. In this talk we want to take a different approach and evaluate the combination of knowledge and data-driven learning. We show that this combination enables sample-efficient learning for physical robots and that generic knowledge from physics and control can be incorporated in deep network representations. The use of inductive biases for robot learning yields robots that learn dynamic tasks within minutes and robust control policies for under-actuated systems.
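One simple instance of combining prior knowledge with data-driven learning (illustrative only; not the specific inductive biases discussed in the talk) is to keep an analytical physics model and learn only a residual correction from data:

```python
# Sketch: keep an analytical physics prior and fit only a residual correction.
# The pendulum model, data, and features are invented for illustration.
import numpy as np

def physics_prior(theta, theta_dot):
    """Known (approximate) pendulum model: theta_ddot = -(g/l) * sin(theta)."""
    g, l = 9.81, 1.0
    return -(g / l) * np.sin(theta)

# Hypothetical measured data from the "real" system (prior + unmodeled damping).
rng = np.random.default_rng(1)
theta      = rng.uniform(-np.pi, np.pi, 200)
theta_dot  = rng.uniform(-2.0, 2.0, 200)
theta_ddot = physics_prior(theta, theta_dot) - 0.3 * theta_dot + 0.01 * rng.standard_normal(200)

# Fit a linear residual model on top of the prior (features: [theta, theta_dot, 1]).
features = np.stack([theta, theta_dot, np.ones_like(theta)], axis=1)
residual = theta_ddot - physics_prior(theta, theta_dot)
coeffs, *_ = np.linalg.lstsq(features, residual, rcond=None)

def learned_dynamics(th, th_dot):
    """Physics prior plus learned residual correction."""
    return physics_prior(th, th_dot) + np.array([th, th_dot, 1.0]) @ coeffs

print(learned_dynamics(0.5, 1.0))   # close to physics_prior(0.5, 1.0) - 0.3
```

Because the prior already captures most of the dynamics, only a small amount of data is needed to fit the residual, which is the sample-efficiency argument the abstract makes.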

Bio: Michael Lutter joined the Institute for Intelligent Autonomous Systems (IAS) at TU Darmstadt in July 2017. Prior to this, Michael held a researcher position at the Technical University of Munich (TUM) working on bio-inspired learning for robotics. During this time he worked on the Human Brain Project, a European H2020 FET flagship project. In addition to his studies, Michael has worked for ThyssenKrupp, Siemens, and General Electric, and has received multiple scholarships for academic excellence and his research.

"Patient-Specific Continuum Robotic Systems for Surgical Interventions"

Thursday 15 April @ 12pm PST (also CSE290)
Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Jaydev P. Desai
Georgia Institute of Technology

Abstract:
Over the past few decades, robotic systems for surgical interventions have undergone a tremendous transformation. The goal of a surgical intervention is to perform it as minimally invasively as possible, since that significantly reduces post-operative morbidity, shortens recovery time, and leads to lower healthcare costs. However, minimally invasive surgical interventions for a range of procedures will require a significant change in the healthcare paradigm for both diagnostic and therapeutic interventions. Advances in surgical interventions will benefit from “patient-specific robotic tools” to deliver optimal diagnosis and therapy. Hence, this talk will focus on the development of continuum, flexible, and patient-specific robotic systems for surgical interventions. Since these robotic systems could operate in an imaging environment, we will also address challenges in image-guided interventions. This talk will present examples from neurosurgery and endovascular interventions to highlight the applicability of patient-specific robotic systems for surgery.

Biography:
Dr. Jaydev P. Desai is currently a Professor at Georgia Tech in the Wallace H. Coulter Department of Biomedical Engineering. He is the founding Director of the Georgia Center for Medical Robotics (GCMR) and an Associate Director of the Institute for Robotics and Intelligent Machines (IRIM). He completed his undergraduate studies at the Indian Institute of Technology, Bombay, India, in 1993. He received his M.A. in Mathematics in 1997 and his M.S. and Ph.D. in Mechanical Engineering and Applied Mechanics in 1995 and 1998, respectively, all from the University of Pennsylvania. He was also a Post-Doctoral Fellow in the Division of Engineering and Applied Sciences at Harvard University. He is a recipient of several NIH R01 grants and the NSF CAREER award, and was the lead inventor on the “Outstanding Invention in the Physical Science Category” at the University of Maryland, College Park, where he was formerly employed. He is also the recipient of the Ralph R. Teetor Educational Award and the IEEE Robotics and Automation Society Distinguished Service Award. He has been an invited speaker at the National Academy of Sciences “Distinctive Voices” seminar series and was invited to attend the National Academy of Engineering’s U.S. Frontiers of Engineering Symposium. He has over 190 publications, is the founding Editor-in-Chief of the Journal of Medical Robotics Research, and is Editor-in-Chief of the four-volume Encyclopedia of Medical Robotics. His research interests are primarily in the areas of image-guided surgical robotics, pediatric robotics, endovascular robotics, and rehabilitation and assistive robotics. He is a Fellow of IEEE, ASME, and AIMBE.

Director – Georgia Center for Medical Robotics (GCMR)
Associate Director - Medical Robotics and Human Augmentation, Institute for Robotics and Intelligent Machines (IRIM)
Wallace H. Coulter Department of Biomedical Engineering
Georgia Institute of Technology

"Visual Representations for Navigation and Object Detection"

Zoom Link: https://ucsd.zoom.us/j/91267376688
In-Person: Room 1202, CSE Building

Speaker: Jana Kosecka
George Mason University
cs.gmu.edu/~kosecka

 

Abstract: Advancements in reliable navigation and mapping rest to a large extent on robust, efficient, and scalable understanding of the surrounding environment. The successes of recent years have been propelled by the use of machine learning techniques for capturing the geometry and semantics of the environment from video and range sensors. I will discuss approaches to object detection, pose recovery, 3D reconstruction, and detailed semantic parsing using deep convolutional neural networks (CNNs).
While data-driven deep learning approaches fueled rapid progress in object category recognition by exploiting large amounts of labelled data, extending this learning paradigm to previously unseen objects comes with challenges. I will discuss the role of active self-supervision provided by ego-motion for learning object detectors from unlabelled data. These powerful spatial and semantic representations can then be jointly optimized with policies for elementary navigation tasks. The presented explorations open interesting avenues for control of embodied physical agents and general strategies for design and development of general purpose autonomous systems.
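A minimal sketch of ego-motion self-supervision (not the speaker's implementation; the intrinsics, relative pose, and helper functions are invented) is to propagate a detection from one frame to the next using depth and the known camera motion, and to use the projected location as a pseudo-label for training on unlabelled video:

```python
# Sketch: generate a pseudo-label in the next frame by warping a detection
# through depth and known ego-motion. All values are illustrative.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])          # pinhole intrinsics (illustrative)

def backproject(u, v, depth):
    """Pixel + depth -> 3D point in the first camera frame."""
    return depth * np.linalg.inv(K) @ np.array([u, v, 1.0])

def project(point_cam2):
    """3D point in the second camera frame -> pixel."""
    uvw = K @ point_cam2
    return uvw[:2] / uvw[2]

# Rigid transform taking points from the first camera frame to the second,
# e.g. obtained from odometry (illustrative values).
R = np.eye(3)
t = np.array([-0.1, 0.0, 0.0])

def propagate_detection(u, v, depth):
    """Generate a pseudo-label pixel location for the same object in the next frame."""
    p_cam1 = backproject(u, v, depth)
    p_cam2 = R @ p_cam1 + t
    return project(p_cam2)

print(propagate_detection(350.0, 260.0, 2.0))
```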

Bio: Jana Kosecka is a Professor in the Department of Computer Science, George Mason University. She obtained her Ph.D. in Computer Science from the University of Pennsylvania. Following her PhD, she was a postdoctoral fellow in the EECS Department at the University of California, Berkeley. She is a recipient of the David Marr Prize and the National Science Foundation CAREER Award. Jana chairs the IEEE Technical Committee on Robot Perception, is an Associate Editor of IEEE Robotics and Automation Letters and the International Journal of Computer Vision, and is a former editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has held visiting positions at Stanford University, Google, and Nokia Research. She is a co-author of the monograph Invitation to 3D Vision: From Images to Geometric Models. Her general research interests are in computer vision and robotics. In particular, she is interested in 'seeing' systems engaged in autonomous tasks, the acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.

"Design of Autonomous Vehicles @ UCSD"

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Henrik I Christensen

 

Over the last couple of years, the Autonomous Vehicle Laboratory has designed modules for autonomous micro-mobility vehicles for tasks such as campus mail delivery. The design includes new sensor setups, methods for local-scale mapping and localization, semantic modeling of the world to allow for contextual navigation, detection and tracking of other road users, and the use of simulation for verification of design decisions. The system has been tested over a six-month period. In this presentation we review our motivation, approach, methods, and results thus far. The methods developed have been tested through extensive evaluation and published at ICRA, IROS, ISER, ...

"Safe Real-World Autonomy in Uncertain and Unstructured Environments"

Zoom URL: https://ucsd.zoom.us/j/97197176606

Sylvia Herbert - UCSD

 

In this talk I will present my current and future work towards enabling safe real-world autonomy. My core focus is to enable efficient and safe decision-making in complex autonomous systems, while reasoning about uncertainty in real-world environments, including those involving human interactions. These methods draw from control theory, cognitive science, and reinforcement learning, and are backed by both rigorous theory and physical testing on robotic platforms.

First I will discuss safety for complex systems in simple environments. Traditional methods for generating safety analyses and safe controllers struggle to handle realistic complex models of autonomous systems, and therefore are stuck with simplistic models that are less accurate. I have developed scalable techniques for theoretically sound safety guarantees that can reduce computation by orders of magnitude for high-dimensional systems, resulting in better safety analyses and paving the way for safety in real-world autonomy.
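For reference, one standard way to formalize such a safety analysis is the Hamilton–Jacobi reachability value function, stated below under one common sign convention (this is textbook background, not a result from the talk):

```latex
% System \dot{x} = f(x, u, d) with control u and disturbance d; failure set
% F = \{x : l(x) \le 0\}. For t \le 0 the safety value function V solves the
% variational inequality
\[
\min\Big\{\, l(x) - V(x,t),\;
\frac{\partial V}{\partial t}(x,t) + \max_{u}\min_{d}\, \nabla_x V(x,t)\cdot f(x,u,d) \,\Big\} = 0,
\qquad V(x,0) = l(x).
\]
% The safe set at horizon |t| is \{x : V(x,t) > 0\}, and a least-restrictive
% safety controller is u^*(x) = \arg\max_{u}\min_{d} \nabla_x V(x,t)\cdot f(x,u,d).
```

Solving this equation on a grid scales exponentially with the state dimension, which is exactly the bottleneck the scalable techniques mentioned above aim to relieve.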

Next I will add in complex environments. Safety analyses depend on pre-defined assumptions that will often be wrong in practice, as real-world systems will inevitably encounter incomplete knowledge of the environment and other agents. Reasoning efficiently and safely in unstructured environments is an area where humans excel compared to current autonomous systems. Inspired by this, I have used models of human decision-making from cognitive science to develop algorithms that allow autonomous systems to navigate quickly and safely, adapt to new information, and reason over the uncertainty inherent in predicting humans and other agents. Combining these techniques brings us closer to the goal of safe real-world autonomy.

Bio:

Sylvia Herbert is an Assistant Professor in Mechanical and Aerospace Engineering at the University of California San Diego. Prior to joining UCSD, she received her PhD in Electrical Engineering from UC Berkeley, where she studied with Professor Claire Tomlin on safe and efficient control of autonomous systems. Before that she earned her BS/MS at Drexel University in Mechanical Engineering. She is the recipient of the UC Berkeley Chancellor’s Fellowship, NSF GRFP, UC Berkeley Outstanding Graduate Student Instructor Award, and the Berkeley EECS Demetri Angelakos Memorial Achievement Award for Altruism.

"Incorporating Structure in Deep Dynamics Model for Improved Generalization"

Zoom - https://ucsd.zoom.us/j/97197176606

Rose Yu - UCSD

 

Abstract: Recent work has shown that deep learning can significantly improve the prediction of dynamical systems. However, an inability to generalize under distributional shift limits its applicability to the real world. In this talk, I will demonstrate how to incorporate relational and symmetry structure into deep learning models in a principled way to improve generalization. I will showcase applications to robotic manipulation and vehicle trajectory prediction tasks.
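As a minimal illustration of what symmetry structure buys (not the models from the talk; the toy predictor below is invented), rotation equivariance means that rotating the input trajectory rotates the prediction by the same amount:

```python
# Sketch of rotation equivariance for a trajectory predictor: f(R x) = R f(x).
# The constant-velocity predictor is a toy stand-in for a learned model.
import numpy as np

def rotation(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def predictor(history):
    """Toy equivariant predictor: extrapolate the last displacement."""
    velocity = history[-1] - history[-2]
    return history[-1] + velocity

def check_equivariance(history, angle=0.7):
    R = rotation(angle)
    lhs = predictor(history @ R.T)     # predict on rotated inputs
    rhs = predictor(history) @ R.T     # rotate the prediction
    return np.allclose(lhs, rhs)

history = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
print(check_equivariance(history))     # True for this equivariant toy model
```

Building this property into the architecture, rather than hoping it emerges from data, is one way structure improves generalization under distributional shift.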

Bio: Rose Yu is an Assistant Professor at UCSD in CSE and a primary faculty member in the AI Group. She is also affiliated with HDSI, CRI, and MICS. Her research is in machine learning with an emphasis on large-scale spatiotemporal data. Her work has been applied across a variety of use cases such as dynamical systems, healthcare, and the physical sciences. Dr. Yu received her PhD from USC and was a postdoc at Caltech. She has received numerous awards.

Reference:
[1] Deep Imitation Learning for Bimanual Robotic Manipulation
Fan Xie, Alex Chowdhury, Clara De Paolis, Linfeng Zhao, Lawson Wong, Rose Yu
Advances in Neural Information Processing Systems (NeurIPS), 2020
[2] Trajectory Prediction using Equivariant Continuous Convolution
Robin Walters, Jinxi (Leo) Li, Rose Yu
International Conference on Learning Representations (ICLR), 2021

"Abstractions in Robot Planning"

https://ucsd.zoom.us/j/97197176606

Neil T. Dantam, Colorado School of Mines

 

Abstract: Complex robot tasks require a combination of abstractions and algorithms: geometric models for motion planning, probabilistic models for perception, discrete models for high-level reasoning. Each abstraction imposes certain requirements, which may not always hold. Robust planning systems must therefore resolve errors in abstraction. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In recent work, we present an initial approach to relax the completeness assumptions in motion planning.
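The following skeleton illustrates the common task-and-motion planning loop the abstract alludes to (structure only; the planner, actions, and feasibility checks are invented, not the speaker's system):

```python
# Skeleton of a task-and-motion planning loop: a discrete task planner proposes
# action sequences, a motion planner tries to realize them, and geometric
# failures are fed back as constraints for replanning. Illustrative only.

def task_planner(goal, constraints):
    """Hypothetical discrete planner: return a candidate action sequence or None."""
    candidates = [["pick(cup)", "place(cup, shelf)"],
                  ["pick(cup)", "place(cup, table)"]]
    for plan in candidates:
        if not any(step in constraints for step in plan):
            return plan
    return None

def motion_planner(step):
    """Hypothetical geometric check: pretend the shelf placement is unreachable."""
    return step != "place(cup, shelf)"

def solve(goal):
    constraints = set()
    while True:
        candidate = task_planner(goal, constraints)
        if candidate is None:
            return None                      # no plan consistent with the constraints
        for step in candidate:
            if not motion_planner(step):
                constraints.add(step)        # record the failed abstraction
                break
        else:
            return candidate                 # every step has a feasible motion

print(solve("cup_stored"))                   # falls back to the table placement
```

The constraint feedback from the motion level to the task level is the "resolving errors in abstraction" the abstract refers to.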

Bio: Neil T. Dantam is an Assistant Professor of Computer Science at the Colorado School of Mines. His research focuses on robot planning and manipulation, covering task and motion planning, quaternion kinematics, discrete policies, and real-time software design.

Previously, Neil was a Postdoctoral Research Associate in Computer Science at Rice University working with Prof. Lydia Kavraki and Prof. Swarat Chaudhuri. Neil received a Ph.D. in Robotics from Georgia Tech, advised by Prof. Mike Stilman, and B.S. degrees in Computer Science and Mechanical Engineering from Purdue University. He has worked at iRobot Research, MIT Lincoln Laboratory, and Raytheon. Neil received the Georgia Tech President's Fellowship, the Georgia Tech/SAIC paper award, an American Control Conference '12 presentation award, and was a Best Paper and Mike Stilman Award finalist at HUMANOIDS '14.

"Probabilistic Robotics and Autonomous Driving"

 https://ucsd.zoom.us/j/97197176606

Wolfram Burgard, Toyota Research Institute

 

Abstract: For autonomous robots and automated driving, the capability to robustly perceive their environments and execute their actions is the ultimate goal. The key challenge is that no sensors and actuators are perfect, which means that robots and cars need the ability to properly deal with the resulting uncertainty. In this presentation, I will introduce the probabilistic approach to robotics, which provides a rigorous statistical methodology to solve the state estimation problem. I will furthermore discuss how this approach can be extended using state-of-the-art technology from machine learning to bring us closer to the development of truly robust systems able to serve us in our every-day lives. In this context, I will in particular focus on the data advantage that the Toyota Research Institute is planning to leverage in combination with self-/semi-supervised methods for machine learning to speed up the process of developing self-driving cars.
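As a minimal illustration of the probabilistic approach, here is a textbook discrete Bayes filter for 1-D localization (the map, motion, and sensor models are toy assumptions, not from the presentation):

```python
# Discrete Bayes filter for 1-D localization: the belief over cells is updated
# with a noisy motion model and a noisy door/wall sensor. Toy values throughout.
import numpy as np

world = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])   # 1 = door, 0 = wall
belief = np.full(len(world), 1.0 / len(world))     # uniform prior over cells

def predict(belief, move=1, p_correct=0.8):
    """Motion update: shift the belief, keeping some mass in place (noisy motion)."""
    shifted = np.roll(belief, move)
    return p_correct * shifted + (1.0 - p_correct) * belief

def update(belief, measurement, p_hit=0.9, p_miss=0.2):
    """Measurement update: reweight by the likelihood of the observation."""
    likelihood = np.where(world == measurement, p_hit, p_miss)
    posterior = likelihood * belief
    return posterior / posterior.sum()

belief = update(belief, measurement=1)   # robot senses a door
belief = predict(belief, move=1)         # robot moves one cell to the right
belief = update(belief, measurement=0)   # robot senses a wall
print(np.round(belief, 3))
```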

Bio: Wolfram Burgard received the Ph.D. degree in computer science from the University of Bonn, Bonn, Germany, in 1991. He is currently VP of Autonomous Driving at the Toyota Research Institute and a Professor of computer science at the University of Freiburg, Freiburg, Germany, where he heads the Laboratory for Autonomous Intelligent Systems. In the past, he developed several innovative probabilistic techniques for robot navigation and control, which cover different aspects such as localization, map building, path planning, and exploration. His research interests include artificial intelligence and mobile robots. Dr. Burgard has received several Best Paper Awards from outstanding national and international conferences. In 2009, he was the recipient of the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award.

"Learning Adaptive Models for Human-Robot Teaming" 

Atkinson Hall 4004/4006

Thomas Howard - University of Rochester

"Key Challenges in Agricultural Robotics with Examples of Ground Vehicle Localization in Orchards and Task-Specific Manipulator Design for Fruit Harvesting"

EBU 1 - Qualcomm Conference Room 

Amir Degani - Technion (Israel Institute of Technology) 

 

Dr. Amir Degani is an Associate Professor at the Technion - Israel Institute of Technology and the Director of the Civil, Environmental, and Agricultural Robotics (CEAR) Laboratory, which researches robotic legged locomotion and autonomous systems in civil and agricultural applications. His research program includes mechanism analysis, synthesis, control, motion planning, and design, with an emphasis on minimalistic concepts and the study of nonlinear hybrid dynamical systems.

His talk will present the need for robotics in agriculture and focus on examples of solutions for two different problems. The first is the localization of an autonomous ground vehicle in a homogenous orchard environment. The typical localization approaches are not adjusted to the characteristics of the orchard environment, especially the homogeneous scenery. To alleviate these difficulties, Dr Degani and his colleagues use top-view images of the orchard acquired in real-time. The top-view observation of the orchard provides a unique signature of every tree formed by the shape of its canopy. This practically changes the homogeneity premise in orchards and paves the way for addressing the “kidnapped robot problem”.

The second part of the talk will focus on efforts to define and perform task-based optimization for an apple-harvesting robot. Since there is a large variation between trees, instead of performing this laborious optimization on many trees, Dr Degani and his colleagues look for a “lower dimensional” characterization of the trees. Moreover, the shape of the tree (i.e., the environment) has a major influence on the robot’s simplicity. Therefore, Dr Degani and his colleagues strive to find the best training system for a tree to help simplify the robot’s design.

"The Business of Robotics: An introduction to the commercial robotics landscape, and considerations for identifying valuable robot opportunities."

Center for Memory Recording Research (CMRR)

Phil Duffy - Brain Corp

Phil Duffy is the Vice President of Innovations at Brain Corp. As VP of Innovations, he leads product commercialization activities at Brain Corp to discover novel product and market opportunities for autonomous mobile robotics. His team is responsible for defining product strategy for Brain Corp's AI technology, BrainOS. A serial entrepreneur and product strategist, Phil has a proven track record of growing technology start-ups and of commercializing and launching innovative robotic products in the B2B and B2C markets. Phil joined Brain Corp in 2014 and brings with him 20+ years of leadership experience in product management, marketing, and China manufacturing. This talk will provide an overview of the commercial robotics landscape, identify valuable robot opportunities, and focus on important elements to consider when developing and marketing robotics technology.

"Efficient memory-usage techniques in deep neural networks via a graph-based approach"

Qualcomm Conference Room (EBU-1)

Salimeh Yasaei Sekeh - University of Maine 

Dr. Salimeh Yasaei Sekeh is an Assistant Professor of Computer Science in the School of Computing and Information Sciences at the University of Maine. Her research focuses on designing and analyzing machine learning algorithms, deep learning techniques, applications of machine learning approaches to real-time problems, data mining, pattern recognition, and network structure learning with applications in biology. This talk introduces two new and efficient deep memory-usage techniques based on a geometric dependency criterion. The first technique, Online Streaming Deep Feature Selection, is based on a novel supervised streaming setting and measures deep feature relevance while maintaining a minimal deep feature subset with relatively high classification performance and a smaller memory requirement. The second technique, Geometric Dependency-based Neuron Trimming, is a data-driven pruning method that evaluates the relationship between nodes in consecutive layers. In this approach, a new dependency-based pruning score removes the least important neurons, and the network is then fine-tuned to retain its predictive power. Both methods are evaluated on several data sets with multiple CNN models and achieve significant memory compression compared to the baselines.
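A simplified sketch of score-based neuron trimming is shown below (a generic magnitude proxy is used here in place of the geometric dependency criterion from the talk, and the fine-tuning step is omitted):

```python
# Sketch of score-based neuron trimming: score each output neuron of a layer,
# keep only the highest-scoring ones, then (in practice) fine-tune the network.
# The column-norm score is a simple proxy, not the talk's dependency criterion.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))        # weights of one layer: 128 output neurons

def neuron_scores(weights):
    """Proxy importance score per output neuron (column L2 norm)."""
    return np.linalg.norm(weights, axis=0)

def trim(weights, keep_ratio=0.5):
    """Zero out the lowest-scoring neurons and return the pruned weights and mask."""
    scores = neuron_scores(weights)
    k = int(keep_ratio * weights.shape[1])
    keep = np.argsort(scores)[-k:]        # indices of the most important neurons
    mask = np.zeros(weights.shape[1], dtype=bool)
    mask[keep] = True
    return weights * mask, mask

pruned_W, mask = trim(W, keep_ratio=0.5)
print(mask.sum(), "of", W.shape[1], "neurons kept")
```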

"Human-Machine Teaming at the Robotics Research Center at West Point" 

Qualcomm Conference Room (EBU-1) 

Misha Novitzky - West Point 

Dr. Misha Novitzky is an Assistant Professor in the Robotics Research Center at the United States Military Academy at West Point. His work focuses on human-machine teaming for cooperative tasks in stressful and unconstrained environments. This talk will provide a brief overview of the various projects being conducted by the Robotics Research Center at the United States Military Academy in West Point, New York. In particular, the talk will focus on human-machine teaming. Most human-robot interaction or teaming research is performed in structured and sterile environments. Our goal is to take human-machine teaming outside into unstructured and stressful environments. As part of this effort, we will describe Project Aquaticus, in which humans and robots were embedded in the marine environment and played games of capture the flag against similarly situated teams, and present results of our pilot studies. While Project Aquaticus was previously conducted at the Massachusetts Institute of Technology, we will describe why the Robotics Research Center at West Point is an exceptional location for future human-machine teaming research.

"Introducing Qualcomm Snapdragon Ride"

Center for Memory and Recording Research (CMRR)

Ahmed Sadek - Qualcomm 

The Criticality of Systems Engineering to Autonomous Air Vehicle Development

Ariele Sparks - Northrop Grumman

Systems Engineering the World's Most Energetic Laser

Robert Plummer - LLNL

Applying a Decision Theoretic Framework for Evaluating System Trade-Offs

Nirmal Velayudhan - ViaSat

Scaling the Third Dimension in Silicon Integration

Srinivas Chennupaty - Intel

The Cost of Taking Shortcuts

David Harris - Cubic Transportation Systems

An Integrated Medium Earth Orbit - Low Earth Orbit Navigation, Communication and Authentication System of Systems

David Whelan - UC San Diego

Handling Scale (System and Developer) and Reliability in Large and Critical Systems

Sagnik Nandy - Google

People-First Systems Engineering: Challenges and Opportunities in Smart Cities

Jeff Lorbeck - Qualcomm