CIS 700-002: Topics in Safe Autonomy, Spring 2019




Introduction

This course explores selected topics in safe autonomy. We are currently witnessing a technological paradigm shift in which autonomous systems are increasingly deployed in society. Providing guarantees that these systems are safe is challenging, as evidenced by recent fatal accidents involving autonomous vehicles. The course studies approaches proposed to assure the safety of autonomous systems, as well as new autonomy algorithms that are more amenable to assurance. Topics to be covered include anomaly detection, verification of neural networks and closed-loop systems, human-in-the-loop systems, run-time verification of autonomous systems, high-assurance reasoning, and confidence/trust evaluation.

Part 1: Anomaly Detection

In this lecture series, we will discuss anomaly detection and its growing importance to the safety of autonomous systems. We will begin with an overview of the core components of basic anomalies, along with a brief introduction to the detection systems used in classical settings. Next, we will shift our attention to time-series (or range-based) anomalies, the fundamental type of anomaly found in most technologically sophisticated systems today. We will show how their emergence has introduced notable challenges for the reliability of anomaly detection systems and the metrics we use to evaluate them. We will conclude with a broad and deep analysis of state-of-the-art anomaly detection research and review several important open research problems that we hope to solve in the next decade.
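To make the notion of a time-series (range-based) anomaly concrete, below is a minimal sketch of a rolling z-score detector in Python. It is not taken from the lectures; the function name, window size, and threshold are illustrative assumptions, and real detectors are considerably more sophisticated.

    import numpy as np

    def rolling_zscore_anomalies(series, window=50, threshold=3.0):
        # Flag points that deviate from the mean of the trailing
        # window by more than `threshold` standard deviations.
        series = np.asarray(series, dtype=float)
        flags = np.zeros(len(series), dtype=bool)
        for i in range(window, len(series)):
            hist = series[i - window:i]          # trailing context only
            mu, sigma = hist.mean(), hist.std()
            if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
                flags[i] = True
        return flags

    # A sine wave with a spike injected over indices 600-609 (a range anomaly).
    t = np.linspace(0, 20, 1000)
    signal = np.sin(t)
    signal[600:610] += 4.0
    print(np.flatnonzero(rolling_zscore_anomalies(signal)))

A point detector like this scores each sample independently, which hints at the evaluation challenge mentioned above: point-wise metrics such as per-sample precision and recall can misrepresent how well a detector covers an entire anomalous range.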

Bio: Justin Gottschlich is the lead artificial intelligence researcher for the programming systems group at Intel Labs. He is also Intel's principal investigator and co-founder of the joint Intel/NSF CAPA research center for programmability of heterogeneous systems. Justin conducts research in machine learning with an emphasis on machine programming, anomaly detection, and autonomous systems. He also oversees academic collaborations with Berkeley, Brown, MIT, Stanford, TAMU, and UW, as well as an industrial collaboration between Intel and BMW on anomaly detection for autonomous vehicles. Justin is a founding member of the Machine Learning and Programming Languages (MAPL) workshop. He has served as both general chair and program chair for MAPL and TRANSACT and was previously the vice-chair of the C++ Standards Transactional Memory Working Group (SG5). Justin has around 30 peer-reviewed publications and has given over two dozen invited or research presentations in industry and academia. He holds around 20 patents, with over 50 pending. Justin completed his PhD at the University of Colorado Boulder, where he worked on optimizations for transactional memory systems. For fun, he runs a small online gaming company, Nodeka, LLC, where he has written around 500k lines of optimized, low-level C++ code.


Grading

Grading will be based on class participation, a paper and/or tool-demo presentation, and a research project. The project will consist of one or more of the following: tool development, techniques for narrowing the gap between simulation and real-system testing, analysis of industry-level data for anomaly detection, and/or a WPE II-quality survey paper.


Last updated on 1/28/19 by Taylor Carpenter.