The Adaptive Systems and Interaction team studies and develops principles, applications, and tools that extend our understanding of computational intelligence and support the development and fielding of trustworthy AI systems. Our work is motivated by the goal of building systems that perform well amid the complexities of the open world, whether executing autonomously or collaborating with people. Efforts include developing AI systems that complement and coordinate well with people, and endowing systems with the ability to explain their goals and reasoning. In this realm, our team is developing principles and guidelines for supporting human-AI interaction and collaboration. Achieving trustworthy and robust AI systems requires engineering and reasoning methods that can recognize a system's weaknesses as it performs inference and decision making in different settings; work in this area includes principles and tools for developing and fielding robust, intelligible, and trustworthy AI systems. Finally, we seek to understand the influences of AI technologies on people and society as a whole, including ethical concerns, the potential impact of AI systems on the future of work, and issues of privacy and AI safety. In our work, we pursue efforts in algorithms, design, user studies, and analyses of behavioral data.