- Dr. Karen Feigh
- Montgomery Knight Building, Rm. 419, 270 Ferst Drive, Atlanta, GA 30332-0150, United States
- (404) 385-7686
Karen Feigh is a Professor in the Daniel Guggenheim School of Aerospace Engineering. She holds a B.S. in Aerospace Engineering from Georgia Tech, an MPhil in Aeronautics from Cranfield University, UK, and a Ph.D. in Industrial and Systems Engineering from Georgia Tech. She has previously worked on fast-time air traffic simulation, conducted ethnographic studies of airline and fractional-ownership operations control centers, and designed expert systems for air traffic control towers. Her doctoral work, conducted at Georgia Tech's Cognitive Engineering Center, used cognitive engineering methods to improve support-system design so that it more closely matches the dynamic needs of airline operations managers recovering from irregular operations. Her awards include the Marshall Scholarship and the AIAA Orville and Wilbur Wright Graduate Award.
Dr. Feigh's research interests include:
- Decision Support System Design
- How to design support systems for naturalistic decision making often found in aviation domains?
- How to design control algorithms to explicitly account for human limitations and actively bound and manipulate human workload?
- Computational Cognitive Modeling for Engineering Design
- How to incorporate cognitive models into the engineering design process?
- How to model human cognition at a level of abstraction appropriate for engineering design?
- How to advance theories of cognitive engineering into the realm of computation such that descriptive models can be transformed into prescriptive ones?
She is active in the design of cognitive work support systems for individuals and teams in dynamic socio-technical settings, including airline operations, air transportation systems, UAV and MAV ground control stations, mission control centers, and command and control centers.
- Ph.D. Industrial and Systems Engineering, Georgia Institute of Technology
- M. Phil. Engineering, Cranfield University
- B.S. Aerospace Engineering, Georgia Institute of Technology
- Professor, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, Georgia (2020-Present)
- Associate Professor, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, Georgia (2014-2020)
- Assistant Professor, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, Georgia (2008-2014)
- Research Associate, Human Centered Systems, Advanced Technology, Honeywell (2007-2008)
- Multi-disciplinary Engineer, CAASD, MITRE Corporation (2001)
- AIAA Orville and Wilbur Wright Graduate Award, 2006
- Zonta International Amelia Earhart Fellowship, 2005
- National Science Foundation Graduate Research Fellowship, 2001-2006
- Marshall Scholarship, 2001-2003
- AE 1601 - Introduction to Aerospace Engineering
- AE 3521 - Aircraft and Spacecraft Flight Dynamics
- AE 4552 - Humans and Autonomy
- AE 6551 - Cognitive Engineering
- AE 6721 - Evaluation of Human Integrated Systems
- AE 6551 - Human Contribution to Safety
Human teams are most effective when their members share a mental model (SMM): a common perception of goals and actions, built through effective communication and an understanding of fellow team members' goals and likely methods. Human-AI teams currently share no such model. At best, humans working closely with AI learn to anticipate what the AI can do and when it can be trusted, as is the case in medical decision making.
At the very core of technological acceptance is human-machine trust and its fragility. In this project we have proposed, and are testing, models of human-machine trust that include its antecedents, such as faith in technology, familiarity, and situation awareness. Our research also incorporates the expectation that automation will be cooperative, will behave ethically and legally, and will abide by social norms, drawing on fields ranging from human-robot interaction to game theory, social psychology, and management and information science.
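To make the idea of trust antecedents concrete, the toy sketch below (an illustration, not the project's actual model; all weights and function names are hypothetical) seeds an initial trust level from antecedents such as faith in technology, familiarity, and situation awareness, then updates it after each interaction depending on whether the automation met expectations.

```python
# Toy sketch of trust with antecedents; NOT the project's actual model.
# All weights and names below are illustrative assumptions.

def initial_trust(faith=0.5, familiarity=0.5, situation_awareness=0.5,
                  weights=(0.4, 0.3, 0.3)):
    """Seed trust from a weighted sum of antecedents (weights sum to 1)."""
    w1, w2, w3 = weights
    return w1 * faith + w2 * familiarity + w3 * situation_awareness

def update_trust(trust, performed_as_expected, learning_rate=0.2):
    """Move trust toward 1.0 when expectations are met, toward 0.0 otherwise."""
    target = 1.0 if performed_as_expected else 0.0
    return trust + learning_rate * (target - trust)

# A user with high faith but low situation awareness starts at moderate trust;
# one violation of expectations then erodes it.
t = initial_trust(faith=0.8, familiarity=0.6, situation_awareness=0.4)  # 0.62
t = update_trust(t, performed_as_expected=False)  # ~0.50
```

A richer model would make the update asymmetric, since trust is typically lost faster than it is regained; the symmetric learning rate here is purely for brevity.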
NASA’s future missions will push the bounds of human space exploration, challenging mission designers and engineers to create automated systems that enable joint human-automation teams to operate more autonomously as they move farther from terrestrial mission control and communication delays grow.
Objective Function Allocation Method for Human-Automation/Robotic Interaction using Work Models that Compute
Future manned space missions will require astronauts to work with a variety of robotic systems. To develop effective human-robot teams, NASA needs objective methods for allocating functions between humans and robots, and this study develops such a methodology. Problems that function allocation must address include: (a) monitoring of agents, (b) agents waiting on other agents (idle time), (c) high agent task load, and (d) excessive communication requirements.
NSF-CPS: Adaptive Intelligence for Cyber-Physical Automotive Active Safety System Design and Evaluation
The main objective of this research is to use techniques and models from human factors, computational neuroscience, and adaptive and real-time optimal control theory to investigate the effects of introducing learning and adaptation into the next generation of ASCS. In particular, we will:
(a) Learn the driver’s habits, driving skills, patterns and weaknesses.
(b) Model his/her current cognitive state along multiple dimensions such as attentiveness, aggressiveness, etc.