Research 2.0: Confirmation Bias as a Human Aspect in Software Engineering

Date

January 16, 2013

Speaker

Gul Calikli

Affiliation

Ryerson University

Overview

Background: Data mining methods are used in empirical software engineering research to predict, diagnose, and plan for various tasks during the software development process. Such prediction models enhance managerial decision making. To date, these techniques have relied on product- and process-related metrics to build predictive models.
Aims: Software is designed, implemented, and tested by people. It is therefore important to gain insight into people’s thought processes and problem-solving skills in order to improve software quality. While solving problems during any phase of the Software Development Life Cycle (SDLC), software engineers employ heuristics. These heuristics may result in “cognitive biases”, which are defined as patterned deviations of human thought from the laws of logic and mathematics. In this research, we focus on a specific cognitive bias called “confirmation bias”: the tendency of people to seek evidence that verifies a hypothesis rather than evidence that falsifies it.
Method: We defined a methodology to quantify and measure the confirmation biases of software engineers, drawing on established theories from the cognitive psychology literature. The result is a “confirmation bias metrics set”.
Results: Our empirical results demonstrate that developers’ confirmation biases have a significant impact on the defect proneness of software. We found that individuals trained in logical reasoning and hypothesis-testing techniques exhibit less confirmatory behavior. Using developers’ confirmation bias metric values as input, we built learning-based models to predict defective parts of software, alongside models learned from static code and churn metrics. The performance of defect prediction models built using only confirmation bias metrics was comparable with that of models using static code and/or churn metrics.
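The learning-based setup described above can be sketched as follows. This is a minimal illustration, not the speaker's actual pipeline: the talk does not specify the learner or the individual metrics, so the two features (a confirmatory-query ratio and a reasoning-test score) and the synthetic data below are hypothetical, and plain logistic regression stands in for whatever model was actually used.

```python
# Hypothetical sketch: predict defect-prone modules from per-developer
# confirmation bias metric values using logistic regression trained by
# stochastic gradient descent (pure stdlib, no external libraries).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights w and bias b by per-example gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """1 = module predicted defective, 0 = predicted clean."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Synthetic feature vectors for six modules' owners:
# [confirmatory-query ratio, hypothesis-testing score] -- invented values.
X = [[0.8, 0.2], [0.9, 0.1], [0.7, 0.3], [0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]
y = [1, 1, 1, 0, 0, 0]  # 1 = module later found defective

w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
print(preds)
```

In practice one would evaluate such a model on held-out modules (e.g., with cross-validation and probability-of-detection/false-alarm rates), which is how the comparison against static code and churn metrics in the talk would be carried out.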
Conclusions: We believe that the next generation of empirical research in software engineering will bring more value to practice through a better understanding of developer characteristics. Tool support is also necessary to measure, store, and analyze such characteristics.

Speakers

Gul Calikli

Gul Calikli holds PhD and MSc degrees in Computer Engineering and a BS degree in Mechanical Engineering from Bogazici University. She previously worked as a Research and Development Engineer at Alarko-CARRIER San. & Tic. A. S. and later as a Researcher at the Software Research Laboratory of the Computer Engineering Department of Bogazici University. She is currently a Postdoctoral Research Fellow at the Data Science Laboratory at Ryerson University. Her research interests are empirical software engineering, software measurement, cognitive aspects of software engineering, software defect prediction, mining data repositories, and machine learning. She is a member of the IEEE Computer Society and the ACM.