Peer code review is a widely practiced software engineering process in which software developers collaboratively evaluate and improve source code quality. Whether developers can perform good reviews depends on whether they have sufficient competence and experience. However, knowledge of what competencies developers need to execute code reviews is currently limited, hindering, for example, the creation of effective support tools and training strategies. To address this gap, we first identified 27 competencies relevant to performing code review through expert validation. We then conducted an online survey with 105 reviewers to rank these competencies along four dimensions: frequency of use, importance, proficiency, and reviewers' desire to improve in that competency. The survey shows that technical competencies are considered essential to performing reviews and that respondents feel generally confident in their technical proficiency. However, reviewers feel less confident in how to communicate clearly and give constructive feedback, competencies they likewise consider an essential part of reviewing. Therefore, research and education should focus in more detail on how to support and develop reviewers' ability to communicate effectively during reviews. In the paper, we also discuss further implications for training, code review performance assessment, and reviewers of different experience levels. Data and materials: https://doi.org/10.5281/zenodo.7401313