"Context: Accurate data about bug fixes is important for different venues of research, e.g., program repair. While automated procedures are able to identify bug fixing activities, they cannot distinguish between the bug fix and other activities that are happening in parallel, e.g., refactorings or the addition of features.
Objective: To create a large corpus of manually validated bug fixes and to gain insights into the limitations of manual validation.
Method: We use a crowd working approach to manually validate bug fixing commits and analyze the limitations of manual validation.
Limitations: The insights are limited to the Java programming language and possibly affected by the participants of the crowd working."