
Information and Software Technology 150 (2022) 106972

Information and Software Technology

journal homepage: www.elsevier.com/locate/infsof

Taxonomy of bug tracking process smells: Perceptions of practitioners and an empirical analysis✩

Khushbakht Ali Qamar, Emre Sülün, Eray Tüzün∗
Department of Computer Engineering, Bilkent University, Ankara, Turkey

ARTICLE INFO

Keywords:
Bug tracking system
Process mining
Conformance checking
Anti-patterns
Bug tracking smells
Process smell

ABSTRACT

Context: While there is no consensus on a formally specified bug tracking process, certain rules and best practices for an optimal bug tracking process are accepted by many companies and open-source software (OSS) projects. Despite slight variations between platforms, the primary aim of all these rules and practices is a more efficient bug tracking process. Practitioners' non-compliance with the best practices not only impedes the benefits of the bug tracking process but also negatively affects the other phases of the software development life cycle.
Objective: The goal of this study is to gain a better understanding of the bad practices that occur during the bug tracking process (bug tracking process smells) and to perform a quantitative analysis showing that these process smells exist in bug tracking systems. Moreover, we want to know how software practitioners perceive these process smells and to observe the impact of process smells on the bug tracking process.
Methods: Based on the results of a multivocal literature review, we analyzed 60 sources in the academic and gray literature and propose a taxonomy of 12 bad practices in the bug tracking process. To quantitatively analyze these process smells, we inspected bug reports collected from eight projects that use Jira, Bugzilla, and GitHub Issues. To gauge practitioners' perception of the taxonomy of bug tracking process smells, we conducted a targeted survey with 30 software practitioners. Moreover, we statistically analyzed the impact of bug tracking process smells on the resolution time and reopening count of bugs.
Results: Our empirical results show that a considerable number of bug tracking process smells exist in all projects and that some of the process smell categories have statistically significant impacts on quality and speed. Survey results show that the majority of software practitioners agree with the proposed taxonomy of BT process smells.
Conclusion: The statistical analysis reveals that bug tracking process smells have an impact on OSS projects. The proposed taxonomy may serve as a foundation for best practices and for tool support that detects and avoids bug tracking process smells.

1. Introduction

Bug tracking (BT) is a methodology for reporting and keeping track of bugs in software products. While there is no formal agreement on a BT process, certain rules and best practices for an optimal BT process are reported in both white [1–3] and gray literature.1
To improve and avoid potential non-compliances in the BT process, many studies in the literature aim to build automated approaches for bug tracking, such as summarizing essential information [4–6], filtering duplicates [7,8], and predicting features (e.g. severity [9–14], priority [15,16], and reopened/blocking status [17,18]) for a reported bug. Moreover, automated bug triage approaches [19–21] have been proposed for recommending the best developers for bug fixing. Automatic tools have also been developed by researchers to find bugs [22,23] and generate patches [16,24–27].
To collect the set of deviations from the best practices, we explore the bad practices that developers follow throughout the BT process. To denote these bad practices, we use the term bug tracking process smells. Some of the issues in the BT process have been referred to in previous work from various perspectives, such as the reassignment of fields [28] and bug reports without comments [1]. However, none of them have gathered the bad practices followed during the BT process with a systematic approach. It would be valuable to systematically

✩ This study was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) 1505 program. Project Number: 5200078.
∗ Corresponding author.
E-mail addresses: ali.qamar@bilkent.edu.tr (K.A. Qamar), emre.sulun@bilkent.edu.tr (E. Sülün), eraytuzun@cs.bilkent.edu.tr (E. Tüzün).
1 https://bugs.eclipse.org/bugs/page.cgi?id=bug-writing.html

https://doi.org/10.1016/j.infsof.2022.106972
Received 1 December 2021; Received in revised form 4 April 2022; Accepted 27 May 2022
Available online 3 June 2022
0950-5849/© 2022 Elsevier B.V. All rights reserved.

collect these bad practices for software process improvement initiatives in bug tracking. We believe that this set of bad practices would be valuable for software practitioners to identify possible bottlenecks or problem areas in the bug tracking process. Moreover, if bugs are found in a timely manner and assigned to the proper developer, they are easier to fix, which not only reduces costs but also contributes to the system's success.
To the best of our knowledge, this is the first systematic study to collect bad practices in the BT process. To explore these bad practices further, we identify the following research questions within our study:
RQ1 — What are the observed bad practices followed by developers during the bug tracking process?
To address this RQ, we reviewed the academic and gray literature and focused on the studies that address bad practices (anti-patterns) and problems encountered during the BT process. By synthesizing this information, we proposed a taxonomy of BT process smells to illustrate the cases where developers do not conform to the ideal BT process.
To explore the existence of these BT process smells in practice, the following research question is defined:
RQ2 — How frequently does each BT process smell occur in practice?
To answer this research question, each BT process smell is empirically evaluated by mining the BT repositories of eight projects: two Bugzilla projects (Wireshark, GCC), five Jira projects (MongoDB Core Server, Evergreen, Confluence Server & Data Center, referred to as ''Confluence'' in this paper, Jira Server & Data Center, referred to as ''Jira Server'', and Apache YuniKorn), and one GitHub Issues project (Apache ShenYu).
The previous two RQs were explored in our previous study [29]. We extend our prior work in this paper with the addition of two more RQs.
To explore the perception of software practitioners about BT process smells, the following research question is defined:
RQ3 — What is the perception of software practitioners about the bug tracking process smells?
To answer this research question, we conducted a survey with 30 software practitioners who have experience with the bug tracking process. From the survey results, we get expert opinions on BT process smells and investigate whether any actions are taken in different companies to avoid these process smells.
Finally, we wanted to analyze whether these process smells have an impact on the BT process. Therefore, we statistically analyzed the effect of process smells on BT quality parameters such as the time to resolution (TTR) of a bug and the reopening count of a bug. This leads to the final research question:
RQ4 — What is the impact of BT process smells on the TTR and reopen count of bugs?
To answer this research question, we statistically analyzed the effect of process smell types on the TTR and reopening of bugs. We used statistical tests to analyze whether the effect is statistically significant or not.
The following section discusses the background of the BT process. Section 3 explains our research methodology. Section 4 presents our BT process smells taxonomy and the technique for detecting each type of smell. Section 5 presents the mining of BT process smells on eight projects. Section 6 describes the perception of software practitioners about the proposed taxonomy of BT process smells. Section 7 explains the hypothesis testing we performed to analyze the impact of BT process smells. Section 8 discusses the context-dependency of BT process smells and implications for researchers & software practitioners to avoid the smells during the BT process. Section 9 addresses the validity threats of this study. Section 10 describes the related work, and lastly, Section 11 presents our conclusion and future work.

2. Background

Bug tracking is the process of streamlining and keeping track of reported software issues and defects. A good bug tracking method is self-explanatory and consistent. In order to check the conformance of developers to the bug tracking process, a generic bug tracking process model needs to be defined as a baseline. Therefore, we conducted a gray literature review on the bug tracking process. Figs. 1 and 2 illustrate the activity and state diagrams of the generic bug tracking process, respectively. The bug tracking cycle begins when users or quality assurance testers spot an issue in the running program, or when developers spot an issue in the code. Combining all perspectives, the generic process model of bug tracking is defined as follows:
(1) Whenever a bug is reported, it is compared to existing bugs to check for duplicates; it is run through the Bug Tracking Software's (BTS) bug database to check whether any such bug is already open.
(2) If a duplicate is not found, a new bug is opened.
(3) After a bug is opened and a developer is assigned to it, this developer starts working on the bug to fix it.
(4) When the assigned developer believes he/she is done with the bug, he/she submits it for review by the reviewer.
(5) If the bug fails the review, it is reopened with feedback from the overseer, after which the developer fixes the problems with the bug using said feedback and submits the bug for review once again.
(6) If the bug passes the review, it is merged to its parent branch, after which it goes through a series of tests to measure the robustness of its integration into this parent branch.
A bug life cycle describes the workflow of a bug from its creation to its resolution [30]. Fig. 2 shows the default workflow of Bugzilla,2 where the vertices represent the states and the edges represent the transitions between states. When a bug is submitted, its state is either New or Unconfirmed. A new bug from a user who has permission to create a bug directly in the system without an Unconfirmed state is registered as New, while others are registered as Unconfirmed. When a bug is proven or obtains enough votes, its status moves from Unconfirmed to New. The status of a bug that is New or Unconfirmed switches to Assigned when a developer is assigned to it. When a developer successfully fixes a bug, the status changes to Resolved. The bug's status changes back to New when the developer stops working on it. When a bug is not genuine or is repaired willingly before it is assigned to someone, unconfirmed and new bugs can also shift to the Resolved status directly. Finally, the status of a resolved bug switches to Verified once it has been verified by the quality assurance (QA) team. The bug status switches to Reopen if the quality assurance team is not happy with the solution or if the bug recurs. A bug that has been reopened can be assigned and resolved again. If a bug in the Resolved or Verified state is reopened and never validated, it will revert to the Unconfirmed state. When a bug has been verified or resolved, it is marked as Closed.

2 https://www.bugzilla.org/docs/3.6/en/html/lifecycle.html


3. Research methodology

We follow a mixed-methods-based approach to support our qualitative analysis with quantitative results [31]. Fig. 3 depicts an overview of the research approach. The objective of this study is to recognize the bad practices (smells) in the BT process (RQ1) and to mine bug repositories (RQ2) to show quantitatively that these smell categories exist in bug tracking systems (further details of the study setup are given in Section 5). Moreover, to understand the perception of software practitioners, we conduct a survey (RQ3). Finally, to observe the practical effects of process smells on the BT process, a statistical analysis is performed (RQ4).

Fig. 1. Activity diagram for the bug tracking process.

3.1. Multivocal literature review (MLR) - creating taxonomy

We selected MLR as our review method following the guidelines of Garousi et al. [32]. Generally, in a systematic literature review (SLR), the search is limited to the academic literature [33]. However, many software developers do not publish their work formally in academic forums [34]. Thus, we included gray literature (GL; blog posts, white papers, articles, etc.) as well. Although we acknowledge that blogs may not be as reliable as scientific papers, we consider them an important source through which we can investigate the voice of practitioners. We believe that including the GL increases the usefulness and relevance of our study from both industrial and academic points of view (practitioners and researchers) [35]. Therefore, we constructed the literature search in two major steps.
(1) First, we conducted an SLR and performed a search using a search string in Google Scholar, then used those results as a starting point for backward and forward snowballing of the academic literature.
(2) Second, the Google Search Engine was used to search for relevant GL sources.
The flow diagram of our MLR is shown in Fig. 4.

3.1.1. Search strategy
Kitchenham's [36] guidelines are followed when scanning the academic literature. A typical systematic literature review, according to Kitchenham, can be divided into three phases: planning, conducting, and reporting the review. These steps are briefly described in the subsections that follow. The major goal of this research is to identify the problematic behaviors that occur throughout the bug tracking process and organize them into a taxonomy. The need to review the white literature arises in order to avoid missing any earlier investigations pertaining to any of the proposed process smells. The authors devised and discussed a review methodology after clarifying the rationale for this SLR. Before starting the search, we defined the search strings. Then, we used Google Scholar for formally published academic papers and the Google search engine for GL. The initial search for academic papers gave us 1704 results related to our search string. The search string for academic literature is as follows:
(''Issue'' OR ''Bug'' OR ''Defect'') AND (''Tracking'' OR ''Management'') AND (''Bad Practice'' OR ''Smell'' OR ''Anti-pattern'')


For GL, we searched for the terms ''bug tracking'' (219 results), ''bug management'' (208 results), ''defect tracking'' (230 results), ''defect management'' (255 results), ''issue tracking'' (234 results), and ''issue management'' (244 results) and their back-links on Google Search. We came up with a total of 1390 sources in the initial pool. To include all the relevant sources, we conducted snowballing, as suggested by SLR guidelines [33], on the set of selected papers in our initial search pool.

Fig. 2. Default workflow of the bug tracking process in Bugzilla.

Afterward, we conducted the review using thematic analysis, a systematic process for extracting recurring themes from a series of papers. The standards of Cruzes and Dybå [37] were followed for the thematic analysis. For each primary study, two authors independently read the study and identified segments of text that discuss ''bad practices''/''smells''/''anti-patterns'' in the bug tracking process. They translated these texts into summary themes. In the second iteration, all of the authors went through the summary themes, merged similar themes wherever possible, and translated the codes into common themes to create the taxonomy.

3.1.2. Exclusion/inclusion criteria & quality assessment
We defined the exclusion and inclusion criteria carefully to ensure that we included all of the sources that are relevant to our study and excluded all of the out-of-scope sources. For the quality evaluation of sources, we followed the checklist proposed by Garousi et al. [32]. As inclusion criteria, we searched for studies describing the non-ideal or bad practices followed during the BT process. We included studies that discuss anti-patterns in BT and deviations from the ideal BT process. As an exclusion criterion, we eliminated studies written in a language other than English. In the gray literature, we also only considered relevant sources with clear author information (anonymous sources were eliminated).

3.1.3. Data extraction & final pool of sources
For the academic literature, we came up with a list of 35 primary studies after applying the inclusion/exclusion criteria. Each resulting paper was mapped to the related BT process smell. For GL, we shortlisted 25 sources, each referring to at least one smell or bad practice. We have provided the finalized source list in the Appendix at the end of this paper in Tables 8 and 9 to ensure the transparency of our study.

3.2. Mining repositories

After the MLR, the taxonomy of BT process smells is finalized. Then, each process smell is evaluated on eight OSS projects that use Jira, Bugzilla, and GitHub Issues as their BT tool. While selecting the projects, in addition to the data availability criteria, we considered tool and organization diversity; the projects belong to five different organizations (Atlassian, Mongo, GNU Project, Wireshark Team, and Apache) and use three different issue tracking tools (Jira, Bugzilla, and GitHub Issues). We provide the results of mining repositories in Section 5.

3.3. Software practitioners' survey

Related to RQ3, we want to know the perception of software practitioners about our proposed taxonomy of BT process smells. For this purpose, we conducted a survey with software practitioners. Following the MLR, we created an extensive survey3 for software practitioners who are actively taking part in the BT process, using the guidelines from Kitchenham and Pfleeger [38]. The aims of conducting this survey are:
• To evaluate if the definitions of BT process smells are acceptable to software practitioners.
• To get feedback on the process smell detection methods (specifically, thresholds for classifying an instance as a BT process smell).
• To get experts' opinions on the potential side effects and root causes of each process smell.
• To get information about the actions taken by different software companies to avoid these process smells.
We first piloted our survey with six respondents from our research group who also work in the industry. These six responses helped us refine the survey and validate its ability to answer our RQ. To complete our survey, we contacted a total of 60 software practitioners (developers, team leads, QA, etc.) from the authors' professional and personal networks. The survey was completed by 30 software practitioners. All respondents work for software organizations around the globe (e.g. Turkey, Pakistan, Denmark, Sweden). We used convenience sampling to pick the participants rather than relying on a public survey. Our survey is quite extensive, as it covers all 12 bug tracking process smells; it takes around 45–50 min on average to complete. The respondents have an average software development experience of 8 years and an average bug tracking experience of 6.2 years. Table 1 shows the demographic information of the survey respondents, and Fig. 5 shows their job positions. In the survey, each process smell is defined in detail, along with a real-life example. The respondents are then asked a series of questions on how familiar they are with each process smell. We also requested their feedback on a set of ''thresholds'' we employed in our empirical study. Section 6 goes into the survey's setup and results.

3 https://figshare.com/s/c493b64e4f1b5ef4337f


Fig. 3. Research methodology followed in this study.

Fig. 4. Flow diagram of MLR.

Table 1
Demographic information of survey respondents.

Company type                    No. of employees   Responses   Gender             Avg. BT experience (years)
Large-size software companies   1000+              13          5 female, 8 male   7
Midsize software companies      200+               7           2 female, 5 male   9
Small-size software companies   50–100             10          1 female, 9 male   5

3.4. Hypothesis testing

To analyze the effect of process smells on time to resolution (TTR) and bug reopening, we use the Mann–Whitney U test [39], a non-parametric statistical test. We chose this test because the data are not normally distributed and we are comparing two independent groups (i.e. bugs with a process smell and bugs with no process smell). Further details about hypothesis testing are explained in Section 7.
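As an illustration of this test, the sketch below runs a two-sided Mann–Whitney U test with SciPy on two hypothetical TTR samples; the values are placeholders, not data from this study.

```python
# Minimal sketch of the Mann-Whitney U test used in this study,
# with hypothetical TTR samples (in days); not the paper's actual data.
from scipy.stats import mannwhitneyu

ttr_with_smell = [12.0, 30.5, 7.2, 44.0, 18.9]   # bugs exhibiting a process smell
ttr_without_smell = [3.1, 9.4, 5.5, 2.0, 11.7]   # bugs with no process smell

# Two-sided test: are the two TTR distributions different?
stat, p_value = mannwhitneyu(ttr_with_smell, ttr_without_smell,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")  # reject H0 at alpha = 0.05 if p < 0.05
```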
4. Taxonomy of bug tracking process smells (RQ1)

RQ1 is about compiling the bad practices that are followed by practitioners during the BT process. An MLR was conducted and we observed the potential bad practices that do not comply with the ideal flow of the BT process. This led us to 12 BT process smells. A summary of the proposed BT process smells taxonomy is given in Table 2, and the potential impacts are visualized in Fig. 6. In this section, each smell is introduced in the following format: (1) a detailed description of the smell; (2) our smell detection methodology.

Fig. 5. Job positions of survey respondents.

4.1. Unassigned bugs

Each reported bug must be triaged to decide whether it describes a significant and new enhancement or problem, and it must be assigned to an appropriate developer for further investigation [40]. Once a bug is assigned and its status is set as open, it is resolved according to its priority, its severity, and the workload of the developers. However, this


Table 2
Summary table of bug tracking process smells (smell name — description; possible root causes).

Unassigned Bugs — No assignee, but the bug is fixed and closed. Root causes: developer's availability; time pressure on the triager; triager could not find an expert developer.
No Link to Bug-Fixing Commit — The bug is closed without any link to a bug-fixing commit. Root causes: developer forgets to mention the link; committing a link is not straightforward in the BTS; weak understanding of the BTS.
Ignored Bugs — Bugs that are left open for a long time or bugs that have an Incomplete resolution. Root causes: incorrect severity indication; inadequate bug description; incorrect prioritization; overlooked bug.
Bugs Assigned to a Team — Bugs that are assigned to a team but not a particular assignee. Root causes: mistakenly assigned by the triager; unavailability of a developer.
Missing Priority — The priority of the bug has not been set. Root causes: expertise of the triager; wrong bug severity; overlooked by the triager.
Not Referenced Duplicates — Duplicate bugs that do not have a reference to their original bug. Root causes: person marking the duplicate is new to the system; being unaware of previous bugs; poor search feature of the BTS for finding duplicates.
Missing Environment Information — Environment information (version, OS, and component of the product) about the bug is missing. Root causes: reporter forgets to mention it; reporter does not know the environment details; reporter was short of time.
Missing Severity — The severity of the bug has not been set. Root causes: triager overlooks it.
Reassignment of Bug Assignee — The fixer for the bug is assigned more than once. Root causes: reassignment of some fields causes others to be reassigned; triager does not know the suitable developer; no developer recommendation system integrated into the BTS; admin batch operations.
No Comment Bugs — A resolved bug with no comments. Root causes: ignored bug; developers/contributors forget to write comments; developer might be too busy to write a comment.
Non-Assignee Resolver of Bug — A bug is resolved by a person other than the assignee(s). Root causes: assignee forgot to close it; bug was originally resolved by someone else; bug can be closed by administrative roles; assignee might not be able to resolve it and tosses it to other developers.
Closed–Reopen Ping Pong — Bugs that are reopened. Root causes: insufficient unit testing; ambiguous bug specifications; changed bug scope; poorly/incorrectly fixed bugs; tester not testing properly.

Fig. 6. Impacts of bug tracking smells.

is a time-consuming process, and research has been done to automatically assign a competent developer to new bug reports [20,41]. Bug assignment lets users focus on their assigned duty and eases the process of tracking a bug's life cycle. However, it is observed that some bugs have no assignee at all even though they are resolved. This is a potential indicator that the BT process is not followed properly [42]. Thus, we consider it a process smell and call it an unassigned bug. The potential impact of not assigning a developer to a bug is a loss of traceability [43] and delays in bug resolution [44].
Smell Detection Method: First, we check whether the bug is fixed and closed. If so, we check whether the assignee field is empty or not. However, there are also some cases where the assignee field is not empty but an invalid email address is written. For instance, if the unassigned@gcc.gnu.org address is used as an assignee email, we consider this case a smell as well.

4.2. No link to bug-fixing commit

In software engineering, software artifacts, especially commit logs and bug reports, are widely used. To obtain useful information about a software project's evolution and history, all bug-fixing commits should be linked to their respective bug reports [45]. If a bug is closed without any link to the bug-fixing commit, then in the future it will be difficult to discover what happened with that bug. We evaluate this lack of traceability as a process smell and call it No Link to Bug-Fixing Commit. The potential impact of this process smell is losing track of bugs, which eventually decreases their traceability [46]. It also affects related software development tasks such as the prediction of bug locations, the recommendation of bug fixes, and software cost estimation [46].
Smell Detection Method: First, we check whether the bug is fixed. If so, we check the comments and designated fields to find a link to the version control system.
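A compact sketch of these two checks is shown below, assuming bug reports are plain dictionaries produced by mining scripts; the field names (status, resolution, assignee, comments) and the placeholder-assignee set are illustrative assumptions, not the exact schema of any specific tracker.

```python
import re

# Assumed placeholder assignees; real trackers use project-specific values.
PLACEHOLDER_ASSIGNEES = {None, "", "unassigned@gcc.gnu.org"}
# Commit links may appear as VCS URLs or full hashes; this pattern is illustrative.
COMMIT_LINK = re.compile(
    r"(github\.com/.+/commit/|gitlab\.com/.+/-/commit/|\b[0-9a-f]{40}\b)")

def is_unassigned_bug(bug: dict) -> bool:
    """Unassigned Bugs: fixed and closed, yet no (valid) assignee."""
    fixed_and_closed = (bug.get("status") == "CLOSED"
                        and bug.get("resolution") == "FIXED")
    return fixed_and_closed and bug.get("assignee") in PLACEHOLDER_ASSIGNEES

def has_no_link_to_commit(bug: dict) -> bool:
    """No Link to Bug-Fixing Commit: fixed, but no VCS link in comments/fields."""
    if bug.get("resolution") != "FIXED":
        return False  # precondition not met: smell not applicable
    texts = bug.get("comments", []) + [bug.get("commit_field", "")]
    return not any(COMMIT_LINK.search(t or "") for t in texts)
```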


4.3. Ignored bugs

In a BTS, once a bug is opened, it should not be left unattended or open for a long time, as knowledge about the bug may be forgotten over time [47]. Even if it is not closed, some progress should be made toward resolving it. However, some bugs are neither assigned to anyone nor resolved; they are left in an idle state for a long time (three months or longer). Another scenario is an Incomplete resolution of bugs: some bugs are marked with the Incomplete resolution state, with no information given in the bug report about what has been done to make them incomplete. For both of the above-mentioned scenarios, we call this the Ignored Bugs smell. The potential impact of this process smell is a delay in the bug resolution process [48].
Smell Detection Method: We compare the dates of sequential activities in the bug history. While the bug is not resolved, if there is a gap longer than three months between any two activities, we consider it a smell.
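The gap check can be sketched as follows, assuming each bug carries a chronologically ordered list of activity timestamps; the 90-day constant is an approximation of the three-month threshold.

```python
from datetime import datetime, timedelta

# Three-month idle threshold, approximated as 90 days for this sketch.
IDLE_THRESHOLD = timedelta(days=90)

def is_ignored_bug(activity_dates: list[datetime], resolved: bool) -> bool:
    """Ignored Bugs: an unresolved bug with a gap longer than three months
    between two consecutive activities in its history."""
    if resolved:
        return False
    gaps = (later - earlier
            for earlier, later in zip(activity_dates, activity_dates[1:]))
    return any(gap > IDLE_THRESHOLD for gap in gaps)
```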
4.4. Bugs assigned to a team

During the bug resolution process, a bug must be assigned to a particular developer so that it can be resolved. During our analysis, we observed that some bugs are assigned to a team rather than a particular developer. Moreover, in some of those cases, these bugs had no fixes or comments in a short time, which is a deviation from the ideal BT process. In distributed environments, a network of people may cause a dispersal of responsibility, as ''everybody's problem becomes nobody's responsibility'' [49]. Whenever a bug is assigned to a team without specifying which member of the team is going to solve it, it becomes everyone's problem but no one's individual responsibility. We call these Bugs Assigned to a Team smell. Potential impacts of this process smell are loss of traceability and accountability of bugs [50]. The problem of team bug assignment was introduced by Jonsson et al. [50,51], in which bugs are redirected to one of the many available teams instead of a single developer.
Smell Detection Method: First, we check whether the bug is assigned. If so, we search for the selected keywords ''team'', ''group'', and ''backlog'' in the assignee field. We found those keywords by manually inspecting the assignee names in each project. For example, in the MongoDB Core Server project history, there are some bugs assigned to Backlog-Sharding Team, which we consider a smell.

4.5. Missing priority

During the life cycle of a bug, the bug's priority plays a very important role in its resolution [15]. Whenever a bug is reported, many other functionalities of the system may depend on that particular bug, so it needs to be resolved as soon as possible. Priority refers to how quickly a bug needs to be resolved and the order in which developers have to fix bugs [52,53]. Correctly assigning bug priority is integral to successfully planning a software development life cycle. There are different levels of bug priority, e.g. Low priority, Medium priority, and High priority, depending on the bug's effect on the system. We consider not prioritizing bugs a process smell and call it missing priority. The potential impact of this process smell is on development-oriented decisions (time and resource allocation) [54].
Smell Detection Method: We check whether the priority field is valid or not. Some of the invalid priority strings are None, Not Evaluated, and ''–''.

4.6. Not referenced duplicates

If the problem of duplicate bug recognition is solved, it enables developers to fix bugs more efficiently rather than wasting time resolving the same bug [55,56]. However, it is observed that some bugs that are marked as duplicates in the BTS are not referenced to the original bug within the references section of the bug report. Instead, the reporters only put the duplicate keyword into the status section, which reduces traceability. Most bugs have their duplicate bug referenced correctly in the references section, which increases the traceability of the bug. However, some of them do not have their duplicate bug IDs in the references section. As far as we have observed, most of these bugs still reference the duplicate bug in some way, such as referring to it in the comment section. But some are marked as duplicates and do not have any reference to the duplicate bug, which is a deviation from ideal BT behavior. Therefore, we call this the not referenced duplicates smell. The potential impact of this process smell is on the identification of master bug reports [57].
Smell Detection Method: First, we check whether the bug is marked as a duplicate. If so, we check the linked issues field to find whether the duplicate bug is linked and has a reference to the other bug.

4.7. Missing environment information

During the BT process, it is necessary to know where the bug was encountered [53], as bugs often only happen in certain environments [58]. It is critical to ensure that all the information related to the environment of that particular bug is listed (e.g. the operating system (OS), the browser, the hardware and software versions, and the component of the system in which the bug was encountered [42]).
Every field in a bug report has its importance, and when present, these fields help the developer resolve the bug quickly. The version of the product is an important field: if version information is missing in the bug report, the developer does not know in which version the user or tester encountered the bug. So, missing version information in the bug report is considered a smell. To have complete information about a bug, the component information should also be mentioned in the bug report [3]; otherwise, it would be difficult to locate the bug within the product. Similarly, it is important to know on which OS the bug was encountered (e.g. Windows, Linux, macOS). Therefore, we also consider missing component and OS information a process smell. If all of this information is missing, we call this smell Missing Environment Information. The potential impact of this process smell is on bug replication: a bug with no environment information is difficult to reproduce [52].
Smell Detection Method: We check whether the component, version, environment, and operating system fields are empty. If all of these properties are empty, we consider it a smell.

4.8. Missing severity

The bug report is triaged and the severity (e.g. low, medium, high) of the bug is assigned after the bug report has been submitted. Assigning a bug severity is a resource-intensive task [9]. Severity ratings help in determining the priority of a bug, i.e. the order in which bugs should be fixed. A bug could be incorrectly prioritized if its severity is not mentioned, which in turn can affect the quality of the product being developed. Severity also helps determine to whom the bug should be assigned [42,53]. Therefore, it needs to be ensured that the bug severity is correctly assigned. Several methods have been developed to automate the assignment of bug severity [14]. In our case, if the severity information is missing, we consider it a process smell and call this smell missing severity. The potential impact of this process smell is that it affects resource allocation and the planning of other bug fixing activities [13].
Smell Detection Method: We check whether the severity field is valid or not. Some of the invalid severity strings are N/A and ''—''. The detection mechanism may change across different BT tools. For example, Jira does not include a severity field by default, but some organizations create their own custom fields; in the Atlassian organization, Symptom Severity and Common Vulnerability Scoring System (CVSS) severity fields are introduced. We treat them as an ordinary severity field while mining the bug tracking history.
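The three field-completeness checks (missing priority, missing environment information, missing severity) boil down to validating tracker fields against tool-specific sentinel values, roughly as sketched below; the field names and invalid-value sets are illustrative and would need to be configured per tracker.

```python
# Sentinel values meaning "not set"; configured per tracker in practice.
INVALID_PRIORITY = {None, "", "None", "Not Evaluated", "-"}
INVALID_SEVERITY = {None, "", "N/A", "-"}
ENVIRONMENT_FIELDS = ("component", "version", "environment", "os")

def has_missing_priority(bug: dict) -> bool:
    return bug.get("priority") in INVALID_PRIORITY

def has_missing_severity(bug: dict) -> bool:
    # E.g. for Atlassian projects, map the custom "Symptom Severity"
    # field onto "severity" before running this check.
    return bug.get("severity") in INVALID_SEVERITY

def has_missing_environment(bug: dict) -> bool:
    # Smell only when *all* environment-related fields are empty.
    return all(not bug.get(field) for field in ENVIRONMENT_FIELDS)
```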


4.9. Reassignment of bug assignee

Research shows that the time-to-correction for a bug is increased by the reassignment of developers to the bug [20]. Therefore, we consider the reassignment of developers to the same bug a process smell and call this smell reassignment of the bug assignee. The potential impact of this process smell is an increase in bug fixing time, which eventually delays the delivery of the product [28].
Smell Detection Method: The mining strategy for this smell is to look in the bug history for the assignee property. If the assignee field is changed more than twice, we count it as a smell. We also observed that some multiple assignee changes are done within a very short interval. We consider these multiple assignee changes a mistake and do not count them as a smell. The interval duration is set to five minutes; therefore, if multiple assignments are done within five minutes, they are counted as one assignment.
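The sketch below illustrates this heuristic, including the five-minute collapsing of rapid changes; the input is assumed to be a time-ordered list of (timestamp, new assignee) changes extracted from the bug history. The same collapsing logic can be reused for status changes in the Closed–Reopen Ping Pong smell (Section 4.12).

```python
from datetime import datetime, timedelta

COLLAPSE_WINDOW = timedelta(minutes=5)  # rapid changes are treated as one

def collapse_rapid_changes(
        changes: list[tuple[datetime, str]]) -> list[tuple[datetime, str]]:
    """Merge changes that occur within five minutes of the previous one."""
    collapsed: list[tuple[datetime, str]] = []
    for when, value in changes:
        if collapsed and when - collapsed[-1][0] <= COLLAPSE_WINDOW:
            collapsed[-1] = (when, value)  # overwrite: counts as a single change
        else:
            collapsed.append((when, value))
    return collapsed

def has_reassignment_smell(assignee_changes: list[tuple[datetime, str]]) -> bool:
    """Reassignment of Bug Assignee: assignee field changed more than twice."""
    return len(collapse_rapid_changes(assignee_changes)) > 2
```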
4.10. No comment bugs

Two of the important sources of information for developers during the software development life cycle are bug reports and the comments associated with them [1,53]. When stakeholders or developers do not understand a bug, comments play a vital part [59].
In a BTS, comments can be posted by anyone in response to an initial bug report; the comment count can therefore be used as a proxy for some notion of popularity. Textual contents of bug reports such as descriptions, summaries, and comments have been utilized by textual-information-analysis-based approaches to detect duplicate bugs [60]. The Bug Reopen predictor [17] uses features such as description features, comment features, and meta-features. For identifying blocking bugs [18], the most important features are comment size, comment text, the reporter's experience, and the count of developers. Comments on a bug report serve as a forum for discussing feature design alternatives and implementation details. Generally, developers who have an interest in the project or who want to participate post comments and engage in a discussion on how to fix the bug. Considering the importance of comments in bug reports, we consider a resolved bug report without any comments a process smell. We call this smell no comment bugs. The potential impact of this process smell is on bug resolution time and on other linked software development activities such as developer recommendation, duplicate bug detection, and bug reopening prediction [61,62].
Smell Detection Method: First, we check whether the bug is closed. If so, we check whether there is at least one comment on the bug.

4.11. Non-assignee resolver of bug

In a BTS, whenever a bug is encountered, it should be well documented and then resolved so that it can be traced later if required. Therefore, it is important for traceability that bugs are assigned and resolved by the same person during their life cycle. To promote traceability, whoever is the assignee of a bug should be the person who resolves it. Applying this is helpful for having a ''go-to person'' if things fail. If this practice is not followed, it can be difficult to understand why some person has resolved a bug. If someone other than the assigned-to developer resolves the bug report, we consider it a process smell and call it non-assignee resolver of the bug. The potential impact of this process smell is on the traceability of a bug [43].
Smell Detection Method: First, we check whether the bug is assigned and resolved. If so, we compare the assignee with the person who resolved the bug.

4.12. Closed–reopen ping pong

Reopened bugs are those that were previously closed by developers but were later reopened for various reasons (such as not being reproducible by the developer or an improperly tested fix). Reopened bugs reduce the overall software quality and increase maintenance expenses, as well as unnecessary rework by developers [63]. When a significant number of bugs in a project are reopened frequently, it can be an indication that the project is already in trouble or may be heading towards trouble soon [64]. We call this smell closed–reopen ping pong, as it is a ping pong among the bug states during the bug's life cycle. Potential impacts of reopened bugs are that they take notably longer to resolve [65]; they also increase development costs, affect product quality and the prediction of release dates, and reduce team morale, leading to poor productivity [64].
Smell Detection Method: We check the history of the status field of the bug. Some projects explicitly use the REOPENED value for the status field; for the others, we check whether the status field is changed from Closed to another value, and we count it as a smell. We also observed some multiple status changes within a very short interval. We consider these changes a mistake and do not count them as a smell. The interval duration is set to five minutes; therefore, if a bug status is changed multiple times within five minutes, it is counted as one change.


Table 3
The details of projects.
Project Starting year Ending year Number of issues Number of bugs Lines of code
GCC 1999 2020 76910 69784 10 MLOC
Wireshark 2005 2020 16609 13420 4.6 MLOC
Jira Server 2002 2020 46097 21609 Unknown
Confluence 2003 2020 42934 25841 Unknown
MongoDB Core Server 2009 2020 50147 22593 5.4 MLOC
Evergreen 2013 2020 10545 2688 476 KLOC
ShenYu 2019 2022 1257 279 108 KLOC
YuniKorn 2020 2022 1138 284 31 KLOC

Fig. 7. Empirical evaluation steps.

5. Mining repositories (RQ2)

RQ2 investigates whether the BT process smells we proposed in our taxonomy exist in bug repositories. To answer this research question, each BT process smell is empirically evaluated by mining the bug repositories of all eight projects. The details about the dataset, the data cleaning procedure, and the mining process are given in the following subsections.

5.1. Dataset

We analyzed eight large-scale software projects that have a publicly available issue tracking system. The projects are GCC,4 Wireshark,5 Confluence,6 Jira Server,7 MongoDB Core Server,8 Evergreen,9 ShenYu,10 and YuniKorn.11 While selecting the projects, in addition to the data availability criteria, we considered tool and organization diversity; the projects belong to five different organizations (Atlassian, Mongo, GNU Project, Wireshark Team, and Apache) and use three different issue tracking tools (Jira, Bugzilla, and GitHub Issues).
We utilized the Perceval tool [66] to fetch the issue tracking histories of those projects. When fetching the histories, we set a time limit (up to December 2020) for the older projects but did not set a limit for the recent projects (ShenYu and YuniKorn). The issue tracking system of Wireshark was migrated from Bugzilla to GitLab in August 2020; thus, the latest issue of the Wireshark project is from August 2020. The starting dates are given in Table 3. We also share the project history datasets in our replication package.12

4 https://gcc.gnu.org/bugzilla/
5 https://bugs.wireshark.org/bugzilla/
6 https://jira.atlassian.com/projects/CONFSERVER
7 https://jira.atlassian.com/projects/JRASERVER
8 https://jira.mongodb.org/projects/SERVER
9 https://jira.mongodb.org/projects/EVG
10 https://github.com/apache/incubator-shenyu/issues
11 https://issues.apache.org/jira/projects/YUNIKORN
12 https://figshare.com/s/ca6e8ac4146b9d4e1cac

5.2. Data cleaning and preprocessing

After downloading the histories of the issue tracking systems, we extracted the bug reports. An issue may fall into one of several categories, such as feature requests, bug reports, and enhancements. We only processed bug reports and removed other types of issues from our dataset. The default issue type in Bugzilla projects is a bug. In Jira, we only included the issues whose type is bug or defect. In GitHub Issues, we only considered the issues labeled as bugs or defects. The total numbers of bugs and issues are shown in Table 3.
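As an illustration of this pipeline, the sketch below fetches GitHub issues with Perceval and keeps only bug-labeled ones. Perceval wraps the raw payload of each item under a ''data'' key; the token and label values are placeholders, and per-tracker details (Jira types, Bugzilla defaults) would differ.

```python
# Sketch of fetching and filtering one project's issues with Perceval.
# Token and label names are placeholders; adjust per project/tracker.
from perceval.backends.core.github import GitHub

repo = GitHub(owner="apache", repository="incubator-shenyu",
              api_token=["<YOUR_GITHUB_TOKEN>"])

bug_reports = []
for item in repo.fetch():                      # yields one dict per issue
    issue = item["data"]                       # raw GitHub payload
    labels = {label["name"].lower() for label in issue.get("labels", [])}
    if labels & {"bug", "defect"}:             # keep only bug-type issues
        bug_reports.append(issue)

print(f"{len(bug_reports)} bug reports kept")
```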
5.3. Mining bug tracking process smells

Following the retrieval of project histories and the filtering of irrelevant issue types, we run the smell detection scripts, which generate a matrix of bugs and process smells. In the matrix, each row corresponds to a bug, and the columns represent the process smells in addition to bug metadata such as the unique bug identifier and the creation date. The flow of the evaluation is visualized in Fig. 7.
Based on the generated matrix, we analyze the density of the smells for each project. Table 4 displays the total number of bug reports that have the corresponding process smells for each project. The True column expresses how many of the eligible bugs have the corresponding process smell, the False column expresses how many of the eligible bugs do not have it, and the NA (not applicable) column expresses how many bugs are not eligible for that process smell. A bug report is marked as NA if it does not hold the precondition of the corresponding process smell; all preconditions are given in the smell detection methods in Section 4. For example, to be considered for the Assigned to a Team smell, a bug report must be assigned: in the Evergreen project, 637 bug reports are assigned to a team, 2006 bug reports are assigned but not to a team, and 45 bug reports are assigned neither to a team nor to a person and are therefore marked as NA.
A visual demonstration of Table 4 is given in Fig. 8. It shows the ratio of process smells for each project, where the smell ratio is normalized according to the number of eligible bugs. For example, in the Evergreen project, 637 bugs are assigned to a team, and the total number of bugs that are eligible for this smell is 637 + 2006 = 2643. Thus, the ratio is 637∕2643 = 0.24, which is shown in lighter blue in the figure.
Table 4
Number of bug reports that have the corresponding smells for each project. Each cell gives False/True/NA counts. B: Bugzilla, J: Jira, G: GitHub Issues.

Smell                      Confluence (J)    Evergreen (J)   GCC (B)             Jira Server (J)   MongoDB Server (J)   Wireshark (B)    ShenYu (G)   YuniKorn (J)
Assigned to a team         10917/0/14924     2006/637/45     23325/0/46459       6962/0/14647      17953/2188/2452      219/0/13201      46/0/233     255/0/29
Ignored                    741/25100/0       823/1864/1      42283/27501/0       588/21021/0       10487/12106/0        9967/3453/0      260/19/0     103/180/1
Missing environment        24010/1831/0      2256/432/0      69784/0/0           20415/1194/0      20270/2323/0         13420/0/0        21/258/0     234/50/0
Missing priority           25841/0/0         2688/0/0        69784/0/0           21609/0/0         22591/2/0            13420/0/0        0/279/0      284/0/0
Missing severity           8494/17347/0      0/2688/0        69784/0/0           8066/13543/0      0/22593/0            13420/0/0        0/279/0      0/284/0
No comment                 19854/2951/3036   1621/306/761    57660/8/12116       13299/4492/3818   20612/1135/846       12510/27/883     192/63/24    181/44/59
No link to commit          0/0/25841         1433/331/924    3997/34223/31564    0/0/21609         9495/3589/9509       2677/5285/5458   156/99/24    211/0/73
Not referenced duplicate   2302/456/23083    182/51/2455     8277/0/61507        2124/230/19255    2499/434/19660       2972/0/10448     1/1/277      16/5/263
Reassignment               24772/1069/0      2487/201/0      69196/588/0         19952/1657/0      19801/2792/0         13416/4/0        279/0/0      277/7/0
Reopen ping pong           24661/1180/0      2562/126/0      66750/3034/0        20855/754/0       21630/963/0          12775/645/0      272/7/0      274/10/0
Resolver is not assignee   8019/2821/15001   1867/508/313    18662/3342/47780    5215/1172/15222   16195/3139/3259      88/123/13209     9/37/233     74/162/48
Unassigned                 6225/2665/16951   1316/6/1366     21110/16696/31978   5759/1975/13875   12733/351/9509       188/7656/5576    46/209/24    185/1/98

Fig. 8. Heatmap of smell ratio. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 9. Smell ratio change of all projects by year.
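The normalization used in Fig. 8 can be reproduced from the bug-smell matrix in a few lines; the sketch below assumes a pandas DataFrame holding one 'True'/'False'/'NA' label per bug and smell, mirroring the columns of Table 4, and uses the Evergreen example above.

```python
import pandas as pd

# Hypothetical slice of the bug-smell matrix: one label per bug and smell.
matrix = pd.DataFrame({
    "assigned_to_a_team": ["True"] * 637 + ["False"] * 2006 + ["NA"] * 45,
})

def smell_ratio(column: pd.Series) -> float:
    """Ratio of smelly bugs among eligible (non-NA) bugs, as in Fig. 8."""
    eligible = column[column != "NA"]
    return (eligible == "True").mean()

print(round(smell_ratio(matrix["assigned_to_a_team"]), 2))  # 0.24
```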

Since the Atlassian projects (Jira Server and Confluence) are closed-source, we could not find any link from their issue tracking systems to a version control system. Therefore, we mark 'No Link to Commit' as not applicable to those projects, and the corresponding cells in Fig. 8 are left blank.
We also evaluated how the number of BT smells changes over time. Fig. 9 shows the change of the smell ratio for each project over the years. It can be observed that the number of process smells tends to decrease over time for almost all projects. To understand the possible reasons for the decrease, we investigated the change of all BT process smells in a project. For instance, the No Link to Bug-Fixing Commit smell shows a sudden drop in the Wireshark project starting from 2013, as shown in Fig. 10. While searching the mailing lists, we found out that Wireshark developers adopted the Git and Gerrit tools in December 2013.13 Links between commits and bugs are connected after the adoption of those tools.
Another decrease in the number of BT process smells can be observed in the Confluence project. Fig. 11 shows how the number of Missing Severity smells decreases over time. After Atlassian added a new severity field to their projects in August 2016,14 the ratio of this process smell declined rapidly. However, the decline does not begin in 2016: the drop before 2016 probably depends on the lifetime of the bugs, because we count a bug as belonging to the year it was created. In other words, even though a bug was created before 2016, its severity field may be updated after 2016, and that is why we observe the decline.
For the same project, no smell data is available for the Resolver is not Assignee smell after 2018. To understand the reason behind this, we analyzed the bug transition history of this project before and after 2018. Our analysis shows the project stopped using the Resolved state, and bugs are no longer transitioned to it.

13 https://www.wireshark.org/lists/wireshark-dev/201312/msg00217.html
14 https://www.atlassian.com/blog/announcements/realigning-priority-categorization-public-bug-repository

5.4. Comparison of bug tracking tools

The results of the empirical analysis are given in Table 4, which shows that each BT process smell occurs with varying ratios in all projects. For two Jira projects (Confluence and Jira Server), No Link to Bug-Fixing Commit gives no results: due to the privacy policy of these projects, we could not access the commit links. In three of the Jira projects (Evergreen, MongoDB Core Server & YuniKorn), there is no severity field, while in the other two Jira projects (Confluence and Jira Server) severity is defined via the custom field 'Symptom Severity'. Since severity is a mandatory field that needs to be set while submitting a bug report in the Bugzilla projects, their smell ratio is 0%. In GitHub Issues, there is no separate field for priority and severity; developers mention them using labels. In the ShenYu project, the smell ratio for 'Missing Severity' and 'Missing Priority' is all True because developers do not use any labels to specify the priority and severity of the bug. We also observed that the ratio of closed–reopen ping pong is comparatively lower than that of other smell categories, as Fig. 8 shows. This potentially indicates that reopening bugs is avoided during the BT process, as it increases maintenance costs [65].
Fig. 12 shows the percentage of bugs affected by multiple smells. The numeric values from 0 to 6 indicate the number of smells. The bugs in the Evergreen and MongoDB Core Server projects have at least one smell, and no bug has more than six smells in any project. Fig. 12 implies that the majority of bugs have one or two smells.

5.5. Time-based analysis

We also examined how the number of BT process smells changes over time; Fig. 9 shows the change of the process smell ratio for each project over the years. Looking at these results with respect to the BT tools, we can say that the tools evolved and became more advanced over time. As already mentioned, for obtaining the severity of Jira projects, the custom user-defined field Symptom Severity was analyzed, and some Jira projects such as MongoDB Core Server, YuniKorn & Evergreen lack the severity field, as shown in Table 4. Overall, best practices for the BT process have been adopted over the years. Fig. 9 shows that some projects, like GCC & MongoDB Core Server, were slow in adopting best practices: the practices followed within the organizations behind these projects evolved slowly over time.


Fig. 10. Smell ratio change of the Wireshark project.

Fig. 11. Smell ratio change of the Confluence project.

Fig. 12. Percentage of bugs affected by different numbers of smells.


Table 5
Survey results.

Smell name                        Do you agree with the definition   Were you aware of this smell
                                  of the BT process smell?           before our survey?
Unassigned bugs                   96.7%                              80.0%
No link to bug-fixing commit      93.3%                              66.7%
Ignored bugs                      96.7%                              86.7%
Bugs assigned to a team           83.3%                              60.0%
Missing priority                  83.3%                              76.7%
Not referenced duplicates         93.3%                              73.3%
Missing environment information   100.0%                             76.7%
Missing severity                  93.3%                              63.3%
Reassignment of bug assignee      80.0%                              66.7%
No comment bugs                   80.0%                              66.7%
Non-assignee resolver of bug      66.3%                              56.7%
Closed–reopen ping pong           93.3%                              66.7%

Fig. 13. Distribution of answer to the question: How often do you encounter a process smell in your company?

In the other projects (Confluence, Jira Server, Evergreen, Wireshark, YuniKorn, and ShenYu), the best practices were adopted quickly over time to avoid bad practices during the BT process.

6. Survey results (RQ3)

To explore the perception of software practitioners about our proposed taxonomy of bug tracking process smells, we conducted a survey with 30 software practitioners. In our survey, each process smell is defined in detail, along with a real-life example. The respondents are then asked a series of questions related to each process smell.
The majority of respondents agree with our list of BT process smells, according to the survey results (Table 5). We asked our survey participants how often they encountered each smell during the BT process. To get an expert opinion about the potential root causes of all the process smells, we enlisted the potential root causes and asked practitioners to choose among them according to their experience; if we missed any factor, they could tell us in open-ended questions. Similarly, for the impacts of BT process smells, we presented our participants with a list of potential impacts and asked them to write any other impact they could think of in open-ended questions. Moreover, we asked the practitioners whether they were aware of these process smells before (Table 5) and whether any actions are taken in their companies to prevent the occurrence of these process smells. Furthermore, we asked them about the thresholds for some of our process smells and adjusted our thresholds according to the practitioners' feedback. The distributions of responses to the Likert-scale questions were also used to demonstrate the survey members' differing viewpoints, which are given in Figs. 13 and 14.
This section uses quotes from the open-ended survey questions to express practitioners' perceptions of all the process smells. Moreover, based on the survey, we mention the actions taken by different companies to avoid the occurrence of process smells. We explain the perception of practitioners for each process smell in the following subsections. Each subsection covers (1) respondents' reasons for disagreement with the process smell; (2) how often they encounter the process smell during the BT process; (3) actions taken in companies to avoid the smell; and (4) possible root causes (the number in brackets shows the number of responses), along with additional comments from the respondents.

6.1. Unassigned bugs

29 out of 30 respondents agree with this process smell, which implies that this process smell practically exists within software development companies. The one practitioner who does not agree with this process smell states his reason as: ''I only came across unassigned bugs when the bug report was a duplicate/not needed or the bug was not reproducible. It's possible that such bugs were resolved during triage (for example by the PM) so I don't think they are indicators of wrong practice''.
Afterward, we asked practitioners how often they encountered unassigned bugs during the BT process; most of the practitioners said 'Rarely' (Fig. 13). To learn about the actions taken against this process smell in different companies, we asked our respondents if they are aware of any actions taken to avoid this process smell; 18/30 respondents said 'Yes' (Fig. 14). Some of those actions are mentioned below:
''Defined SOPs/steps to report bugs''
''This doesn't happen in our company. Assignee is a mandatory field to change the status of the bug to assign''.
''Unassigned bugs cannot progress to certain life-cycle stages''


Fig. 14. Distribution of answer to the question: Are there any actions taken to prevent smell in your Company?

''Bugs are monitored by team leads and are assigned to developers on a daily basis. A red flag is raised by the manager if unassigned bugs are in a bulk''
''Bug cannot be opened without an assignee. At first, it should be assigned to a technical lead or product architect''
We asked practitioners for their expert opinion about the potential root causes of this smell; some of the reported root causes are: availability of a developer (11/30), time pressure on the triager (14/30), and the triager could not find an expert developer for the given bug (12/30). In open-ended questions, additional root causes given by practitioners are: multiple developers working on development, and wrong risk management/planning or insufficient process rules.

6.2. No link to bug-fixing commit

28 out of 30 respondents agree with this process smell. One of the reasons for not agreeing with the process smell is: ''Usually developers put version numbers (which include the build number from the CI platform) in the issue details which is used as a reference''.
The majority of respondents said they encountered this process smell 'Rarely' during the BT process (Fig. 13). 20/30 respondents said there are actions taken in their companies to avoid this process smell (Fig. 14). Some of those actions are mentioned below:
''Configuration management department set hooks, in this way, if the developer does not indicate issue number such as Jira#Project_Abbv-Issue_Number in the commit message, the system does not allow to push his code to a remote repository''.
''Our tracking system automatically fetches the related commit since we use the ticket ids in our commit descriptions''.
''We have imposed that bugs cannot be fixed without a proper commit branch, message, and pull request''
We asked practitioners for their expert opinion about the potential root causes of this smell; some of the reported root causes are: the developer forgets to mention the commit link while fixing the bug

6.3. Ignored bugs

[...] a year, other) 17 respondents voted for 3 months, while some of the respondents gave their own suggestion of 2 weeks. We have changed our analysis according to the survey responses, and the threshold is now 3 months. Moreover, in an open-ended question, respondents mentioned what the time limit for ignored bugs depends on:
''Depends on the content of the bug, the effect of it on the code, the project phase/urgency, the impact of the bug''
The majority of the respondents selected 'Rarely' when we asked how frequently they encountered ignored bugs (Fig. 13). Regarding actions taken in companies for avoiding ignored bugs, 17/30 respondents responded positively (Fig. 14), and some of those actions are as follows:
''All bugs are regularly reviewed by product managers. (Though they are not that good at pruning the bug list)''.
''We deploy configuration control boards (CCB) which go through bugs and other issues for resolutions, possible state changes, and reassignments when needed''.
We asked practitioners for their expert opinion about the potential root causes of this smell; some of the reported root causes are: incorrect severity indication (11/30), vague or inadequate bug description (17/30), incorrect prioritization (17/30), and overlooked bug (12/30). One other potential root cause mentioned by one of the respondents is that the reported bug may not really be a bug (faulty triaging).

6.4. Bugs assigned to a team

25 out of 30 respondents agree with our smell definition. Some of the reasons of those who do not agree with this process smell are as follows: ''I believe it is not a bug smell because those kinds of bugs are generally assigned to that particular team by other teams. It's the team's responsibility to assign to a specific member and it might take some time to do so''.
''A bug detected in a component can be assigned to the relevant team due to collective code ownership. The team itself can progress this bug in
(22/30), Committing the link is not straightforward in the BT system the sprint plan. In the sprint plan, a volunteer can take ownership of this
(11/30), and Weak understanding of the BT system (10/30). bug’’.
Some of the potential root causes suggested by respondents in open- ‘‘I believe that a bug might be assigned to the teams (in certain circum-
ended questions are; bug tooling not enforcing it and multiple issues are stances). It might not be a very straightforward task or even people from
resolved at same the time and not all of them are code bugs. different areas in the same team (f/e, b/e, etc.) might have to be involved
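The hook-based safeguard quoted above can be automated with a few lines of code. The following is a minimal illustrative sketch of a Git commit-msg hook in Python; the issue-key pattern and messages are our own assumptions, not the respondent's actual setup:

    #!/usr/bin/env python3
    # Minimal commit-msg hook sketch: reject commits whose message lacks
    # an issue key such as PROJ-1234. Git invokes this script with the
    # path to the commit message file as its first argument.
    import re
    import sys

    ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # assumed Jira-style key format

    def main() -> int:
        if len(sys.argv) < 2:
            return 0  # not invoked by git; nothing to check
        with open(sys.argv[1], encoding="utf-8") as f:
            message = f.read()
        if ISSUE_KEY.search(message):
            return 0  # accept the commit
        sys.stderr.write("Commit rejected: no issue key (e.g. PROJ-123) in message.\n")
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Installed as .git/hooks/commit-msg (or enforced server side), such a check makes the traceability link a by-product of committing rather than a manual step.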
6.3. Ignored bugs

29 out of 30 respondents agree with this process smell. The ones who do not agree with the process smell stated ‘‘Bugs might be reported that have no real impact on the functionality of the software. Such bug reports should be pruned once in a while’’.

For ignored bugs, we also asked the practitioners about the threshold value we use in our empirical analysis to declare a bug as an ignored bug. Given the multiple options (3 months, 6 months, 1 year, other), 17 respondents voted for 3 months, while some of the respondents suggested 2 weeks. We have changed our analysis according to the survey responses, and the threshold is now 3 months. Moreover, in an open-ended question, respondents mentioned that the time limit for ignored bugs depends on:
‘‘Depends on the content of the bug, the effect of it on the code, the project phase/urgency, the impact of the bug’’

The majority of the respondents selected ‘Rarely’ when we asked how frequently they encountered ignored bugs (Fig. 13). Regarding actions taken in companies for avoiding ignored bugs, 17/30 respondents responded positively (Fig. 14), and some of those actions are as follows:
‘‘All bugs are regularly reviewed by product managers. (Though they are not that good at pruning the bug list)’’.
‘‘We deploy configuration control boards (CCB) which go through bugs and other issues for resolutions, possible state changes, and reassignments when needed’’.

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Incorrect severity indication (11/30), Vague or inadequate bug description (17/30), Incorrect prioritization (17/30), and Overlooked the bug (12/30). One other potential root cause mentioned by one of the respondents is: maybe the reported bug is not really a bug (faulty triaging).
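Under the 3-month threshold adopted above, detecting this smell in mined bug reports reduces to a date comparison. A minimal sketch follows; the record layout and field names are assumptions made for illustration, not our actual mining pipeline:

    from datetime import datetime, timedelta, timezone

    IGNORED_AFTER = timedelta(days=90)  # survey-adjusted threshold: 3 months

    def is_ignored(bug: dict, now: datetime) -> bool:
        """A still-open bug with no activity for the threshold period
        counts as an ignored bug (field names are illustrative)."""
        if bug["resolved_at"] is not None:
            return False  # resolved bugs are, by definition, not ignored
        return now - bug["last_activity_at"] > IGNORED_AFTER

    bug = {"resolved_at": None,
           "last_activity_at": datetime(2022, 1, 10, tzinfo=timezone.utc)}
    print(is_ignored(bug, datetime(2022, 6, 1, tzinfo=timezone.utc)))  # True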

6.4. Bugs assigned to team

25 out of 30 respondents agree with our smell definition. Some of the reasons given by those who do not agree with this process smell are as follows: ‘‘I believe it is not a bug smell because those kinds of bugs are generally assigned to that particular team by other teams. It’s the team’s responsibility to assign to a specific member and it might take some time to do so’’.
‘‘A bug detected in a component can be assigned to the relevant team due to collective code ownership. The team itself can progress this bug in the sprint plan. In the sprint plan, a volunteer can take ownership of this bug’’.
‘‘I believe that a bug might be assigned to the teams (in certain circumstances). It might not be a very straightforward task or even people from different areas in the same team (f/e, b/e, etc.) might have to be involved in the bug solving process. I don’t see this scenario as a smell. If you consider the accountability, you still assign the ticket to some developer(s). Actually, even better, you account for more than one developer, the whole team, which is a safer method. Of course, not all bugs should be assigned to the teams. But I think there might be some bugs that can be solved by multiple devs’’.

The majority of respondents (11/30) said they ‘Never’ encountered this smell during the BT process (Fig. 13). We also asked practitioners if it is true, according to their perception, that bugs assigned to a team are not fixed or do not receive any comments in a short period, resulting in a long time to fix. 10/30 respondents agree with it, and 10/30 respondents said ‘Maybe’. Regarding actions taken in different companies for avoiding this process smell, 12/30 said ‘Yes’ (Fig. 14), and the actions taken are as follows:
‘‘Having weekly ticket refining meetings where tickets are assigned to the team members before the start of the sprint planning’’.
‘‘We can’t assign bugs to a team. They need to be assigned to a single person. Usually that’s the project lead who can then propagate it further to their team members’’

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Mistakenly assigned by a triager (11/30), Unavailability of developer (18/30), and Expertise of triager while assigning bug reports (system knowledge) (12/30).

6.5. Missing priority

25 out of 30 practitioners believe that missing priority corresponds to bad practice in the bug tracking process. One of the reasons for not considering it a smell is ‘‘There are times when all of the bugs are declared high priority. And the developer has to figure out on its own which ones should be fixed on priority’’.

The majority of respondents said they have ‘Rarely’ encountered this process smell, shown in Fig. 13. Afterward, we also asked practitioners if there are any actions taken in their companies to avoid this process smell; 14/30 respondents said ‘Yes’ (Fig. 14). Some of those actions are mentioned below:
‘‘We have made it mandatory to assign a priority tag while posting a bug’’
‘‘PMs always assign priorities if missed by the QA engineer’’
‘‘CCB: Group of leads comes together to prioritize the bugs’’

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Expertise or experience of triager, Wrong bug severity (10/30), and Overlooked by a triager (15/30).

6.6. Not referenced duplicates

28 out of 30 respondents believe that this corresponds to bad practice in the bug tracking process. The majority of respondents said they have ‘Rarely’ encountered this process smell (Fig. 13). 10/30 respondents told us the actions their companies are taking to avoid this process smell (Fig. 14); some of them are as follows:
‘‘Review of bug reports’’.
‘‘Use of similar issues finding capability of BTS, mandatory selection of bugs in case of ‘‘is duplicate’’ link selected’’.
‘‘Grouping the different types of bugs in BTS. So that you would need to search fewer of the previous bugs to find the duplicate if it exists’’.

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Person marking duplicate is new to the system (17/30), Person marking duplicate is unaware of previous bugs (24/30), and Poor search feature of the bug tracking system to find the master bug report (11/30).

Another important potential cause stated by one of the respondents is ‘‘One bug can cause multiple features to crash. Then each feature failure can become one separate bug, hence resulting in a duplicate bug’’.

6.7. Missing environment information

All 30 respondents believe that missing environment information in bug reports corresponds to bad practice in the bug tracking process. The majority of respondents (Fig. 13) said they have ‘Rarely’ encountered this smell during the BT process. Afterward, we also asked our respondents if there are any actions taken in their companies to avoid this process smell; 18/30 respondents said ‘Yes’ (Fig. 14). Some of those actions are mentioned below:
‘‘We try to mention how to reproduce the bug with the description’’
‘‘Bug reports are being reviewed’’
‘‘It is mandatory to add environment info while posting bugs’’
‘‘The test tool can automatically collect environment information’’

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Reporter forgets to mention (21/30), Reporter does not know the environment details (20/30), and Reporter is short on time to mention this information (18/30).

6.8. Missing severity

28 out of 30 respondents believe that missing severity in bug reports corresponds to bad practice in the bug tracking process. The other respondents mentioned their reasons for not considering it a process smell, which are as follows: ‘‘People give high priority when severity is high. Therefore, they end up using only one’’.
‘‘We only have ‘‘priority’’ fields in our issues (using Jira, I guess it is modifiable), and I have never realized that it is missing until now’’.

Most of the respondents said they have ‘Rarely’ encountered this smell (Fig. 13). Regarding actions taken in organizations for avoiding missing severity, 11/30 said ‘Yes’ (Fig. 14), and some of those actions are:
‘‘Bug reports are reviewed and re-prioritized regularly’’.
‘‘A group of experienced staff comes together to fix the severity of the open bugs’’.
‘‘The severity field is mandatory while creating a bug’’

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: A large number of bug reports submitted every day increases triagers’ workload (20/30), and Triager overlooks (16/30).

6.9. Reassignment of bug assignee

24 out of 30 respondents believe that reassignment of the bug assignee corresponds to bad practice in the bug tracking process. A few of the reasons for not considering it a smell, as stated by respondents, are: ‘‘This becomes a smell if a bug is reassigned more than three times. Before that, some organizations have a trend in which all bugs are assigned to the lead first who then reassigns them to the appropriate developer’’
‘‘Bug reassignment could be a legit action taken in case of unavailability of developer or complex nature of the bug’’.
‘‘My organization actually uses reassignment as a marker of progress (reporter assigns to triager, triager assigns to dev, dev assigns to reporter/tester after fixing it and so on). Not sure if this is a good way to go about it but so far it seems to work’’
‘‘I think there are valid reasons for a bug report to be reassigned, maybe the initial developer is not experienced enough to solve the issue, or they had to be assigned to another report or project, and therefore can not continue working on this one or the bug is just difficult to solve’’

In our survey, we also asked about the threshold for counting this process smell, i.e., how many times a bug assignee should be reassigned before it is considered a smell. Previously we were considering a single reassignment as a process smell, but this was one of the potential threats to the validity of our analysis. Therefore, we decided to take expert opinion regarding the threshold value. As per the survey results, 13 respondents said the threshold should be twice and 12 respondents said it should be thrice. However, respondents also mentioned that the threshold depends on factors like the size of the team and how the bug tracking life cycle is laid out in an organization. As per the majority of results, we select ‘Twice’ as the threshold for this process smell.

Most of the respondents said they have encountered this process smell ‘Rarely’ (Fig. 13). Regarding actions taken in companies for avoiding reassignment of bugs, 6/30 said ‘Yes’ (Fig. 14), and some of those actions are:
‘‘Bugs are reassigned only under very special circumstances’’
‘‘Discussion in the team of that particular project and assign the bug to the respective developer’’

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Reassignment of some fields causes other fields to be reassigned (9/30), Triager might not know the suitable developer for the bug (22/30), No developer recommendation system integrated with the bug tracking system (15/30), New bug report correction (when a bug report is submitted, some fields could be wrongly assigned) (11/30), and Admin batch operations (administrators also reassign some fields in the bug reports to better organize the project) (7/30).

One of the respondents also mentioned another potential root cause, which is the availability of the developer (the initially assigned developer had to switch to another task/took a vacation).

6.10. No comment bugs

24 out of 30 respondents believe that bug reports with no comments correspond to bad practice during the bug tracking process. Some of the reasons for not agreeing with our smell definition are as follows:
‘‘Some bugs are trivial and require no comments’’.
‘‘I have rarely encountered a bug report which does not have any descriptions. Also, I feel like verbose descriptions having extra/unnecessary details are almost always better than having shorter descriptions which are missing crucial details’’
‘‘I think some of that information can be found on the description field — in my experience, the lack of this field causes more issue than not having comments on the report’’

As can be seen from Fig. 13, the majority of respondents say they encountered this process smell ‘Sometimes’. We also asked our respondents if there are any actions taken in their organizations to avoid this process smell; 9/30 respondents said ‘Yes’ (Fig. 14). Some of those actions are mentioned below:
‘‘Developers and QA engineers have regular phone calls’’
‘‘Comments are mandatory for a developer while marking the bug as fixed so that the actual fix can be traced in future and its area of impact be defined’’
‘‘Team members are constantly reminded that comments are to be added by day end’’.

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Due to incorrect severity and priority, the bug is being ignored (14/30), Developers/contributors forget to comment (17/30), and A vast number of bug reports often contain excessive description and comments, which may become a burden to the developers (13/30).

6.11. Non-assignee closer of bug

19 out of 30 respondents believe that the described scenario about the non-assignee resolver of a bug corresponds to bad practice in the bug tracking process. The reasons provided by respondents for not agreeing with the smell definition are: ‘‘Some teams are so well integrated that they can easily take over bugs from other members of the team’’
‘‘Some companies work that way, sometimes due to lots of new hires’’
‘‘It’s OK if that someone who closed the bug is the team lead, mentor, or manager of the developers. Sometimes, in the meeting, the team lead shares the screen and does these actions in front of the whole team. Also, ‘‘The bug is assigned to the developer for resolution by the triager. If the developer is not able to solve it, he may reassign or toss it to other developers’’: In this case, s/he would assign it to someone else instead of closing it’’.
‘‘Bugs can be resolved by the QA engineer or the developer. I don’t think this is an indicator of bad practice in a company but could be considered as such in open source projects’’.

Fig. 13 shows that the majority of respondents said they encountered this process smell ‘Sometimes’ during the BT process. Afterward, we also asked our respondents if there are any actions taken in their companies to avoid this process smell; 5/30 respondents said ‘Yes’ (Fig. 14). Some of those actions are mentioned below:
‘‘PMs make sure the assigned developers are closing bugs’’
‘‘Developers are encouraged to mark the bug as fixed themselves’’
‘‘It is a good scenario to assign it to the same person, but in real life, it is necessary to proceed with redundancy’’.

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Assignee forgot to close the bug (20/30), Bug was originally resolved by someone else (16/30), In most big projects, closing or reopening a bug is reserved for higher-level or administrative roles that only the core developers have (11/30), The bug is assigned to the developer for resolution by the triager, and if the developer is not able to solve it, he may reassign or toss it to other developers (14/30), and In quest of finding the right expertise of developer (5/30).

6.12. Closed reopen ping-pong

22 out of 30 respondents believe that the described scenario about the closed–reopen ping-pong of a bug corresponds to bad practice in the bug tracking process. The reason for not agreeing with this process smell is stated by respondents as ‘‘I agree that this is a smell but I don’t think this is completely related to BTS. For example, ‘‘Insufficient unit testing’’ is not even in the scope of BTS. In case the bug occurs again, you would reopen the old one or open a new bug (duplicate). Either way, you say that it is a smell. What should developers do in this case? This happens in real life. Then, re-occurring bugs are always a smell’’.

In the survey, we also asked about the threshold for considering it a smell, i.e., whether the respondents agree with our assumption that if the bug is reopened at least once it is a smell, or whether we should change this threshold. The majority (17/30) of the respondents said once is fine, while others say once is a very strict threshold and maybe twice is better. Some of the other concerns of respondents were: ‘‘It depends on the severity of the bug’’
‘‘1 is good and maybe a frequency can be added as a parameter as well. reopened bugs in 3 months for example. Some bugs are difficult to generate by the Devs/QA’’.

The majority of respondents said they ‘Rarely’ encountered this smell during the BT process (Fig. 13). Regarding actions taken in organizations for avoiding reopened bugs, 10/30 said ‘Yes’ (Fig. 14), and some of those actions are:
‘‘If a bug is reopened, the developer and tester shall come together to assess the bug’’
‘‘Mandatory unit tests that run automatically before each commit, the practice of implementing and updating unit tests, bug reports containing detailed descriptions, communication between staff’’.
‘‘If a bug is reopened 2–3 times then a concern is raised so that it does not happen again’’

We asked practitioners for their expert opinion about the potential root causes of this smell; some of the potential root causes are: Insufficient unit testing (26/30), Ambiguous bug specifications (20/30), Changed bug scope (15/30), Poorly/incorrectly fixed bugs (26/30), and Tester doing a really bad job (if the bug was not fixed, they should not close it but reactivate it back to the developer) (16/30).
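The threshold discussion above translates directly into a detection rule: count reopen transitions in a bug's state history and compare them against the agreed threshold. A minimal sketch follows; the event format is an assumption for illustration:

    def reopen_count(status_history: list[tuple[str, str]]) -> int:
        """Count closed-to-reopened transitions in a bug's state history,
        given as (from_status, to_status) pairs (illustrative format)."""
        return sum(1 for old, new in status_history
                   if old in {"Closed", "Resolved"} and new == "Reopened")

    # With the survey-backed threshold of one reopen, this bug is smelly:
    history = [("Open", "Resolved"), ("Resolved", "Reopened"),
               ("Reopened", "Resolved"), ("Resolved", "Closed")]
    assert reopen_count(history) >= 1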

Table 6
P-values for statistical tests of TTR for eight projects. Expected Results (Green Color), Unexpected Results (Yellow Color).
Confluence Jira Server GCC Evergreen MongoDB Server Wireshark ShenYu YuniKorn
Assigned to a Team NaN NaN NaN 1.40e−15 5.46e−258 NaN NaN NaN
Ignored 6.27e−17 3.09e−08 0.00e+00 2.52e−05 2.10e−122 0.00e+00 1.45e−08 1.65e−03
Missing Environment 1.00e+00 1.00e+00 NaN 1.00e+00 1.00e+00 NaN 9.37e−01 4.40e−01
Missing Priority NaN NaN NaN NaN 6.16e−01 NaN NaN NaN
Missing Severity 1.00e+00 1.00e+00 NaN NaN NaN NaN NaN NaN
No Comment 1.00e+00 4.68e−98 9.94e−01 9.99e−01 1.67e−01 9.99e−01 1.00e+00 9.46e−01
No Link to Commit NaN NaN 2.91e−15 7.98e−01 1.43e−01 9.97e−01 1.06e−03 NaN
Not Referenced Duplicate 9.74e−01 9.44e−01 NaN 3.65e−01 9.63e−01 NaN 5.00e−01 2.16e−01
Reassignment 1.06e−27 2.41e−39 2.15e−62 4.53e−56 0.00e+00 5.62e−02 NaN 5.96e−03
Reopen Ping Pong 9.93e−97 4.54e−24 1.00e+00 1.09e−09 1.33e−81 7.67e−05 9.06e−02 5.97e−03
Resolver is not Assignee 2.25e−208 2.60e−67 5.98e−135 3.82e−14 1.82e−221 5.27e−02 8.93e−01 7.27e−01
Unassigned 1.00e+00 5.32e−01 1.17e−149 9.98e−01 8.35e−02 9.99e−01 3.32e−01 4.68e−02

Table 7
P-values for statistical tests of reopen count for eight projects. Expected Results (Green Color), Unexpected Results (Yellow Color).
Confluence Jira Server GCC Evergreen MongoDB Server Wireshark ShenYu YuniKorn
Assigned to a Team NaN NaN NaN 9.99e−01 9.95e−01 NaN NaN NaN
Ignored NaN NaN NaN NaN NaN NaN NaN NaN
Missing Environment 9.99e−01 9.99e−01 NaN 9.38e−01 9.99e−01 NaN 2.24e−01 7.40e−01
Missing Priority NaN NaN NaN NaN 6.17e−01 NaN NaN NaN
Missing Severity 4.38e−01 2.72e−06 NaN NaN NaN NaN NaN NaN
No Comment 1.00e+00 1.00e+00 7.26e−01 9.98e−01 1.00e+00 8.53e−01 4.06e−01 6.40e−01
No Link to Commit NaN NaN 9.99e−01 8.90e−01 1.92e−03 5.23e−02 9.11e−01 NaN
Not Referenced Duplicate 2.61e−01 6.87e−01 NaN 7.75e−01 6.07e−01 NaN 1.00e+00 1.00e+00
Reassignment 5.09e−125 7.89e−97 7.65e−28 5.08e−12 3.54e−39 4.51e−06 NaN 3.83e−15
Reopen Ping Pong 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 0.00e+00 1.18e−62 9.24e−64
Resolver is not Assignee 3.76e−38 3.23e−16 4.21e−30 1.09e−01 2.11e−10 6.44e−01 2.00e−01 2.15e−01
Unassigned 1.00 9.99e−01 9.99e−01 7.30e−01 9.91e−01 9.81e−01 9.58e−01 5.89e−01

7. Hypothesis testing (RQ4)

To statistically analyze the impact of process smells on time to resolution (TTR) and bug reopening count, we use statistical tests. The reason we choose TTR and reopening count for the statistical impact analysis, out of all other factors, is that these two parameters can easily be quantified from our dataset. For TTR, we calculated the total time (in seconds) from when the bug was opened to the time it was resolved and closed. Similarly, for the reopening count, we count how many times the bug is reopened by counting the number of reopened states of the bug during its life cycle. As per our analysis, some process smells affect the TTR of the bug. Hypothetically, bugs with process smells are likely to have greater resolution times and more reopens compared to bugs which have no process smells. However, some of the process smells have no effect on the resolution and reopening of the bug.

To statistically analyze the impact of process smells on the TTR and reopen count of bugs, we used the Mann–Whitney U test. We used a non-parametric statistical test because, when we visualized our data (using box plots), we observed that it is not normally distributed, and we are comparing two independent groups, i.e., bugs with a process smell and bugs with no process smell. We tested each alternative hypothesis proposed for the different BT process smells by setting the significance level 𝛼 = 0.05 (a 95% confidence level).

In Tables 6 and 7, the p-values for the statistical analysis of TTR and reopen count of bugs having process smells are given, respectively. A coloring scheme is used to show the expected and unexpected statistical results: green shows the expected results (the results that are in accordance with our hypothesis), while yellow represents the unexpected statistical results. NaN indicates that for this particular process smell we cannot run our statistical test because the two groups cannot be compared, i.e., either all bugs have the smell or none of the bugs has that particular smell.

7.1. TTR vs process smells

To analyze the effect of process smells on the TTR, we first calculated the required parameter for each bug. For TTR, we calculated the resolution time of the bug (in seconds), i.e., how long the bug took from opening to closing. Afterward, we defined our null and alternative hypotheses. Considering the process smells' definitions and their potential impacts on the software development process, some of the process smells are likely to have no effect on TTR. We proposed null and alternative hypotheses for each process smell, which are as follows:

𝐻0𝑎 = Bugs with ⟨smell type⟩ have no effect on TTR
𝐻1𝑎 = Bugs with ⟨smell type⟩ have greater TTR
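To make the test procedure concrete, the following sketch shows how such a one-sided Mann–Whitney U test can be run with SciPy; the same call applies later to reopen counts in place of TTR. The data layout and field names are illustrative assumptions, not our actual mining pipeline:

    from scipy.stats import mannwhitneyu

    ALPHA = 0.05  # significance level used throughout

    def test_smell(bugs: list, smell: str):
        """One-sided Mann-Whitney U test: is the TTR of bugs having `smell`
        stochastically greater than the TTR of bugs without it?"""
        with_smell = [b["ttr_seconds"] for b in bugs if smell in b["smells"]]
        without = [b["ttr_seconds"] for b in bugs if smell not in b["smells"]]
        stat, p = mannwhitneyu(with_smell, without, alternative="greater")
        return p, p < ALPHA  # reject H0a when p < alpha

    bugs = [  # toy records; one dict per mined bug report
        {"ttr_seconds": 864000, "smells": {"ignored"}},
        {"ttr_seconds": 7200, "smells": set()},
        {"ttr_seconds": 3600, "smells": set()},
    ]
    p_value, significant = test_smell(bugs, "ignored")

A one-sided alternative is used because the hypotheses are directional: 𝐻1𝑎 claims greater TTR for smelly bugs, not merely a difference.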

The effect of process smells on the resolution time of the bug is statistically significant, as we can observe from the p-values in Table 6. However, we also got some unexpected results for some of the process smells and projects. For ‘Missing Environment Information’, ‘Missing Priority’, and ‘Missing Severity’, the p-values are highlighted in yellow, which means we obtained unexpected p-values. The TTR should be higher if severity or environment information is missing, but the p-values we obtained are higher than 0.05, which means we cannot reject 𝐻0𝑎. However, if we look at the ‘Missing Environment’ smell ratio in Fig. 12, it is very low compared to other smells. Therefore, we cannot say that the effect does not exist; rather, the strength of the parameter could decrease. The same is the case with the bugs with no priority: the missing priority ratio is less than 1% for the MongoDB project. Bugs that do not have severity information could possibly take a longer time to resolve, as the developer would not have any information about severity. If we observe the p-values for ‘Unassigned Bugs’ for all projects except GCC and YuniKorn, they are greater than 0.05, which means we cannot reject 𝐻0𝑎. Hypothetically, unassigned bugs should have a higher TTR. However, it is possible that some bugs are of a trivial nature and are resolved immediately without being assigned to any developer.

For process smells like ‘No Link to Bug-Fixing Commit’ and ‘Not Referenced Duplicates’, the TTR is not likely to be affected; these process smells have no potential impact on the resolution of bugs. Therefore, the p-values for these two smell categories are higher than 0.05, showing that we are ‘accepting’ our null hypothesis (𝐻0𝑎), i.e., these two process smells have no effect on TTR. For the GCC project, the ‘No Link to Commit’ smell has an exceptionally high ratio of almost 90% (Fig. 12), which might be the reason for an unexpected statistical value.

For all other smells, i.e., ‘Bugs Assigned to Team’, ‘Ignored Bugs’, ‘Reassignment of Bug Assignee’, ‘Non-Assignee Resolver’, and ‘Closed–Reopen Ping-Pong’, TTR is ideally likely to be greater, and the p-values obtained are all less than 0.05, which means we can reject 𝐻0𝑎. They have statistically significant results with no exceptions, showing that bugs having these process smells are likely to have higher TTR. For ‘No Comment Bugs’, the TTR is not likely to be greater, as no comments mean less discussion and quick bug fixing, implying that the null hypothesis (𝐻0𝑎) should be accepted; the p-values obtained are all greater than 0.05 except for the Jira Server project.

7.2. Reopen count vs process smells

BT process smells could potentially have an effect on the reopening of bugs. To analyze whether this effect is statistically significant or not, we performed a statistical analysis using the reopen counts of bugs with process smells and bugs with no process smells. For the reopening count, we count how many times each bug is reopened during its life cycle. Afterward, we defined our null and alternative hypotheses, which are:

𝐻0𝑏 = Bugs with ⟨smell type⟩ have no effect on reopen count
𝐻1𝑏 = Bugs with ⟨smell type⟩ have greater reopen count

For ‘Ignored Bugs’, we cannot run a statistical test for reopening counts because, by definition, ignored bugs are bugs that are never resolved. Therefore, we cannot consider reopens in this scenario, and we represent it as NaN (Table 7). ‘No Link to Bug-Fixing Commit’, ‘Not Referenced Duplicates’, ‘Missing Priority’, and ‘Missing Severity’ process smells do not affect the reopen count of the bug, i.e., 𝐻0𝑏 should be accepted for these process smells. The p-values of ‘No Link to Bug-Fixing Commit’ for the MongoDB and Wireshark projects are unexpectedly low, which means we have to reject 𝐻0𝑏. Similarly, for ‘Missing Severity’ the p-values are less than 0.05, which means we have to reject 𝐻0𝑏.

For process smells like ‘Unassigned Bugs’, ‘Bugs Assigned to a Team’, ‘Reassignment of Bug Assignee’, ‘No Comments’, ‘Non-Assignee Resolver’, and ‘Closed–Reopen Ping-Pong’, the reopen counts are likely to be greater, i.e., 𝐻0𝑏 should be rejected: bugs having these process smells have higher chances of being reopened at a later time. The statistical results in Table 7 show that some of the p-values we obtained are unexpected, i.e., greater than 0.05, and we failed to reject the null hypothesis. For ‘Unassigned Bugs’ and ‘No Comment Bugs’, the p-values we obtained for all projects are greater than 0.05 and fail to support our alternative hypothesis. Unassigned bugs might be the ones that are of a trivial nature and are resolved straightforwardly without being assigned to any developer, and such bugs do not need reopens. The p-values for the ‘Non-Assignee Resolver’ process smell are unexpectedly greater for the Evergreen, ShenYu, YuniKorn, and Wireshark projects, failing to reject the null hypothesis.

‘Not Referenced Duplicates’ does not cause the reopening of the bug; therefore, we should accept 𝐻0𝑏 in this case. The p-values obtained are greater than 0.05, which is consistent with this expectation. Statistical analysis of reopening counts might not be very reliable: in many projects, there is a possibility that instead of reopening the same bug again and again, a new bug is opened. Therefore, we cannot draw a firm conclusion here.

8. Discussion

8.1. Context dependency of smells

The occurrence of BT process smells depends on various factors. In this section, we discuss the factors that play an important role in the occurrence of BT process smells. Firstly, the differences among BTSs can be one of the major reasons for having process smells in a project, as different organizations use different BT tools and every tool provides its own features. There are many well-known BT tools available, such as Jira, Bugzilla, and GitHub Issues, and each BT tool differs from the others. For example, Jira offers customization of fields, i.e., users can define/add custom fields according to their requirements for any project. They can make any field mandatory and delete any field as per their requirements. For example, the severity field is not defined in Jira by default. Therefore, if users do not add the severity field while using Jira as their BT tool, all of the bugs will be marked as having a missing severity smell. Whereas, if an organization is using Bugzilla as a BT tool, they have a pre-defined set of bug fields; in Bugzilla, severity is a mandatory field, and therefore this process smell would not occur. In short, the capabilities and the default settings of a BT tool are among the important factors causing process smells.

Secondly, organizational practices can be a potential factor in the occurrence of BT process smells, as organizations follow their own sets of rules and best practices for the BT process. For example, in some organizations bugs are marked resolved by team leads only, while in other organizations anyone who is working on the bug can mark the bug as resolved/closed. As one of our survey respondents writes in an open-ended question: ‘‘some organizations have a trend in which all bugs are assigned to the lead first who then reassigns them to the appropriate developer’’ (which causes reassignment of bugs). Moreover, organizations have defined their own bug states, state transitions, and paths that a bug should follow as per their requirements. Therefore, the organizational practices followed during project development could be a cause of the occurrence of BT process smells in these projects.

Thirdly, the different workflows of bug states among the projects could be an important factor. During our analysis, we have seen that different state diagrams are possible in different projects even if the projects use the same BT tool. We observed unusual transitions from one state to another (for instance, verified to open) during the bug life cycle. Even the projects that use Jira had state diagrams that are very different from each other. One reason could be that there is no fixed, defined path (or transition from one state to another) that a bug should follow during its life cycle.

Lastly, we intended to compare the projects in the maintenance phase with the newly developed projects (the ShenYu and YuniKorn projects). However, we did not observe any significant difference among the projects. The process smell ratios in the ShenYu project are relatively higher (e.g., for missing priority, missing severity, and missing environment information). However, we believe that is because of the difference in the BT tool, i.e., GitHub Issues, as previously discussed.

8.2. Implications for researchers

We believe that this study has several implications for researchers:

• We created a taxonomy and common terminology for bug tracking process smells, making it easier for researchers to share information.
• The taxonomy, as well as the overall classification process, aids in the identification of knowledge gaps in bug tracking process research. Future studies could focus on quantifying the impact of these bug tracking process smells on productivity or rework.
• We expect the bug tracking process smell taxonomy, like all other SE taxonomies, to grow over time, absorbing new knowledge. The bug tracking process smells are also linked to software development waste [67]. We intend to investigate the impact of bug tracking process smells on software development waste in the future.
• The original taxonomy can be used as a foundation for developing (semi-)automatic recommendation systems that mine software repositories to find bug tracking process smells (a sketch of such a detector is given after this list). These techniques are not just for detecting BT process smells; they may also be used to identify problematic practices in other stages of software development, such as the bug life cycle, testing, and continuous integration. Detecting improper practices at various stages can significantly improve the quality of the software development process.
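As a first step toward such tooling, several smells in our taxonomy can be detected with simple per-report rules. A minimal sketch over mined bug-report records follows; the field names are illustrative assumptions, and a real detector would populate them from the Jira, Bugzilla, or GitHub Issues data:

    def detect_smells(bug: dict) -> set:
        """Flag field-based BT process smells on a single mined bug report.
        The record layout is an illustrative assumption."""
        smells = set()
        if not bug.get("assignee"):
            smells.add("unassigned")
        if not bug.get("priority"):
            smells.add("missing_priority")
        if not bug.get("severity"):
            smells.add("missing_severity")
        if not bug.get("environment"):
            smells.add("missing_environment")
        if bug.get("reassignment_count", 0) >= 2:  # survey-backed threshold
            smells.add("reassignment_of_assignee")
        return smells

    report = {"assignee": "dev42", "priority": "High",
              "severity": None, "environment": "", "reassignment_count": 3}
    print(detect_smells(report))
    # -> e.g. {'missing_severity', 'missing_environment', 'reassignment_of_assignee'}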

8.3. Implications for practitioners

Software development relies heavily on the BT process. In order to fix bugs more efficiently, the BT process should be effective. While fixing a bug, the first and most important thing is to report the bug effectively; writing a good and effective bug report is a skill. Moreover, according to the survey results, the bug tracking process smells we proposed in our taxonomy are key acts that should be avoided in order to improve the bug tracking process. Furthermore, the results show that all of the BT process smells included in our taxonomy exist in various ratios. For a better bug tracking process and BTS, we have listed some practical advice for practitioners:

• Practitioners can utilize the proposed taxonomy to prepare BT guidelines (or update them if they already exist) to improve the effectiveness of the BT process. This type of guide will eliminate confusion, ensuring that everyone knows who is responsible for what, and ensure that everyone is aware of the procedures to follow. A bug should be required to follow a set path from step to step as it progresses through the organization. The existence of such guidelines may not ensure the elimination of all smells, but it can help to reduce the percentage of process smells.
• When utilizing a BTS, one of the first things practitioners need to do is set up the field templates. It is possible to achieve this by making key bug report fields obligatory (e.g., assignee, severity, priority, environment information). This method can help prevent many process smells.
• A professional BTS should be used, and newcomers should be trained on how to utilize a BTS efficiently. Workshops on the BT process should be held, and practitioners should be instructed on how to use the BTS and how to prevent the smells associated with the BT process.
• Moreover, practitioners can enhance their bug tracking by introducing smart tooling for bug tracking. For example, if the priority or severity is not set, a warning is raised for that particular bug. In the case of ignored bugs, the tool reminds the developers through email/Slack notifications that the bug is unattended.

9. Threats to validity

Although our analysis provides some useful and interesting results, there are a number of threats to our analysis's validity that must be examined.

9.1. Internal validity

There are four major risks to our analysis's internal validity.

Organizational Practices: As different organizations follow their own sets of rules and best practices, a few of our process smells (Bugs Assigned to Team and Non-Assignee Bug Resolver) can be subject to discussion. For example, if bugs are marked resolved by only the team leads in a project, we may not claim that a non-assignee-resolver smell exists, because the mismatch between the assignee and the resolving person is not due to organizational discrepancies; it is simply the organizational rule. The detection methods for these kinds of smells should be adapted to organizations and systems. Regarding the Missing Environment Information smell, bug reporters may prefer indicating the environment information in the bug description and leaving the corresponding fields blank. We count such a case as missing environment information, although it may not be, since that information might be mentioned inside the bug description. However, we believe this is still a bad practice, as it prevents people from searching and filtering by environment information.

Tool Dependency: For the projects that use Jira, one might argue that the Missing Severity smell should not be counted, since Jira does not provide a severity field by default. Despite the lack of a severity field, Jira still provides mechanisms to add custom fields, and it is the project members' responsibility to customize the tool. Since severity is an important property of a bug, not tracking this property should still be counted as a process smell.

Configurable Thresholds: Within the smell detection methods, we made a few assumptions on the configurable parameters and definitions (e.g., an ignored-bug time duration of six months, and, for reassignment of a bug, a bug being reassigned more than once). These thresholds are subject to discussion and could be configured depending on the project. Furthermore, we surveyed experienced software practitioners for expert opinions about the thresholds and definitions of our proposed BT process smells and adjusted our thresholds based on the survey results.

Practitioner's Survey: We also identified a number of threats associated with our study's survey. One of the possible threats to the legitimacy of the survey is that the smell definitions may be misinterpreted by the respondents. To address these concerns, a full description of each process smell is provided, along with a real-life example, and the respondents were instructed to contact the authors if they had any questions about the survey. We performed a pilot test of the questionnaire before sending the survey to the software practitioners and clarified the items that could potentially pose an issue. Many respondents effectively supplied multiple answers in one response to the open-ended questions. When asked about the actions taken in their companies for avoiding process smells, they mentioned some of those actions. All of the comments provided by the participants were taken into consideration during the survey synthesis in order to stay faithful to their feedback. We share the responses in Section 6. The survey was sent out to the authors' network and was aimed at those who actively use or have expertise with bug tracking systems. Although the survey participants come from different companies and have different job titles, general conclusions cannot be drawn. As a result, we cannot assume that the findings will apply to all software product systems that use bug tracking systems. The survey's sample is a convenience sample that was sent to the authors' network, as previously stated. Also, the survey is somewhat long and takes around 45–50 min.

To improve the study's replicability, we share the survey questions and answers (https://figshare.com/s/c493b64e4f1b5ef4337f), the dataset that we used to mine BT process smells, and the source code (https://figshare.com/s/ca6e8ac4146b9d4e1cac).

9.2. Construct validity

Construct validity shows how well the outcomes of the instrument are indicative of the theoretical concept and how well the measure 'behaves' in a way that is consistent with theoretical hypotheses. We described two measures in our research: TTR, which indicates the resolution time of the bug, and reopen count, which shows how many times the bug is reopened during its life cycle. For TTR, we intend to measure the time a bug takes to resolve; therefore, we calculated the time from when the bug is opened until the time when the bug is closed. Similarly, for reopen counts, we counted the number of times the bug was in the reopened state during its life cycle. Although these assumptions are fair, these measures may not accurately reflect the intended measure. Moreover, reopening counts might not be reliable.

9.3. External validity

External validity threats are a concern with respect to the generalization of results, i.e., to which extent we can generalize our results. To mitigate these threats, we conducted our study on eight projects from three different tools, i.e., Jira, Bugzilla, and GitHub Issues. To improve the generalizability, we are planning to conduct our study on more projects and tools in the future.
Table 8
Summary of MLR results (Part 1).
Source | Type | Unassigned bugs | No link to commit | Ignored bugs | Bugs assigned to team | Missing priority | Not referenced duplicates
An overview of the software engineering process and tools in the Mozilla White ✓
project [40]
Who should fix this bug? [44] White ✓
How to write a good bug report? [42] Gray ✓ ✓
How to Write a Bug Report [68] White ✓
The missing links: bugs and bug-fix commits [45] White ✓
An empirical study on factors impacting bug fixing time [61] White ✓
How to Write a Bug Report: The Ideal Bug Report [69] Gray ✓
Towards automated anomaly report assignment in large complex systems White ✓
using stacked generalization [51]
Increasing anomaly handling efficiency in large organizations using applied White ✓
machine learning [50]
Defect prioritization in the software industry: challenges and opportunities White ✓
[15]
The Anatomy Of a Good Bug Report [53] Gray ✓
10 Tips of Writing effective Bug Reports [52] Gray ✓
Detecting duplicate bug reports with software engineering domain knowledge White ✓
[55]
Characterizing bug workflows in mozilla firefox [48] White ✓
How long will it take to fix this bug?[2] White ✓
Towards more accurate severity prediction and fixer recommendation of White ✓
software bugs [14]
Defect Management:4 Steps to Better Products & Processes [70] Gray ✓
Prioritizing warning categories by analyzing software history [71] White ✓
Supporting change request assignment in open source development [72] White ✓
Determining implementation expertise from bug reports [73] White ✓
Reducing the effort of bug report triage: Recommenders for White ✓
development-oriented decisions [74]
How to write a bug report that will make your engineers love you [75] Gray ✓
Selecting discriminating terms for bug assignment: A formal analysis [41] White ✓
Automatic software bug triage system (BTS) based on Latent Semantic White ✓
Indexing and Support Vector Machine [19]
Improving bug triage with bug tossing graphs [20] White ✓
Which warnings should I fix first? [76] White ✓
Z-Ranking: Using statistical analysis to counter the impact of static analysis White ✓
approximations [77]
Duplicate bug reports considered harmful. . . really? [57] White ✓
8 steps for better issue management [43] Gray ✓
Best Practices for Effective Defect Tracking [58] Gray ✓
10 tips of writing efficient defect report [52] Gray ✓
Defect tracking best practices [47] Gray ✓ ✓
The Good, The Bad and The Ugly Bug: Bug Tracking Best Practices [78] Gray ✓
Bug tracking best practices guide [79] Gray ✓ ✓
How to write a good bug report? Tips And Tricks [80] Gray ✓ ✓
Best practices: Defect reporting techniques [81] Gray ✓
How to write a good bug report: step-by-step instructions [82] Gray ✓
How to write a bug report that will make your developers happy [83] Gray ✓
Issue Writing Guidelines [84] Gray ✓
How to write a good bug report? Tips and Tricks [85] Gray ✓ ✓

9.4. Conclusion validity

Conclusion validity concerns whether the conclusions drawn are logical in light of the data. In our study, we conclude that BT process smells statistically affect the resolution time and reopening count of bugs. However, our statistical analysis might not be fully reliable considering the project diversity and organizational variations.

10. Related work

In the literature, a number of studies have discussed anti-patterns in the bug tracking process; however, the majority of these studies focused on a single anti-pattern. To the best of our knowledge, our study is the first to systematically define and quantitatively analyze bug tracking process smells. In this section, we summarize the related work and refer the reader to the Appendix for a more comprehensive survey of the existing work on similar studies.

The vast number of process logs from real-life software projects has recently enabled mining these processes to find cases in reality that contradict the ideal process definitions. Gupta and Sureka [91,92] put forward a framework called Nirikshan that helps to observe anomalies between the run-time process model (real-life model) and the design-time model (ideal model) within the bug life cycle of an OSS project. In their follow-up study, Gupta et al. [93] elaborate the Chromium project's bug life cycle by mining issue tracking, peer code review, and version control systems. In their work, they mainly defined the anti-patterns as loops among states in the bug tracking history, which is similar to the ‘Closed Reopen Ping-Pong’ smell that we defined in this study.

Additionally, certain deviations from the optimal method, as well as bottlenecks, are detected and diagnosed during the life cycle. An interactive approach was introduced by Knab et al. [94] for visualizing the patterns of process life cycles and effort estimations using event logs and bug reports in BTS to detect flaws, outliers, and other interesting properties. These studies visualize inconsistencies, like bug reopens, in the real-time process flow versus the ideal process flow using process mapping tools and event logs. Halverson et al. [95] also worked on the visualization of state changes and unveiled problematic bug patterns like multiple reopen/resolve cycles.
Table 9
Summary of MLR results (Part 2).
Source | Type | Missing environment info | Missing severity | Reassignment of bug assignee | No comment bugs | Non-assignee resolver of bugs | Closed reopen ping-pong
Who should fix this bug? [44] White ✓
How to write a good bug report? [42] Gray ✓ ✓
How to Write a Bug Report [68] Gray ✓
How to Write a Bug Report: The Ideal Bug Report [69] Gray ✓ ✓
The anatomy of a good bug report [53] Gray ✓ ✓
Predicting eclipse bug lifetimes [86] White ✓
Improving Bug Tracking Systems [2] White ✓
Best Practices for Effective Defect Tracking [58] Gray ✓
How to write an effective bug report that actually gets resolved Gray ✓ ✓
(and why everyone should) [87]
What makes a good bug report?[3] White ✓
Predicting the severity of a reported bug [10] White ✓
Prediction of defect severity by mining software project reports [11] White ✓
Towards more accurate severity prediction and fixer White ✓
recommendation of software bugs [14]
Automatic categorization of bug reports using latent Dirichlet White ✓
allocation [88]
Writing Good Bug Report [59] Gray ✓
Improving the readability of defect reports [1] White ✓
Predicting re-opened bugs: A case study on the eclipse project [63] White ✓
Reopening Issues After They Have Been Resolved Or Closed [87] Gray ✓
Guidelines for evaluating bug-assignment research [89] Gray ✓
Effective bug triage—a framework [90] Gray ✓
An empirical study of bug report field reassignment [28] Gray ✓
Detection of duplicate defect reports using natural language Gray ✓
processing [60]
Automatically predicting bug severity early in the development Gray ✓
process [9]
How to write a bug report that will make your engineers love you Gray ✓ ✓
[75]
Improving bug triage with bug tossing graphs [20] White ✓
8 steps for better issue management [43] Gray ✓
Best practices for effective defect tracking [58] Gray ✓ ✓
10 Tips of Writing Efficient Defect Report [52] Gray ✓
Bug Tracking Best Practices Guide [79] Gray ✓ ✓
How To Write A Good Bug Report? Tips And Tricks [80] Gray ✓ ✓
Best Practices: Defect reporting techniques [81] Gray ✓
How to write a good bug report: step-by-step instructions [82] Gray ✓ ✓
How to write a bug report that will make your developers happy Gray ✓ ✓
[83]
Issue Writing Guidelines [84] Gray ✓
How to write a good bug report? Tips and Tricks [85] Gray ✓ ✓

also considering reopened bugs as a ‘Closed Reopen Ping-Pong smell’ Recently, Dogan & Tuzun [98] used a similar approach on the code
process smell in our BT tracking taxonomy. review process and provided the taxonomy of code review process
D’Ambros et al. [96] used data from a release history database smells. Similar to the bug tracking process smells, these code review
for the visualization of a bug’s life cycle. Their tool focuses on the process smells could have negative impacts on software productivity.
activity, time, and priority/severity features of the bug. The advantage
of such a representation is the ease with which you can switch between 11. Conclusion and future work
an overview and a thorough examination of a single issue. Through
examination of issues, we can analyze the occurrence of the ‘Missing Based on the results of an MLR, we proposed a taxonomy of 12
Priority’ and ‘Missing Severity’ process smells. process smells in the bug tracking process. To observe their presence
Other methodologies focus on the quality of bug reports, in practice, we conducted an empirical evaluation of BT process smells
Ko et al. [97] provided finer tool support for bug reporting. They by mining bug reports of eight projects from Jira, Bugzilla, and GitHub
analyzed the linguistic attributes of the bug report descriptions for the Issues (GCC, Wireshark, Jira Server, Confluence, MongoDB Core Server,
improvement of bug reports’ quality. Hooimeijer et al. [62] presume Evergreen, Apache ShenYu, and Apache YuniKorn). We conducted a
survey with software practitioners to get experts’ opinions about the
that high-quality bug reports are addressed faster as compared to
BT process smells. We also run a statistical test to analyze whether the
low-quality bug reports. Based on this presumption they anticipate if
impacts of BT process smells on TTR and reopen counts are statistically
a bug is resolved in a specified time by using different bug report
significant or not. We can summarize the main contributions of our
characteristics, e.g., submitter reputation, readability, severity, etc.
study as follows:
Considering the findings of this study, we can conclude that the BT
process smells we proposed in our taxonomy decreases the quality of • Proposed a novel taxonomy of BT process smells (Table 2), based
bug reports. on a multivocal literature review.
‘Who should fix this bug’ by Anvik et al. [44] employ data mining • Performed an empirical analysis with eight OSS projects to
techniques to propose prospective developers to whom a bug should demonstrate that all the process smells occur in software bug
be assigned. The same authors discuss the issues that arise when using repositories with varying ratios.
open bug repositories, such as irrelevant and duplication [73] similar • Observed that over time, the occurrence of some specific BT
to the ‘Not referenced Duplicates’ smell. process smells in software projects is decreased. The reason for

21
K.A. Qamar et al. Information and Software Technology 150 (2022) 106972

this improvement might be associated with the advancements in [8] A.T. Nguyen, T.T. Nguyen, T.N. Nguyen, D. Lo, C. Sun, Duplicate bug report
BT tools and improved best practices for the BT process. detection with a combination of information retrieval and topic modeling, in:
Proceedings of the 27th IEEE/ACM International Conference on Automated
• In our study, domain experts, such as software practitioners,
Software Engineering - ASE 2012, ACM Press, New York, New York, USA, 2012,
are consulted to get their opinion on the taxonomy of BT pro- p. 70, http://dx.doi.org/10.1145/2351676.2351687, http://dl.acm.org/citation.
cess smell. The majority of software practitioners agree with the cfm?doid=2351676.2351687.
proposed process smells. [9] J. Arokiam, J.S. Bradbury, Automatically predicting bug severity early in the
development process, in: Proceedings of the ACM/IEEE 42nd International Con-
• Analyzed the statistical impact of process smells on bug tracking
ference on Software Engineering: New Ideas and Emerging Results, ACM, New
quality measures like TTR and reopen count of bugs. We observed York, NY, USA, 2020, pp. 17–20, http://dx.doi.org/10.1145/3377816.3381738,
that process smells have a statistically significant impact on these https://dl.acm.org/doi/10.1145/3377816.3381738.
quality metrics. [10] A. Lamkanfi, S. Demeyer, E. Giger, B. Goethals, Predicting the severity of
The implications of our study for researchers and practitioners are three-fold. First, our proposed taxonomy can be used as a baseline to be extended by researchers. Second, for practitioners, the BT process can be enhanced by introducing suitable tooling. Finally, our proposed taxonomy could be used for developing automated (or semi-automated) BT process smell detection systems by mining software repositories.

As future work, our taxonomy can be expanded to include more process smells and their empirical analysis. We can also extend this study by performing a similar empirical evaluation on a more diverse set of BT tools. Another future direction is the implementation of practical tools to detect these BT process smells (a minimal sketch of one such detection rule follows).
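As a concrete illustration of what such tooling might look like, the sketch below flags one smell from the taxonomy, ‘Not referenced Duplicates’, in mined issue data. The detection rule and the dictionary layout of the issues are simplifying assumptions made for the example; a real detector would work directly against the Jira, Bugzilla, or GitHub Issues data models studied in this paper.

# Toy sketch of a detector for the 'Not referenced Duplicates' smell: a report
# resolved as a duplicate should reference the bug it duplicates.
# The issue dictionary layout is an assumption made for this example.
from typing import Iterable, List

def not_referenced_duplicates(issues: Iterable[dict]) -> List[str]:
    """Return keys of issues resolved as Duplicate but lacking a duplicate link."""
    flagged = []
    for issue in issues:
        resolved_as_duplicate = issue.get("resolution") == "Duplicate"
        has_duplicate_link = any(
            link.get("type") == "Duplicate" for link in issue.get("links", [])
        )
        if resolved_as_duplicate and not has_duplicate_link:
            flagged.append(issue["key"])
    return flagged

issues = [
    {"key": "PRJ-1", "resolution": "Duplicate", "links": []},
    {"key": "PRJ-2", "resolution": "Duplicate",
     "links": [{"type": "Duplicate", "target": "PRJ-1"}]},
]
print(not_referenced_duplicates(issues))  # prints ['PRJ-1']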
CRediT authorship contribution statement

Khushbakht Ali Qamar: Conceptualization, Methodology, Writing – original draft. Emre Sülün: Software, Investigation, Data Curation, Writing – original draft. Eray Tüzün: Supervision, Validation, Writing – reviewing and editing, Project Administration, Funding Acquisition.
Appendix

See Tables 8 and 9.

References

[1] B. Dit, A. Marcus, Improving the readability of defect reports, in: Proceedings of the 2008 International Workshop on Recommendation Systems for Software Engineering - RSSE ’08, ACM Press, New York, New York, USA, 2008, p. 47, http://dx.doi.org/10.1145/1454247.1454265.
[2] T. Zimmermann, R. Premraj, J. Sillito, S. Breu, Improving bug tracking systems, in: 2009 31st International Conference on Software Engineering - Companion Volume, IEEE, 2009, pp. 247–250, http://dx.doi.org/10.1109/ICSE-COMPANION.2009.5070993.
[3] T. Zimmermann, R. Premraj, N. Bettenburg, S. Just, A. Schroter, C. Weiss, What makes a good bug report? IEEE Trans. Softw. Eng. 36 (5) (2010) 618–643, http://dx.doi.org/10.1109/TSE.2010.63.
[4] S. Rastkar, G.C. Murphy, G. Murray, Summarizing software artifacts: A case study of bug reports, in: Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - ICSE ’10, vol. 1, ACM Press, New York, New York, USA, 2010, p. 505, http://dx.doi.org/10.1145/1806799.1806872.
[5] S. Mani, R. Catherine, V.S. Sinha, A. Dubey, AUSUM: Approach for unsupervised bug report summarization, in: Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering - FSE ’12, ACM Press, New York, New York, USA, 2012, p. 1, http://dx.doi.org/10.1145/2393596.2393607.
[6] R. Lotufo, Z. Malik, K. Czarnecki, Modelling the ‘hurried’ bug report reading process to summarize bug reports, Empir. Softw. Eng. 20 (2) (2015) 516–548, http://dx.doi.org/10.1007/s10664-014-9311-2.
[7] X. Wang, L. Zhang, T. Xie, J. Anvik, J. Sun, An approach to detecting duplicate bug reports using natural language and execution information, in: Proceedings of the 13th International Conference on Software Engineering - ICSE ’08, ACM Press, New York, New York, USA, 2008, p. 461, http://dx.doi.org/10.1145/1368088.1368151.
[8] A.T. Nguyen, T.T. Nguyen, T.N. Nguyen, D. Lo, C. Sun, Duplicate bug report detection with a combination of information retrieval and topic modeling, in: Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering - ASE 2012, ACM Press, New York, New York, USA, 2012, p. 70, http://dx.doi.org/10.1145/2351676.2351687.
[9] J. Arokiam, J.S. Bradbury, Automatically predicting bug severity early in the development process, in: Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: New Ideas and Emerging Results, ACM, New York, NY, USA, 2020, pp. 17–20, http://dx.doi.org/10.1145/3377816.3381738.
[10] A. Lamkanfi, S. Demeyer, E. Giger, B. Goethals, Predicting the severity of a reported bug, in: 2010 7th IEEE Working Conference on Mining Software Repositories, MSR 2010, IEEE, 2010, pp. 1–10, http://dx.doi.org/10.1109/MSR.2010.5463284.
[11] R. Jindal, R. Malhotra, A. Jain, Prediction of defect severity by mining software project reports, Int. J. Syst. Assur. Eng. Manag. 8 (2) (2017) 334–351, http://dx.doi.org/10.1007/s13198-016-0438-y.
[12] A. Lamkanfi, S. Demeyer, Q.D. Soetens, T. Verdonck, Comparing mining algorithms for predicting the severity of a reported bug, in: 2011 15th European Conference on Software Maintenance and Reengineering, IEEE, 2011, pp. 249–258, http://dx.doi.org/10.1109/CSMR.2011.31.
[13] T. Menzies, A. Marcus, Automated severity assessment of software defect reports, in: 2008 IEEE International Conference on Software Maintenance, IEEE, 2008, pp. 346–355, http://dx.doi.org/10.1109/ICSM.2008.4658083.
[14] T. Zhang, J. Chen, G. Yang, B. Lee, X. Luo, Towards more accurate severity prediction and fixer recommendation of software bugs, J. Syst. Softw. 117 (2016) 166–184, http://dx.doi.org/10.1016/j.jss.2016.02.034.
[15] N. Kaushik, M. Amoui, L. Tahvildari, W. Liu, S. Li, Defect prioritization in the software industry: Challenges and opportunities, in: 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, IEEE, 2013, pp. 70–73, http://dx.doi.org/10.1109/ICST.2013.40.
[16] D. Kim, J. Nam, J. Song, S. Kim, Automatic patch generation learned from human-written patches, in: 2013 35th International Conference on Software Engineering, ICSE, IEEE, 2013, pp. 802–811, http://dx.doi.org/10.1109/ICSE.2013.6606626.
[17] X. Xia, D. Lo, E. Shihab, X. Wang, B. Zhou, Automatic, high accuracy prediction of reopened bugs, Autom. Softw. Eng. 22 (1) (2015) 75–109, http://dx.doi.org/10.1007/s10515-014-0162-2.
[18] H. Valdivia-Garcia, E. Shihab, M. Nagappan, Characterizing and predicting blocking bugs in open source projects, J. Syst. Softw. 143 (2018) 44–58, http://dx.doi.org/10.1016/j.jss.2018.03.053.
[19] S.N. Ahsan, J. Ferzund, F. Wotawa, Automatic software bug triage system (BTS) based on latent semantic indexing and support vector machine, in: 2009 Fourth International Conference on Software Engineering Advances, IEEE, 2009, pp. 216–221, http://dx.doi.org/10.1109/ICSEA.2009.92.
[20] G. Jeong, S. Kim, T. Zimmermann, Improving bug triage with bug tossing graphs, in: Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, ESEC/FSE ’09, Association for Computing Machinery, New York, NY, USA, 2009, pp. 111–120, http://dx.doi.org/10.1145/1595696.1595715.
[21] A. Tamrawi, T.T. Nguyen, J. Al-Kofahi, T.N. Nguyen, Fuzzy set-based automatic bug triaging (NIER track), in: Proceeding of the 33rd International Conference on Software Engineering - ICSE ’11, ACM Press, New York, New York, USA, 2011, p. 884, http://dx.doi.org/10.1145/1985793.1985934.
[22] S.K. Lukins, N.A. Kraft, L.H. Etzkorn, Source code retrieval for bug localization using latent Dirichlet allocation, in: 2008 15th Working Conference on Reverse Engineering, IEEE, 2008, pp. 155–164, http://dx.doi.org/10.1109/WCRE.2008.33.
[23] S. Rao, A. Kak, Retrieval from software libraries for bug localization: A comparative study of generic and composite text models, in: Proceeding of the 8th Working Conference on Mining Software Repositories - MSR ’11, ACM Press, New York, New York, USA, 2011, p. 43, http://dx.doi.org/10.1145/1985441.1985451.
[24] W. Weimer, T. Nguyen, C. Le Goues, S. Forrest, Automatically finding patches using genetic programming, in: 2009 IEEE 31st International Conference on Software Engineering, IEEE, 2009, pp. 364–374, http://dx.doi.org/10.1109/ICSE.2009.5070536.
[25] C. Le Goues, M. Dewey-Vogt, S. Forrest, W. Weimer, A systematic study of automated program repair: Fixing 55 out of 105 bugs for $8 each, in: 2012 34th International Conference on Software Engineering, ICSE, IEEE, 2012, pp. 3–13, http://dx.doi.org/10.1109/ICSE.2012.6227211.
[26] C. Le Goues, T. Nguyen, S. Forrest, W. Weimer, GenProg: A generic method for automatic software repair, IEEE Trans. Softw. Eng. 38 (1) (2012) 54–72, http://dx.doi.org/10.1109/TSE.2011.104.
[27] C. Liu, J. Yang, L. Tan, M. Hafiz, R2Fix: Automatically generating bug fixes from bug reports, in: 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, IEEE, 2013, pp. 282–291, http://dx.doi.org/10.1109/ICST.2013.24.
[28] X. Xia, D. Lo, M. Wen, E. Shihab, B. Zhou, An empirical study of bug report field reassignment, in: 2014 Software Evolution Week - IEEE Conference on Software Maintenance, Reengineering, and Reverse Engineering, CSMR-WCRE, IEEE, 2014, pp. 174–183, http://dx.doi.org/10.1109/CSMR-WCRE.2014.6747167.
[29] K.A. Qamar, E. Sülün, E. Tüzün, Towards a taxonomy of bug tracking process smells: A quantitative analysis, in: 2021 47th Euromicro Conference on Software Engineering and Advanced Applications, SEAA, 2021, pp. 138–147, http://dx.doi.org/10.1109/SEAA53835.2021.00026.
[30] H. Rocha, G. de Oliveira, M.T. Valente, H. Marques-Neto, Characterizing bug workflows in mozilla firefox, in: Proceedings of the 30th Brazilian Symposium on Software Engineering, SBES ’16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 43–52, http://dx.doi.org/10.1145/2973839.2973844.
[31] M. Di Penta, D.A. Tamburri, Combining quantitative and qualitative studies in empirical software engineering research, in: 2017 IEEE/ACM 39th International Conference on Software Engineering Companion, ICSE-C, IEEE, 2017, pp. 499–500, http://dx.doi.org/10.1109/ICSE-C.2017.163.
[32] V. Garousi, M. Felderer, M.V. Mäntylä, Guidelines for including grey literature and conducting multivocal literature reviews in software engineering, Inf. Softw. Technol. 106 (2019) 101–121, http://dx.doi.org/10.1016/j.infsof.2018.09.006.
[33] S. Keele, et al., Guidelines for Performing Systematic Literature Reviews in Software Engineering, Tech. Rep., Citeseer, 2007.
[34] R.L. Glass, T. DeMarco, Software Creativity 2.0, Developer.* Books, 2006, https://books.google.com.tr/books?id=DozsD0zxb5wC.
[35] M. Ivarsson, T. Gorschek, A method for evaluating rigor and industrial relevance of technology evaluations, Empir. Softw. Eng. 16 (3) (2011) 365–395, http://dx.doi.org/10.1007/s10664-010-9146-4.
[36] B. Kitchenham, S. Charters, Guidelines for Performing Systematic Literature Reviews in Software Engineering, Citeseer, 2007.
[37] D.S. Cruzes, T. Dyba, Recommended steps for thematic synthesis in software engineering, in: 2011 International Symposium on Empirical Software Engineering and Measurement, IEEE, 2011, pp. 275–284.
[38] B. Kitchenham, S.L. Pfleeger, Principles of survey research: Part 5: Populations and samples, ACM SIGSOFT Softw. Eng. Notes 27 (5) (2002) 17–20, http://dx.doi.org/10.1145/571681.571686.
[39] H.B. Mann, D.R. Whitney, On a test of whether one of two random variables is stochastically larger than the other, Ann. Math. Stat. (1947) 50–60, http://www.jstor.org/stable/2236101.
[40] C. Reis, R.D.M. Fortes, An overview of the software engineering process and tools in the mozilla project, in: Proceedings of the Open Source Software Development Workshop, 2002, pp. 1–21, https://www.ics.uci.edu/~wscacchi/Software-Process/Readings/Mozilla-study.pdf.
[41] I. Aljarah, S. Banitaan, S. Abufardeh, W. Jin, S. Salem, Selecting discriminating terms for bug assignment: A formal analysis, in: Proceedings of the 7th International Conference on Predictive Models in Software Engineering - Promise ’11, ACM Press, New York, New York, USA, 2011, pp. 1–7, http://dx.doi.org/10.1145/2020390.2020402.
[42] S. Hill, How to write a good bug report? 2015, https://leantesting.com/write-good-bug-report/. (Accessed 1 March 2021).
[43] J. Bridges, 8 steps for better issue management, 2019, https://www.projectmanager.com/training/managing-project-issues/. (Accessed 25 April 2021).
[44] J. Anvik, L. Hiew, G.C. Murphy, Who should fix this bug? in: Proceeding of the 28th International Conference on Software Engineering - ICSE ’06, ACM Press, New York, New York, USA, 2006, p. 361, http://dx.doi.org/10.1145/1134285.1134336.
[45] A. Bachmann, C. Bird, F. Rahman, P. Devanbu, A. Bernstein, The missing links: Bugs and bug-fix commits, in: Proceedings of the Eighteenth ACM SIGSOFT International Symposium on Foundations of Software Engineering - FSE ’10, ACM Press, New York, New York, USA, 2010, p. 97, http://dx.doi.org/10.1145/1882291.1882308.
[46] T.F. Bissyandé, F. Thung, S. Wang, D. Lo, L. Jiang, L. Reveillere, Empirical evaluation of bug linking, in: 2013 17th European Conference on Software Maintenance and Reengineering, IEEE, 2013, pp. 89–98, http://dx.doi.org/10.1109/CSMR.2013.19.
[47] M. Ricklin, Defect tracking best practices, 2009, https://www.stickyminds.com/article/defect-tracking-best-practices-0. (Accessed 1 March 2021).
[48] H. Rocha, G. de Oliveira, M.T. Valente, H. Marques-Neto, Characterizing bug workflows in mozilla firefox, in: Proceedings of the 30th Brazilian Symposium on Software Engineering - SBES ’16, ACM Press, New York, New York, USA, 2016, pp. 43–52, http://dx.doi.org/10.1145/2973839.2973844.
[49] L. Floridi, Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions, Phil. Trans. R. Soc. A 374 (2083) (2016) 20160112, http://dx.doi.org/10.1098/rsta.2016.0112.
[50] L. Jonsson, Increasing anomaly handling efficiency in large organizations using applied machine learning, in: 2013 35th International Conference on Software Engineering, ICSE, IEEE, 2013, pp. 1361–1364, http://dx.doi.org/10.1109/ICSE.2013.6606717.
[51] L. Jonsson, D. Broman, K. Sandahl, S. Eldh, Towards automated anomaly report assignment in large complex systems using stacked generalization, in: 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation, IEEE, 2012, pp. 437–446, http://dx.doi.org/10.1109/ICST.2012.124.
[52] S. Admin, 10 Tips of writing effective bug reports, 2014, https://www.softwaretestingclass.com/10-tips-of-writing-efficient-defect-report/. (Accessed 1 March 2021).
[53] M. Mitrev, The anatomy of a good bug report, 2020, https://devrix.com/tutorial/good-bug-report/. (Accessed 1 March 2021).
[54] J. Kanwal, O. Maqbool, Bug prioritization to facilitate bug report triage, J. Comput. Sci. Tech. 27 (2) (2012) 397–412.
[55] K. Aggarwal, F. Timbers, T. Rutgers, A. Hindle, E. Stroulia, R. Greiner, Detecting duplicate bug reports with software engineering domain knowledge, J. Softw. Evol. Process 29 (3) (2017) e1821, http://dx.doi.org/10.1002/smr.1821.
[56] B. Kucuk, I. Hanhan, E. Tuzun, Characterizing duplicate bugs: Perceptions of practitioners and an empirical analysis, J. Softw. Evol. Process (2022) http://dx.doi.org/10.1002/smr.2446.
[57] N. Bettenburg, R. Premraj, T. Zimmermann, S. Kim, Duplicate bug reports considered harmful . . . really? in: 2008 IEEE International Conference on Software Maintenance, IEEE, 2008, pp. 337–345, http://dx.doi.org/10.1109/ICSM.2008.4658082.
[58] E. Dmytriiev, Best practices for effective defect tracking, 2016, https://www.linkedin.com/pulse/best-practices-effective-defect-tracking-eugene-dmytriiev-l-i-o-n. (Accessed 1 March 2021).
[59] LuminosLabs, Writing good bug report, 2020, https://www.luminoslabs.com/insights/writing-good-bug-report/. (Accessed 1 March 2021).
[60] P. Runeson, M. Alexandersson, O. Nyholm, Detection of duplicate defect reports using natural language processing, in: 29th International Conference on Software Engineering, ICSE’07, IEEE, 2007, pp. 499–510, http://dx.doi.org/10.1109/ICSE.2007.32.
[61] F. Zhang, F. Khomh, Y. Zou, A.E. Hassan, An empirical study on factors impacting bug fixing time, in: 2012 19th Working Conference on Reverse Engineering, IEEE, 2012, pp. 225–234, http://dx.doi.org/10.1109/WCRE.2012.32.
[62] P. Hooimeijer, W. Weimer, Modeling bug report quality, in: Proceedings of the Twenty-Second IEEE/ACM International Conference on Automated Software Engineering, 2007, pp. 34–43, http://dx.doi.org/10.1145/1321631.1321639.
[63] E. Shihab, A. Ihara, Y. Kamei, W.M. Ibrahim, M. Ohira, B. Adams, A.E. Hassan, K.-i. Matsumoto, Predicting re-opened bugs: A case study on the eclipse project, in: 2010 17th Working Conference on Reverse Engineering, IEEE, 2010, pp. 249–258, http://dx.doi.org/10.1109/WCRE.2010.36.
[64] AnalyseIT247, Reopening issues after they have been resolved or closed, 2016, https://www.analyseit247.com/reopening-issues/. (Accessed 1 March 2021).
[65] T. Zhang, H. Jiang, X. Luo, A.T.S. Chan, A literature review of research in bug resolution: Tasks, challenges and future directions, Comput. J. 59 (5) (2016) 741–773, http://dx.doi.org/10.1093/comjnl/bxv114.
[66] S. Dueñas, V. Cosentino, G. Robles, J.M. Gonzalez-Barahona, Perceval: Software project data at your will, in: Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings, ACM, New York, NY, USA, 2018, pp. 1–4, http://dx.doi.org/10.1145/3183440.3183475.
[67] T. Sedano, P. Ralph, C. Péraire, Software development waste, in: 2017 IEEE/ACM 39th International Conference on Software Engineering, ICSE, IEEE, 2017, pp. 130–140.
[68] K. Elena, A. Sviatoslav, How to write a bug report, 2016, https://rubygarage.org/blog/how-to-write-a-quality-bug-report. (Accessed 1 March 2021).
[69] Instabug, How to write a bug report: The ideal bug report, 2020, https://instabug.com/blog/how-to-write-a-bug-report-the-ideal-bug-report/. (Accessed 1 March 2021).
[70] Arena, Defect management—4 steps to better products & processes, 2020, https://www.arenasolutions.com/resources/articles/defect-management/. (Accessed 1 March 2021).
[71] S. Kim, M.D. Ernst, Prioritizing warning categories by analyzing software history, in: Fourth International Workshop on Mining Software Repositories (MSR’07:ICSE Workshops 2007), IEEE, 2007, p. 27, http://dx.doi.org/10.1109/MSR.2007.26.
[72] G. Canfora, L. Cerulo, Supporting change request assignment in open source development, in: Proceedings of the 2006 ACM Symposium on Applied Computing - SAC ’06, vol. 2, ACM Press, New York, New York, USA, 2006, p. 1767, http://dx.doi.org/10.1145/1141277.1141693.
[73] J. Anvik, G.C. Murphy, Determining implementation expertise from bug reports, in: Fourth International Workshop on Mining Software Repositories (MSR’07:ICSE Workshops 2007), IEEE, 2007, p. 2, http://dx.doi.org/10.1109/MSR.2007.7.
[74] J. Anvik, G.C. Murphy, Reducing the effort of bug report triage: Recommenders for development-oriented decisions, ACM Trans. Softw. Eng. Methodol. 20 (3) (2011) 1–35.
[75] D. Lee, How to write a bug report that will make your engineers love you, 2020, https://testlio.com/blog/the-ideal-bug-report/. (Accessed 1 March 2021).
[76] S. Kim, M.D. Ernst, Which warnings should I fix first? in: Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering - ESEC-FSE ’07, ACM Press, New York, New York, USA, 2007, p. 45, http://dx.doi.org/10.1145/1287624.1287633.
[77] T. Kremenek, D. Engler, Z-ranking: Using statistical analysis to counter the impact of static analysis approximations, in: Lecture Notes in Computer Science, vol. 2694, Springer, Berlin, Heidelberg, 2003, pp. 295–315, http://dx.doi.org/10.1007/3-540-44898-5_16.
[78] Kualitee, The good, the bad and the ugly bug: Bug tracking best practices, 2019, https://www.kualitee.com/bug-management/good-bad-ugly-bug-bug-tracking-best-practices/. (Accessed 1 March 2021).
[79] Axosoft, Bug tracking best practices guide, 2020, https://www.axosoft.com/bug-tracking-guide. (Accessed 1 March 2021).
[80] Software Testing Help, How to write a good bug report? Tips and tricks, 2007, https://www.softwaretestinghelp.com/how-to-write-good-bug-report/. (Accessed 1 March 2021).
[81] Thought Coders, Best practices: Defect reporting techniques, 2020, https://thoughtcoders.com/best-practices-defect-reporting-techniques/. (Accessed 1 March 2021).
[82] Musescore, How to write a good bug report: Step-by-step instructions, 2015, https://musescore.org/en/node/309537. (Accessed 1 March 2021).
[83] L.v. Belle, How to write a bug report that will make your developers happy, 2021, https://marker.io/blog/write-bug-report. (Accessed 1 March 2021).
[84] Apache, Issue writing guidelines, 2021, http://www.openoffice.org/bugs/bug_writing_guidelines.html. (Accessed 1 March 2021).
[85] 360Logica, How to write a good bug report? Tips and tricks, 2012, https://www.360logica.com/blog/how-to-write-a-good-bug-report-tips-and-tricks/. (Accessed 1 March 2021).
[86] L.D. Panjer, Predicting eclipse bug lifetimes, in: Fourth International Workshop on Mining Software Repositories (MSR’07:ICSE Workshops 2007), 2007, p. 29, http://dx.doi.org/10.1109/MSR.2007.25.
[87] M. Christensen, How to Write an Effective Bug Report that Actually Gets Resolved (and Why Everyone Should), Tech. Rep., Lucidchart, 2017, https://www.lucidchart.com/blog/how-to-write-an-effective-bug-report-that-actually-gets-resolved-and-why-everyone-should.
[88] K. Somasundaram, G.C. Murphy, Automatic categorization of bug reports using latent Dirichlet allocation, in: Proceedings of the 5th India Software Engineering Conference - ISEC ’12, ACM Press, New York, New York, USA, 2012, pp. 125–130, http://dx.doi.org/10.1145/2134254.2134276.
[89] A. Sajedi-Badashian, E. Stroulia, Guidelines for evaluating bug-assignment research, J. Softw. Evol. Process 32 (9) (2020) 2250, http://dx.doi.org/10.1002/smr.2250.
[90] V. Akila, G. Zayaraz, V. Govindasamy, Effective bug triage–a framework, Procedia Comput. Sci. 48 (2015) 114–120.
[91] M. Gupta, Nirikshan: Process mining software repositories to identify inefficiencies, imperfections, and enhance existing process capabilities, in: Companion Proceedings of the 36th International Conference on Software Engineering, ACM, New York, NY, USA, 2014, pp. 658–661, http://dx.doi.org/10.1145/2591062.2591080.
[92] M. Gupta, A. Sureka, Nirikshan: Mining bug report history for discovering process maps, inefficiencies and inconsistencies, in: Proceedings of the 7th India Software Engineering Conference - ISEC ’14, ACM Press, New York, New York, USA, 2014, pp. 1–10, http://dx.doi.org/10.1145/2590748.2590749.
[93] M. Gupta, A. Sureka, S. Padmanabhuni, Process mining multiple repositories for software defect resolution from control and organizational perspective, in: Proceedings of the 11th Working Conference on Mining Software Repositories - MSR 2014, ACM Press, New York, New York, USA, 2014, pp. 122–131, http://dx.doi.org/10.1145/2597073.2597081.
[94] P. Knab, M. Pinzger, H.C. Gall, Visual patterns in issue tracking data, in: International Conference on Software Process, Springer, 2010, pp. 222–233, http://dx.doi.org/10.1007/978-3-642-14347-2_20.
[95] C.A. Halverson, J.B. Ellis, C. Danis, W.A. Kellogg, Designing task visualizations to support the coordination of work in software development, in: Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, 2006, pp. 39–48, http://dx.doi.org/10.1145/1180875.1180883.
[96] M. D’Ambros, M. Lanza, M. Pinzger, A bug’s life: Visualizing a bug database, in: 2007 4th IEEE International Workshop on Visualizing Software for Understanding and Analysis, IEEE, 2007, pp. 113–120, http://dx.doi.org/10.1109/VISSOF.2007.4290709.
[97] A.J. Ko, B.A. Myers, D.H. Chau, A linguistic analysis of how people describe software problems, in: Visual Languages and Human-Centric Computing, VL/HCC’06, IEEE, 2006, pp. 127–134, http://dx.doi.org/10.1109/VLHCC.2006.3.
[98] E. Doğan, E. Tuzun, Towards a taxonomy of code review smells, Inf. Softw. Technol. 142 (106737) (2022) http://dx.doi.org/10.1016/j.infsof.2021.106737.
