De Araujo 2014
Abstract—This paper presents a systematic literature review of evaluation methods for collaborative systems in the health area. This work aims to identify the state of the art of evaluation methods and to detect gaps in this area. The evaluation focus is usability and collaboration in health systems. The Systematic Literature Review method was used to carry out the review in the ACM and IEEE Xplore digital libraries. 123 papers were found with the searched keywords. The analysis shows that works related to health in general do not evaluate the collaboration aspect. We found that, in general, the surveyed works only briefly assess the functions and usability aspects of the target system. This paper presents the study and concludes that healthcare information systems remain an area that demands research effort.

Keywords—Evaluation; Collaborative system; Health care; Design; Usability

978-1-4799-3776-9/14/$31.00 ©2014 IEEE

I. INTRODUCTION

For collaborative systems, the evaluation of the collaborative process is as important as identifying problems and evaluating usability [1]. However, usability evaluation of a collaborative system is still a difficult and arduous process because there are not sufficient methods that address different research contexts [2]. A usability evaluation should analyze the activity as a whole, i.e., it should evaluate the interactions among users mediated by the system, between users themselves, and between users and all the manual tasks that have to be executed with the system. The variables, context and metrics must be defined beforehand so that the evaluation process can be carried out efficiently. The system's area and context also need to be defined, because the evaluation techniques change according to these directives [1].

Metrics and the evaluation process should be defined according to the research field, for example a collaborative health system, in order to identify problems in that area. This is necessary because the context variables differ depending on the area, and these variables are important for a relevant evaluation [1], [3]. When the system is evaluated from the design phase onward, it is more likely to meet the users' expectations.

In this paper we focus on the evaluation of health collaborative systems. This area was chosen because we have identified a lack of available evaluation methods. In software engineering it is well known that software development projects that ignore end-user involvement during the design phase risk not being well accepted by users, which can lead to the abandonment of the system [4]. An example of these problems is presented by Rigby [5]: a system was developed to support collaboration between doctors and pharmacists, but it was not used due to failures in communication with stakeholders during the development process.

This paper aims to identify the state of the art of evaluation for health collaborative systems. We also aim to identify the consolidated methods in this area as well as their advantages and drawbacks. This work focuses on systems that involve only relations between people; we did not investigate systems that depend on artificial intelligence or robotics to substitute or assist tasks. To carry out the review we used the Systematic Literature Review (SLR) method, because it is a consolidated method for conducting this kind of study [6], [2].

This paper is organized as follows. Section II presents the related works. Section III presents the systematic literature review. Section IV discusses the results of this research. Finally, Section V concludes the paper and indicates future directions.

II. RELATED WORKS

This section presents two works related to systematic literature reviews. Both focus on evaluation methods for collaborative systems.

Santos, Ferreira and Prates [2] surveyed the state of the art of evaluation methods for collaborative systems. They used a Systematic Literature Review (SLR) method to select the papers published in this area in the last twelve years. The search used evaluation and collaborative-system terms in nine academic search engines: four from computing and the others from events related to collaboration. The authors conclude that the evaluation of collaborative systems still has gaps to be researched, and that there are many new methods yet to be consolidated. Furthermore, they state that the methods should be applied in different contexts. Another issue they point out is that each evaluation method has a cost ("weight") to be applied, involving the number of participants, the sessions to be run, their duration, and the time needed to analyze the data. The authors stress the importance of reporting this cost so that researchers know how much effort they will spend to complete the method.

Antunes et al. [1] state that collaborative systems evaluation is a difficult process because it involves the selection of variables, context and appropriate metrics. The authors present a framework that indicates the steps that should be followed in collaborative systems evaluation. This framework is based on a systematic literature review of collaborative system evaluation methods. They searched Google Scholar and the ACM library for articles published in the last ten years, using keywords derived from "CSCW" and "evaluation"; the method used for the literature review is not described. The papers were classified according to the following evaluation questions: why is the evaluation carried out? How? What is evaluated? When is it evaluated (summative or formative)? As a result of this analysis they selected 12 works, which were classified according to the aforementioned questions. Each study was analyzed, and the authors concluded that the methods found did not cover the same evaluation questions. Thus, the framework was structured to help researchers select evaluation methods for collaborative systems.

Both works classify the papers obtained by the systematic review according to method name, collection type, analysis type and moment (summative or formative).

III. SYSTEMATIC LITERATURE REVIEW

The systematic literature review was carried out using the SLR method [6]. This method was chosen because it is well structured and indicates, step by step, how to find and select relevant papers in a given area.

For the systematic search we used the following keywords: "collaborat*" and "health*" and "evaluat*". These terms are related to collaboration, health and evaluation. From these keywords, we found articles containing the following set of terms: collaboration, collaborative, collaborate, healthcare, e-health, evaluation, evaluate. The keywords had to appear in the abstract or title of the papers. The search was performed in the ACM and IEEE Xplore digital libraries. These libraries were chosen because they are reference databases in computing science; additionally, they contain many proceedings of conferences related to collaborative systems. We considered papers published in the last ten years, i.e., between 2003 and 2013, including journals, conferences and periodicals.

The SLR method determines that a main question and specific questions should be identified beforehand in order to find the state of the art in a particular area. The main question is related to what one wants to find at the end of the research; the specific questions help to reach that result. In order to draw a general scenario, the following main question [MQ] was defined: "What is the actual state of the art of collaboration and usability evaluation in health collaborative systems?". To answer this question, three specific questions ([SQ]) were created: [SQ1] "Do health systems use some form of evaluation of their usability and collaboration?"; [SQ2] "What are the difficulties and/or problems found during collaborative system evaluation?"; and [SQ3] "At what time of the process is the evaluation carried out (design, development, implementation)?".

The methodology used to select the papers was elimination, considering the following steps: (i) abstract reading, (ii) diagonal reading and (iii) complete reading.

In order to select only relevant papers, the following exclusion criteria were applied during the elimination process: (i) papers without PDF or paid papers; (ii) workshops, abstracts, posters, panels, lectures and demonstrations; (iii) papers about artificial intelligence, databases, e-learning or network processes; (iv) papers where usability or collaboration were not evaluated; and (v) in the case of duplicate works or references to the same work, only the most recent paper was considered.

Table I presents the number of papers analyzed in each step of the methodology.

TABLE I: Papers classified according to the methodology steps

              Reviewed   Step (i)   Step (ii)   Step (iii)
ACM                 36         15           9            4
IEEE Xplore         77         36          29            8
Total              113         51          38           12

During each research step we collected data such as title, authors, publication year and type. In step (iii), during the paper reading, we collected additional data in order to keep a consistent reading and classification pattern: method used to evaluate collaboration or usability; context in which the evaluation was carried out; device type; collection type; analysis type; moment (design, development or deployment); and difficulties, problems and advantages found.

In order to identify whether a paper fits the research goal, quality criteria were defined for the papers. Each quality criterion has a weight according to its degree of importance: 1 for low, 2 for medium and 3 for high importance. The quality criteria can be answered with "yes", "no" or "partial"; the answer "yes" scores 1, "no" scores 0 and "partial" scores 0.5. The criteria are presented in Table II. The answer value of each criterion is multiplied by the criterion's weight, and the results obtained for all criteria are added. Papers that reached a total score lower than 50% of the maximum (i.e., lower than 5.5, according to the criteria in Table II) were excluded.

TABLE II: Quality criteria

Quality criterion                                                           Weight
Does the paper present in which moment the evaluation occurs?                    1
Does the paper present what device is used?                                      1
Does the paper present how many users participated in the evaluation?            1
Does the paper present problems/advantages of the evaluation method used?        2
Does the paper answer the SQs defined?                                           3
Does the paper present details about the evaluation method used?                 3
Total                                                                           11

IV. RESULTS

After concluding all three steps we obtained 12 works related to collaborative system evaluation in the health area. The evaluations described in these works are associated with measures of usability, degree of collaboration, and testing and evaluation of the studied systems.

Table III shows the classification of the selected papers. The papers were classified according to application context in the health area, collection type, collection moment, focus, analysis type, and their respective authors. The collection type was classified as direct collection (questionnaire, interview), controlled environment (tests with users in a laboratory), or real context (tests with users in a real environment).
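As an aside, the quality-score elimination described in Section III (Table II) amounts to a small weighted-sum computation. The sketch below is a minimal illustration, not code from the paper; the criterion identifiers and function names are hypothetical, but the arithmetic follows the stated scheme: each answer value (yes = 1, partial = 0.5, no = 0) is multiplied by its criterion's weight, the products are summed, and papers scoring below 50% of the maximum (5.5 of 11) are excluded.

```python
# Hypothetical sketch of the paper-scoring scheme from Table II.
# Criterion keys are invented shorthand for the six questions;
# weights encode importance (1 = low, 2 = medium, 3 = high).
WEIGHTS = {
    "moment": 1,          # moment of the evaluation reported?
    "device": 1,          # device used reported?
    "participants": 1,    # number of participants reported?
    "pros_cons": 2,       # problems/advantages of the method reported?
    "answers_sq": 3,      # does the paper answer the defined SQs?
    "method_details": 3,  # details about the evaluation method given?
}

# Answer values as defined in Section III.
ANSWER_VALUE = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def quality_score(answers):
    """Weighted sum of answer values over all quality criteria."""
    return sum(WEIGHTS[c] * ANSWER_VALUE[a] for c, a in answers.items())

def is_included(answers, threshold_ratio=0.5):
    """A paper is kept if it reaches at least 50% of the maximum score (11)."""
    max_score = sum(WEIGHTS.values())  # 11 for the criteria in Table II
    return quality_score(answers) >= threshold_ratio * max_score
```

For instance, a paper answering "yes" to three criteria (weights 1, 2 and 3), "partial" to two (weights 1 and 3), and "no" to one scores 1 + 2 + 3 + 0.5 + 1.5 = 8, above the 5.5 cutoff, and would be retained.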
The collection moment was classified as summative (carried out during the design or development phase) or formative (applied when the system had already been completed). The focus of the papers was classified as: (i) system, i.e., papers that describe a system and evaluate it, or (ii) evaluation method, i.e., papers that present a new or already existing evaluation method. Finally, the analysis type was classified as quantitative or qualitative.

In Table III we can observe that the majority of the papers apply the evaluation method in the design or development phase [9], [11], [12], [13], [14], [16], while some papers apply it only after the deployment phase [8], [10], [15], [18]. Another observed issue is that the majority of the papers use qualitative evaluation methods. In Section IV-A the evaluation methods are analyzed in more detail.

A. Analysis of results

In order to answer the main question [MQ], the specific questions were analyzed and answered during the paper reading in step (iii). In what follows, each of the [SQ]s is analyzed.

Analyzing the 12 selected papers with respect to the first question, "Do health systems use some form of evaluation of their usability and collaboration?", we concluded that no paper emphasized evaluation with a focus on collaboration. In general, the papers evaluate whether the system runs properly. Some papers comment on the benefits of the proposed system in relation to manual operation; they also draw a parallel between the situation before and after the system's development. For usability evaluation, questionnaires, interviews or direct observation are used, but the evaluation methods are not specified.

According to Santos, Ferreira and Prates [2], evaluation methods for collaborative systems are still new and have not been evaluated in different areas. Thus, to consolidate the existing methods for evaluating health collaborative systems, it is necessary that these new methods be applied in different contexts. According to Teixeira [4], although many health systems have been developed, most of them are not adequately evaluated, and therefore they are not suitable for use in real environments. Furthermore, methods for evaluating collaboration and usability should be structured specifically for health collaborative systems.

Regarding the second specific question [SQ2], "What are the difficulties and/or problems found during collaborative system evaluation?", our analysis concluded that the papers do not present the problems and advantages of the methods they use. In some papers the authors comment that there was a lack of users to carry out the evaluation. The majority of the investigated papers report that questionnaires or interviews did not make it possible to evaluate the system's quality, due to badly structured questionnaires or inadequate responses provided by participants. Additionally, none of the reviewed studies highlights the negative and positive points of the method used. Because of this, the use of these assessment methods is not encouraged: there is a lack of information about each method, such as the cost of conducting the evaluation and the difficulties encountered while using it.

Finally, as a result of the third specific question [SQ3], "At what time of the process is the evaluation carried out (design, development, implementation)?", most of the papers involve evaluation during the design or development phase. This is a positive feature: if users are involved in evaluating the system from the design phase onward, in the deployment phase the project will be closer to reality and to user expectations [19], [4].

From the analysis of the three specific questions, the main question, "What is the actual state of the art of collaboration and usability evaluation in health collaborative systems?", can be answered: according to the papers studied, most health systems do not address evaluation performed by users. Even the selected papers that do consider evaluation by users do not present a defined evaluation methodology. Therefore, it can be concluded that methods for evaluating usability and collaboration in health systems are still hard to find. These evaluation methods leave room for academic study, mainly methods that specify, step by step, what should be done, so that the method can be consolidated and applied as a standard for health system evaluation.

V. CONCLUSION

This paper presented a systematic literature review of papers related to the evaluation of health collaborative systems. We selected papers published between 2003 and 2013; after three elimination steps, 12 papers were selected. Based on the analysis of the reviewed material, we concluded that collaborative systems focused on health still have gaps to be studied. There is a lack of quantitative methods to evaluate health collaborative systems, as well as a lack of description of the methods used for the evaluations. The reviewed papers do not present a standard form of evaluation; thus, when an evaluation has a negative result, one does not know whether it is due to a poorly structured method or to a real development problem.

As future work we intend to extend the systematic literature review by searching other engines, mainly health-area engines, and by inserting new keywords such as "telemedicine", "medicine" and "medical".

REFERENCES

[1] P. Antunes, V. Herskovic, S. F. Ochoa, and J. A. Pino, "Structuring dimensions for collaborative systems evaluation," ACM Comput. Surv., vol. 44, no. 2, pp. 8:1–8:28, Mar. 2008. [Online]. Available: http://doi.acm.org/10.1145/2089125.2089128
[2] N. Santos, L. Ferreira, and R. Prates, "An overview of evaluation methods for collaborative systems," in Collaborative Systems (SBSC), 2012 Brazilian Symposium on, 2012, pp. 127–135.
[3] L. Damianos, L. Hirschman, R. Kozierok, J. Kurtz, A. Greenberg, K. Walls, S. Laskowski, and J. Scholtz, "Evaluation for collaborative systems," ACM Comput. Surv., vol. 31, no. 2es, Jun. 1999. [Online]. Available: http://doi.acm.org/10.1145/323216.323362
[4] R. R. Teixeira, "Humanização: transformar as práticas de saúde, radicalizando os princípios do SUS," Interface - Comunicação, Saúde, Educação, vol. 13, no. 1, pp. 785–789, 2009. [Online]. Available: http://www.redalyc.org/articulo.oa?id=180115446030
[5] D. Rigby, "Collaboration between doctors and pharmacists in the community," Australian Prescriber, vol. 33, no. 6, pp. 191–193, 2010.
[6] B. Kitchenham, O. Pearl Brereton, D. Budgen, M. Turner, J. Bailey, and S. Linkman, "Systematic literature reviews in software engineering - a systematic literature review," Inf. Softw. Technol., vol. 51, no. 1, pp. 7–15, Jan. 2009. [Online]. Available: http://dx.doi.org/10.1016/j.infsof.2008.09.009
TABLE III: Selected papers classification

Paper                      | Context                                                              | Collection type                                                                                      | Collection moment                  | Focus             | Analysis type
Jen-Hwa Hu 2003 [7]        | Radiology                                                            | Direct collection and real environment                                                               | Summative and formative            | Evaluation method | Qualitative and quantitative
Kantner et al. 2006 [8]    | Health safety                                                        | Controlled environment (questionnaire, think aloud, direct observation, log analysis)                | Formative                          | System            | Qualitative
Chronaki et al. 2007 [9]   | Electronic health record (pre-hospital emergency and clinical cases) | Direct collection (questionnaire) and real environment (observation and monitoring)                  | Summative                          | System            | Qualitative
Söderholm et al. 2007 [10] | Medical emergency (paramedics)                                       | Controlled environment and direct collection (questionnaire, interview, record)                      | Formative                          | System            | Qualitative
Mohyuddin et al. 2008 [11] | Cancer                                                               | Direct collection (questionnaire, interview) and real environment (direct observation and analysis)  | Summative (design and development) | System            | Qualitative
Coyle & Doherty 2009 [12]  | Mental health                                                        | Direct collection (questionnaire, interview) and real environment (pilot)                            | Summative (design and development) | System            | Qualitative
Gogan et al. 2010 [13]     | Tele-trauma (treatment and urgency)                                  | Real environment (interview, record, annotation)                                                     | Summative (design)                 | Evaluation method | Qualitative
Li et al. 2010 [14]        | Animal health (biosecurity)                                          | Real context (interview, record, shooting, annotation)                                               | Summative (design)                 | System            | Qualitative and quantitative
Cheng et al. 2011 [15]     | Health-related quality of life after cancer                          | Direct collection (questionnaire, observation)                                                       | Formative                          | System            | Qualitative
Lindgren 2012 [16]         | Dementia care                                                        | Direct collection (participatory design, questionnaire, interview, record, annotation)               | Summative (design)                 | System            | Qualitative
Möller et al. 2012 [17]    | Radiology                                                            | Direct collection (questionnaire) and real environment (observation and interview)                   | Summative (design) and formative   | System            | Qualitative and quantitative
Moreira et al. 2012 [18]   | Medical diagnosis                                                    | Real environment (observation, response time, feedback and questionnaire)                            | Formative                          | System            | Qualitative
[7] P. Jen-Hwa Hu, "Evaluating telemedicine systems success: a revised model," in System Sciences, 2003. Proceedings of the 36th Annual Hawaii International Conference on, 2003, p. 8.
[8] L. Kantner, S. D. Goold, M. Danis, M. Nowak, and L. Monroe-Gatrell, "Web tool for health insurance design by small groups: usability study," in CHI '06 Extended Abstracts on Human Factors in Computing Systems, ser. CHI EA '06. New York, NY, USA: ACM, 2006, pp. 141–146. [Online]. Available: http://doi.acm.org/10.1145/1125451.1125484
[9] C. E. Chronaki, V. Kontoyiannis, M. Mytaras, N. Aggourakis, S. Kostomanolakis, T. Roumeliotaki, G. Kavlentakis, F. Chiarugi, and M. Tsiknakis, "Evaluation of shared ehr services in primary healthcare centers and their rural community offices: the twister story," in Engineering in Medicine and Biology Society, 2007. EMBS 2007. 29th Annual International Conference of the IEEE, 2007, pp. 6421–6424.
[10] H. M. Söderholm, D. H. Sonnenwald, B. Cairns, J. E. Manning, G. F. Welch, and H. Fuchs, "The potential impact of 3d telepresence technology on task performance in emergency trauma care," in Proceedings of the 2007 international ACM conference on Supporting group work, ser. GROUP '07. New York, NY, USA: ACM, 2007, pp. 79–88. [Online]. Available: http://doi.acm.org/10.1145/1316624.1316636
[11] Mohyuddin, W. A. Gray, H. Bailey, C. Jordan, and D. Morrey, "Wireless patient information provision and sharing at the point of care using a virtual organization framework in clinical work," in Pervasive Computing and Communications, 2008. PerCom 2008. Sixth Annual IEEE International Conference on, 2008, pp. 710–714.
[12] D. Coyle and G. Doherty, "Clinical evaluations and collaborative design: developing new technologies for mental healthcare interventions," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ser. CHI '09. New York, NY, USA: ACM, 2009, pp. 2051–2060. [Online]. Available: http://doi.acm.org/10.1145/1518701.1519013
[13] J. Gogan, R. Baxter, M. Garfield, and C. Usoff, "Two pilot tests of it-enabled collaboration in emergency healthcare: Evaluating relational feasibility and system acceptance," in System Sciences (HICSS), 2010 43rd Hawaii International Conference on, 2010, pp. 1–10.
[14] J. Li, C. Müller-Tomfelde, and A. Hyatt, "Supporting collaborations across a biocontainment barrier," in Proceedings of the 22nd Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction, ser. OZCHI '10. New York, NY, USA: ACM, 2010, pp. 320–323. [Online]. Available: http://doi.acm.org/10.1145/1952222.1952290
[15] C. Cheng, T. Stokes, and M. Wang, "caremote: The design of a cancer reporting and monitoring telemedicine system for domestic care," in Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE, 2011, pp. 3168–3171.
[16] H. Lindgren, "Knowledge artifacts as tools to communicate and develop knowledge in collaborative user-driven design," in Computer-Based Medical Systems (CBMS), 2012 25th International Symposium on, 2012, pp. 1–6.
[17] S. Möller, F. Franz, and K. Glaser-Seidnitzer, "Evaluating the performance of multimedia presentations in communicating radiological findings," in Quality of Multimedia Experience (QoMEX), 2012 Fourth International Workshop on, 2012, pp. 45–50.
[18] A. Moreira, V. Vieira, and J. del Arco, "Sanar: A collaborative environment to support knowledge sharing with medical artifacts," in Collaborative Systems (SBSC), 2012 Brazilian Symposium on, 2012, pp. 35–42.
[19] J. Preece, Y. Rogers, and H. Sharp, Design de Interação. Bookman, 2005.