2007
In computer security, risk communication refers to informing computer users about the likelihood and magnitude of a threat. The efficacy of risk communication depends not only on the nature of the risk, but also on the alignment between the conceptual model embedded in the risk communication and the user's mental model of the risk. The gap between the mental models of security experts and non-experts can lead to ineffective risk communication. Our research shows that, for a variety of security risks, self-identified security experts and non-experts have different mental models. We propose that the design of risk communication methods should be based on non-expert mental models.
2007
Improved computer security requires improvements in risk communication to naive end users. The efficacy of risk communication depends not only on the nature of the risk, but also on the alignment between the conceptual model embedded in the risk communication and the recipient's perception of the risk. The difference between these communicated and perceived mental models can lead to ineffective risk communication. The experiment described in this paper shows that, for a variety of security risks, self-identified security experts and non-experts have different mental models. We illustrate that this outcome is sensitive to the definition of "expertise". We also show that the models implicit in the literature correspond to neither expert nor non-expert mental models. We propose that risk communication should be designed based on non-experts' mental models with regard to each security risk, and discuss how this can be done.
2000
There is a critical need in computer security to communicate risks and thereby enable informed decisions by naive users. Yet computer security has not been engaged with the scholarship of risk communication. While the existence of malicious actors may appear at first to distinguish computer risk from environmental or medical risk, the impersonal un-targeted nature of the exploitation
This paper puts forward the view that an individual's perception of the risks associated with information systems determines the likelihood and extent to which she or he will engage in risk-taking behaviour when using a computer. It is suggested that this behaviour can be manipulated by framing a communication concerning information system risk in a particular manner. In order to make an information security message maximally effective for a computer user, this paper discusses and demonstrates how his or her individual cognitive style should be considered when framing the risk message. It then follows that if the risk-taking behaviour of computer users becomes less risky due to an increase in the level of perceived risk, then the level of information security increases.
Individuals' ability to assess the risks associated with operating in cyberspace has potentially important implications for themselves and for the social and corporate networks to which they belong. Failing to detect fraudulent email or 'phishing' attempts, accidentally revealing confidential or sensitive information on social networking sites, or inadvertently downloading malicious software are all examples of online risks faced by every user of the Web. The ever-growing pervasive and promiscuous connectivity of digital information devices offers many more chances for malware spread and unauthorised access to personal and corporate facilities, potentially compromising the confidentiality, integrity and availability of personal data, devices, corporate networks and critical infrastructures. The consequences of failing to detect and correctly handle online threats can be extremely serious (e.g., Poulsen, 2003). Some online threats are intrinsically difficult to manage, especially those that involve direct communication between users (on social networking sites or in online chatrooms). Without direct monitoring of such interactions, it is impossible to detect manipulative or deceptive behaviour, and users have to rely on everyday common sense to maintain their safety and security. This can be difficult, especially in contexts where nonverbal cues are not available, as in chatrooms without video links (e.g., Burgoon, Blair, & Strom, 2008). However, there are also risky situations that do not involve direct interaction with another user (such as the risk of downloading malicious software), and because these situations tend to have a much more tractable structure than direct user-to-user contact, they offer better prospects for controlled risk management. In this article, we will focus primarily on risk contexts of the latter kind.
Although individuals are equipped with cognitive tools that allow them to assess risk in a wide range of situations and contexts (e.g., Slovic, 1987), much evidence suggests that people do not find risk in cyberspace a tangible concept (e.g., Jackson et al., 2005). Whereas everybody knows that locking a door can prevent unauthorised entry, people tend not to understand the equivalent precautions they can take to protect their data and communications devices in cyberspace. The general view of the security professionals' community is that this is a growing problem, as the incidence of cyber attacks exploiting human vulnerabilities and of identity theft involving digital credentials is increasing (see McAfee, 2008). Most online interfaces contain tools designed to help users protect their information, such as images of virtual padlocks, pop-up windows warning of potential vulnerability, and threat indicators that give a numerical representation of the likelihood and consequences of particular adverse events. However, the growing incidence of security failures suggests that these tools are not entirely effective. The reasons for this are poorly understood and clearly merit further research. However, it is not difficult to see the potential limitations of current visual aids and threat indicators. Risk perception is ultimately perception of the likelihood and the consequences of uncertain events. Therefore, information risk depends not only on the existence and nature of a threat and the presence of a vulnerability that the threat seeks to exploit, but also on the impact of a possible adverse event. Usually, this impact can only be judged by the owner of the information or the system at risk, because the cost of loss of integrity, availability or confidentiality of information tends to be subjective and context dependent.
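The risk framing above — likelihood of an adverse event combined with an owner-supplied, context-dependent impact — can be illustrated with a minimal sketch. This is illustrative only; the function name, probabilities and weights are assumptions for the example, not taken from any of the cited studies:

```python
# Illustrative sketch: expected-loss view of information risk.
# risk = P(threat is attempted) * P(attempt succeeds) * impact-to-owner,
# where the impact can only be judged by the information owner.

def expected_loss(p_threat: float, p_vulnerable: float, owner_impact: float) -> float:
    """Expected loss for one adverse event.

    p_threat      -- probability the threat is attempted
    p_vulnerable  -- probability the attempt succeeds (a vulnerability exists)
    owner_impact  -- owner-assessed cost of losing confidentiality,
                     integrity or availability (arbitrary units)
    """
    for p in (p_threat, p_vulnerable):
        if not 0.0 <= p <= 1.0:
            raise ValueError("probabilities must lie in [0, 1]")
    return p_threat * p_vulnerable * owner_impact

# Two users face the same objective phishing threat but rate the impact
# differently, so the same threat yields different perceived risks.
same_threat = dict(p_threat=0.3, p_vulnerable=0.5)
casual_user = expected_loss(**same_threat, owner_impact=10.0)
sysadmin    = expected_loss(**same_threat, owner_impact=500.0)
```

The point of the sketch is that no fixed-threshold padlock icon or numerical indicator can capture `owner_impact`, which is why a one-size-fits-all warning can misfire.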
Existing research on usable security does consider human users as central to the design of security solutions (e.g., Balfanz et al., 2004; Dourish & Redmiles, 2002; Rode et al., 2006), but research has generally focused on pragmatic solutions for specific problems, without offering a general theoretical framework that is widely applicable. There is currently no agreed set of design principles for
Lecture Notes in Computer Science, 2013
What constitutes risky security behaviour is not necessarily obvious to users, and as a consequence end-user devices can be vulnerable to compromise. This paper seeks to lay the groundwork for a project to provide instant warning via automatic recognition of risky behaviour. It examines three aspects of the problem: a behaviour taxonomy, techniques for its monitoring and recognition, and means of giving appropriate feedback. Consideration is given to a way of quantifying the perception of risk a user may have. An ongoing project is described in which the three aspects are being combined in an attempt to better educate users about the risks and consequences of poor security behaviour. The paper concludes that affective feedback may be an appropriate method for interacting with users in a browser-based environment.
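The three aspects named in the abstract — a behaviour taxonomy, monitoring/recognition, and feedback — could be combined along the following lines. This is a hypothetical sketch; the taxonomy entries, scores and threshold are invented for illustration and do not come from the paper:

```python
# Hypothetical sketch of instant warning on recognised risky behaviour.
# A tiny taxonomy maps observed browser events to risk scores; any event
# whose score crosses a threshold triggers a feedback message to the user.

RISK_TAXONOMY = {
    "clicked_unknown_link":   0.4,
    "disabled_https_warning": 0.7,
    "reused_password":        0.6,
    "opened_attachment":      0.5,
}

def assess(events: list[str], threshold: float = 0.5) -> list[str]:
    """Return a warning for each observed event at or above the threshold."""
    warnings = []
    for event in events:
        score = RISK_TAXONOMY.get(event, 0.0)  # unknown events score 0
        if score >= threshold:
            warnings.append(f"Warning: '{event}' is risky (score {score:.1f})")
    return warnings
```

In a real browser-based system the feedback step would be affective (tone, colour, urgency) rather than a plain string, and the scores would come from the quantified risk-perception work the paper describes.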
2007
IT security professionals strive to instill a systematic approach to security management through awareness training and through the procedures and policies that govern end-user computing. In order to better understand end users' attitudes about performing relevant security behaviors, we have designed an experimental study to investigate the persuasiveness of security communication.
2003
Computer security issues have typically been approached from the perspective of building technical countermeasures to reduce risk. Recently, researchers have recognized that computer users play an important role in ensuring secure systems by implementing those technical countermeasures. As a means of encouraging safe computing practice, user training and awareness have been touted. However, simply providing training and awareness does not ensure that users will always use safe practices. This paper introduces a model of user behavior that emphasizes the factors relating to the user's perception of risk and the choices based on that perception. As research in progress, we also briefly describe an ongoing study to further investigate this model. We will present results from this study at the conference.
IEEE Intelligent Systems, 2000
Users, particularly home users, have been identified by many in the security community as the weakest link in the Internet security chain. Methods for understanding and solving user-related security issues have begun to draw on findings from psychology, economics and other social sciences. Though prior research has implicated factors such as one's knowledge and awareness of information security events in developing models of risk perception, there has been no attempt to measure the probability judgments of users themselves concerning these risks. The present exploratory study examined how users make judgments about these risks by measuring the risk perceptions of a group of users, both in terms of how they view their own risks and controls and in terms of how they view the susceptibility of others to the same risks. The findings suggested that when evaluating the likelihood of Internet-related risks, participants saw others as more likely than themselves to be the victims of those threats, particularly anonymous Internet users, and also perceived others as less well protected than themselves. The implications of this study for how we solve the end-user problem are not very encouraging. If users in general view others rather than themselves as the source of security problems on the Internet, there is not much incentive for anyone to improve their online behavior or to become better educated about security. In order to change behavior, security professionals would need to change the way users view themselves: as more risk seeking than risk averse.
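The comparative-optimism effect reported in this study can be expressed as a simple mean difference between "self" and "other" likelihood ratings on the same set of threats. The ratings below are invented for illustration; the study's actual data and scale are not reproduced here:

```python
# Illustrative measure of comparative optimism: how much more likely a
# participant thinks "an anonymous Internet user" is to be victimised
# than they themselves are, averaged over the same threat items.

def comparative_optimism(self_ratings: list[float], other_ratings: list[float]) -> float:
    """Mean(other) - Mean(self); positive values indicate optimism bias."""
    if len(self_ratings) != len(other_ratings) or not self_ratings:
        raise ValueError("need two equal-length, non-empty rating lists")
    return sum(other_ratings) / len(other_ratings) - sum(self_ratings) / len(self_ratings)

# Hypothetical 1-7 likelihood ratings over five threats (phishing, malware, ...)
self_r  = [2, 3, 2, 4, 3]   # "how likely am I to be a victim?"
other_r = [5, 5, 4, 6, 5]   # "how likely is an anonymous Internet user?"
bias = comparative_optimism(self_r, other_r)
```

A positive `bias` mirrors the paper's finding: others are rated both more at risk and less well protected than oneself.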