Corba: Crowdsourcing to Obtain Requirements from Regulations and Breaches
Context: Modern software systems are deployed in sociotechnical settings that combine social entities (humans and organizations) with technical entities (software and devices). In such settings, in addition to the technical controls that implement a system's security features, regulations specify how users should behave in security-critical situations. No matter how carefully the software is designed and how well regulations are enforced, such systems are subject to breaches arising from both social factors (user misuse) and technical factors (software vulnerabilities). Breach reports, which are often legally mandated, describe what went wrong during a breach and how it was remedied. In current practice, however, breach reports are not systematically analyzed, so valuable lessons about past failures are lost.
Objective: Our research aim is to aid security analysts and software developers in obtaining a set of legal, security, and privacy requirements, by developing a crowdsourcing methodology to extract knowledge from regulations and breach reports.
Method: We present Corba, a methodology that leverages human intelligence via crowdsourcing to extract requirements, expressed as regulatory norms, from textual artifacts. We evaluate Corba on US healthcare regulations from the Health Insurance Portability and Accountability Act (HIPAA) and on breach reports published by the US Department of Health and Human Services (HHS). Following this methodology, we conducted a pilot study and a final study on the Amazon Mechanical Turk crowdsourcing platform.
Results: Corba yields high quality responses from crowd workers, which we analyze to identify requirements for the purpose of complementing HIPAA regulations. We publish a curated dataset of the worker responses and identified requirements.
Conclusions: The results show that the instructions and question formats presented to crowd workers significantly affect the quality of their responses with respect to identifying requirements. By revising the instructions and question formats, we observed a significant improvement from the pilot to the final study. Other factors, such as worker type, breach type, or report length, do not have a notable effect on worker performance. Moreover, we discuss further potential improvements, such as restructuring breach reports and highlighting text with automated methods.
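The abstract does not specify how Corba consolidates the workers' responses into identified requirements. As a purely hypothetical illustration (the function name, data shape, and labels below are assumptions, not part of the paper), one simple way to aggregate multiple workers' judgments per text fragment is majority voting with an agreement ratio:

```python
from collections import Counter

def aggregate_labels(responses):
    """Majority-vote aggregation of crowd-worker labels per item.

    responses: dict mapping an item id (e.g., a sentence in a breach
    report) to the list of labels assigned by individual workers.
    Returns a dict mapping each item id to (winning label, agreement),
    where agreement is the fraction of workers who chose that label.
    """
    aggregated = {}
    for item, labels in responses.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        aggregated[item] = (label, votes / len(labels))
    return aggregated

# Hypothetical example: three workers judge whether a breach-report
# sentence expresses a requirement.
responses = {
    "report-17/sent-3": ["requirement", "requirement", "not-requirement"],
    "report-17/sent-4": ["not-requirement"] * 3,
}
print(aggregate_labels(responses))
```

Items with low agreement could then be flagged for expert review rather than accepted automatically; this is one common pattern in crowdsourced annotation pipelines, not necessarily the one the authors used.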
Session: Fri 10 Jul, 16:05 - 17:05 (UTC)

- Caspar: Extracting and Synthesizing User Stories of Problems from App Reviews (Technical)
- Dealing with Non-Functional Requirements in Model-Driven Development: A Survey (Journal-first)
  David Ameller (Universitat Politècnica de Catalunya), Xavier Franch (Universitat Politècnica de Catalunya), Cristina Gómez (Universitat Politècnica de Catalunya), Silverio Martínez-Fernández (UPC-BarcelonaTech), João Araújo (Universidade Nova de Lisboa), Stefan Biffl (Vienna University of Technology), Jordi Cabot (ICREA - UOC), Vittorio Cortellessa (University of L'Aquila), Daniel Mendez (Technische Universität München), Ana Moreira (FCT / Universidade Nova de Lisboa), Henry Muccini (University of L'Aquila), Antonio Vallecillo (University of Málaga), Manuel Wimmer (Johannes Kepler University Linz), Vasco Amaral (Universidade Nova de Lisboa), Wolfgang Böhm (Technische Universität München), Hugo Brunelière (Inria, Mines Nantes & LINA), Loli Burgueño (Universidad de Málaga), Miguel Goulão (NOVA-LINCS, FCT/UNL), Sabine Teufl (Fortiss GmbH), Luca Berardinelli (Johannes Kepler University Linz)
- Locating Latent Design Information in Developer Discussions: A Study on Pull Requests (Journal-first)
- Status Quo in Requirements Engineering: A Theory and a Global Family of Surveys (Journal-first)
  Stefan Wagner (University of Stuttgart)
- Corba: Crowdsourcing to Obtain Requirements from Regulations and Breaches (Journal-first)
- With Registered Reports Towards Large Scale Data Curation (New Ideas and Emerging Results)
  Steffen Herbold (University of Göttingen)