Manufacturers of automated systems and their components have been allocating an enormous amount of time and effort to R&D activities, which has led to prototypes demonstrating new capabilities as well as to the introduction of such systems to the market in different domains. Manufacturers need to make sure that these systems function in the intended way and according to specifications. This is not a trivial task, as system complexity rises dramatically the more integrated and interconnected these systems become through the addition of automated functionality and features. This effort translates into an overhead on the V&V (verification and validation) process, making it time-consuming and costly. In this paper, we present VALU3S, an ECSEL JU (joint undertaking) project that aims to evaluate state-of-the-art V&V methods and tools and to design a multi-domain framework that creates a clear structure around the components and elements needed to conduct the V&V process. The main expected benefit of the framework is to reduce the time and cost needed to verify and validate automated systems with respect to safety, cyber-security, and privacy requirements. This is done through the identification and classification of evaluation methods, tools, environments, and concepts for V&V of automated systems with respect to these requirements. VALU3S will provide guidelines to the V&V community, including engineers and researchers, on how the V&V of automated systems can be improved while considering the cost, time, and effort of conducting V&V processes. To this end, VALU3S brings together a consortium with partners from 10 different countries, comprising 25 industrial partners, 6 leading research institutes, and 10 universities, to reach the project goal.
Verification and Validation (V&V) of automated systems is becoming more costly and time-consuming because of the increasing size and complexity of these systems. Moreover, V&V of these systems can be hindered if the methods and processes are not properly described, analysed, and selected. It is essential that practitioners use suitable V&V methods and enact adequate V&V processes to confirm, in a cost-effective manner, that these systems work as intended. Previous works have created different taxonomies and models considering different aspects of V&V that can be used to classify V&V methods and tools. The aim of this work is to provide a broad, comprehensive, and easy-to-use framework that addresses characterisation needs, rather than focusing on individual aspects of V&V methods and processes. To this end, in this paper, we present a multi-domain and multi-dimensional framework to characterise and classify V&V methods and tools in a structured way. The framework considers a comprehensive characterisation of different relevant aspects of V&V. A web-based repository has been implemented on the basis of the framework, as an example of use, in order to collect information about the application of V&V methods and tools. This way, practitioners and researchers can easily learn about and identify suitable V&V processes.
This paper presents the results of an extensive experimental study of bit-flip errors in instruction set architecture registers and main memory locations. Comprising more than two million fault injection experiments conducted with thirteen benchmark programs, the study provides insights into whether it is necessary to consider double bit-flip errors in dependability benchmarking experiments. The results show that the proportion of silent data corruptions in the program output is almost the same for single and double bit errors. In addition, we present detailed statistics about the error sensitivity of different target registers and memory locations, including bit positions within registers and memory words. These show that the error sensitivity varies significantly between different bit positions and registers. An important observation is that injections in certain bit positions always have the same impact regardless of when the error is injected.
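As a rough illustration of the fault model used in such studies (not the study's own tooling), the following sketch flips one or two randomly chosen bits in a 32-bit word, which is essentially what a single or double bit-flip injection into a register or memory location amounts to:

```python
import random

def flip_bits(value: int, num_bits: int, width: int = 32) -> int:
    """Flip `num_bits` distinct, randomly chosen bit positions in a
    `width`-bit value, emulating a single (num_bits=1) or double
    (num_bits=2) bit-flip error in a register or memory word."""
    positions = random.sample(range(width), num_bits)
    for pos in positions:
        value ^= 1 << pos
    return value & ((1 << width) - 1)

# Example: inject a double bit-flip into a 32-bit register value.
original = 0x0000_00FF
corrupted = flip_bits(original, num_bits=2)
print(f"{original:#010x} -> {corrupted:#010x}")
```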
Manufacturers of automated systems and their components have been allocating an enormous amount of time and effort to R&D activities. This effort translates into an overhead on the V&V (verification and validation) process, making it time-consuming and costly. In this paper, we present an ECSEL JU project (VALU3S) that aims to evaluate state-of-the-art V&V methods and tools and to design a multi-domain framework that creates a clear structure around the components and elements needed to conduct the V&V process. The main expected benefit of the framework is to reduce the time and cost needed to verify and validate automated systems with respect to safety, cyber-security, and privacy requirements. This is done through the identification and classification of evaluation methods, tools, environments, and concepts for V&V of automated systems with respect to these requirements. To this end, VALU3S brings together a consortium with partners from 10 different countries, comprising 25 industrial partners, 6 leading research institutes, and 10 universities, to reach the project goal.
The complexity of systems continues to increase rapidly, especially due to the multi-level integration of subsystems from different domains into cyber-physical systems. This results in special challenges for the efficient verification and validation (V&V) of these systems with regard to their requirements and properties. In order to tackle these challenges and improve quality assurance processes, the V&V workflows have to be documented and analyzed. In this paper, a novel approach for the workflow modelling of V&V activities is presented. The generic approach is tailorable to different industrial domains and their specific constraints, V&V methods, and toolchains. The outcomes comprise a dedicated modelling notation (VVML) and tool support using the modelling framework Enterprise Architect for the efficient documentation and implementation of workflows in the use cases. The solution enables the design of reusable workflow assets such as V&V activities and artifacts that are exchanged between workflows. This work is part of the large-scale European research project VALU3S, which deals with the improvement and evaluation of V&V processes in different technical domains, focusing on safety, cybersecurity, and privacy properties.
As our dependence on automated systems grows, so does the need for guaranteeing their safety, cybersecurity, and privacy (SCP). Dedicated methods for verification and validation (V&V) must be used to this end and it is necessary that the methods and their characteristics can be clearly differentiated. This can be achieved via method classifications. However, we have experienced that existing classifications are not suitable to categorise V&V methods for SCP of automated systems. They do not pay enough attention to the distinguishing characteristics of this system type and of these quality concerns. As a solution, we present a new classification developed in the scope of a large-scale industry-academia project. The classification considers both the method type, e.g., testing, and the concern addressed, e.g., safety. Over 70 people have successfully used the classification on 53 methods. We argue that the classification is a more suitable means to categorise V&V methods for SCP of automated systems and that it can help other researchers and practitioners.
Technology scaling of integrated circuits is making transistors increasingly sensitive to process variations, wear-out effects and ionizing particles. This may lead to an increasing rate of transient and intermittent errors in future microprocessors. In order to assess the risk such errors pose to safety-critical systems, it is essential to investigate how temporary errors in the instruction set architecture (ISA) registers and main memory locations influence the behaviour of executing programs. To this end, we investigate, by means of extensive fault injection experiments, how such errors affect the execution of four target programs. The paper makes three contributions. First, we investigate how the failure modes of the target programs vary for different input sets. Second, we evaluate the error coverage of a software-implemented hardware fault tolerance technique that relies on triple-time redundant execution, majority voting and forward recovery. Third, we propose an approach based on assembly language metrics which can be used to correlate the dynamic fault-free behaviour of a program with its failure mode distribution obtained by fault injection.
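To make the fault tolerance mechanism concrete, here is a minimal sketch of triple-time redundant execution with majority voting and forward recovery; it illustrates the general technique, not the paper's actual implementation:

```python
from collections import Counter

def ttr_fr(compute, x):
    """Triple-time-redundant execution with majority voting and forward
    recovery: run the computation three times (sequentially in time),
    vote on the results, and carry the majority value forward so that a
    single corrupted run is masked."""
    results = [compute(x) for _ in range(3)]   # three temporally separated runs
    winner, votes = Counter(results).most_common(1)[0]
    if votes >= 2:
        return winner          # forward recovery: adopt the majority value
    raise RuntimeError("no majority: error detected but not recoverable")

# Example with a fault-free computation.
print(ttr_fr(lambda v: v * v, 12))   # 144
```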
Reasoning about safety, security, and other dependability attributes of autonomous systems is a challenge that needs to be addressed before the adoption of such systems in day-to-day life. Formal methods are a class of techniques that reason mathematically about a system's behavior; thus, a correctness proof can be sufficient to establish the system's dependability. However, these methods are usually applied to abstract models of the system, which might not fully represent the actual system. Fault injection, on the other hand, is a testing method for evaluating the dependability of systems. However, the amount of testing required to evaluate the system is rather large and often a problem. This vision paper introduces formal fault injection, a fusion of these two techniques throughout the development lifecycle to enhance the dependability of autonomous systems. We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection. By forging stronger ties between the two fields, we pave the way for developing safe and dependable autonomous systems. This paper delves into the integration's potential and outlines future research avenues, addressing open challenges along the way.
As society increasingly relies on safety- and security-critical systems, the need for confirming their dependability becomes essential. Adequate V&V (verification and validation) methods must be employed, e.g., for system testing. When selecting and using the methods, it is important to analyze their possible gaps and limitations, such as scalability issues. However, and as we have experienced, common, explicitly defined criteria are seldom used for such analyses. This results in analyses that consider different aspects, and to different extents, hindering their comparison and thus the comparison of the V&V methods. As a solution, we present a set of criteria for the analysis of gaps and limitations of V&V methods for safety- and security-critical systems. The criteria have been identified in the scope of the VALU3S project. Sixty-two people from 33 organizations agreed upon the use of nine criteria: functionality, accuracy, scalability, deployment, learning curve, automation, reference environment, cost, and standards. Their use led to more homogeneous and more detailed analyses when compared to similar previous efforts. We argue that the proposed criteria can be helpful to others when having to deal with similar activities.
Fault and attack injection are techniques used to measure dependability attributes of computer systems. An important property of such injectors is their efficiency, which concerns the time and effort needed to explore the target system's fault or attack space. As this space is generally very large, techniques such as pre-injection analyses are used to explore it effectively. In this paper, we study two such techniques that have been proposed in the past, namely inject-on-read and inject-on-write. Moreover, we propose a new technique called error space pruning of signals and evaluate its efficiency in reducing the space that needs to be explored by fault and attack injection experiments. We implemented and integrated these techniques into MODIFI, a model-implemented fault and attack injector, which has been effectively used in the past to evaluate Simulink models in the presence of faults and attacks. To the best of our knowledge, we are the first to integrate these pre-injection analysis techniques into an injector that injects faults and attacks into Simulink models. The results of our evaluation on 11 vehicular Simulink models show that error space pruning of signals reduces the attack space by about 30–43%, hence allowing the attack space to be explored with fewer attack injection experiments. Using MODIFI, we then performed attack injection experiments on two of these vehicular Simulink models, a comfort control model and a brake-by-wire model, and elaborate on the results obtained.
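The distinction between the two pre-injection techniques can be illustrated on a toy access trace; the trace, signal names, and pruning rule below are illustrative assumptions and do not reproduce MODIFI's actual analysis:

```python
# Hypothetical access trace: (time, signal, access) with access in {"R", "W"}.
trace = [
    (0, "speed_ref", "W"), (1, "speed_ref", "R"),
    (2, "brake_cmd", "W"), (3, "brake_cmd", "W"),  # first write overwritten before any read
    (4, "brake_cmd", "R"), (5, "debug_sig", "W"),  # debug_sig is never read at all
]

def inject_points(trace, mode):
    """Select candidate injection points for inject-on-read ("R") or
    inject-on-write ("W").  For inject-on-write, a corrupted value only
    matters if it is read before being overwritten, so writes whose next
    access is another write (and signals that are never read) can be
    pruned from the fault/attack space."""
    points = []
    for i, (t, sig, acc) in enumerate(trace):
        if acc != mode:
            continue
        if mode == "R":
            points.append((t, sig))
            continue
        nxt = next((a for _, s, a in trace[i + 1:] if s == sig), None)
        if nxt == "R":
            points.append((t, sig))
    return points

print(inject_points(trace, "R"))  # inject just before each read
print(inject_points(trace, "W"))  # inject only after writes whose value is later read
```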
Context: Container-based virtualization is gaining popularity in different domains, as it supports continuous development and improves the efficiency and reliability of run-time environments. Problem: Different techniques have been proposed for monitoring the security of containers. However, there are no guidelines supporting the selection of suitable techniques for the tasks at hand. Objective: We aim to support the selection and design of techniques for monitoring container-based virtualization environments. Approach: First, we review the literature and identify techniques for monitoring containerized environments. Second, we classify these techniques according to a set of categories, such as technical characteristics, applicability, effectiveness, and evaluation. We further detail the pros and cons associated with each of the identified techniques. Result: As a result, we present CONSERVE, a multi-dimensional decision support framework for an informed and optimal selection of a suitable set of container monitoring techniques to be implemented in different application domains. Evaluation: A mix of eighteen researchers and practitioners evaluated the ease of use, understandability, usefulness, efficiency, applicability, and completeness of the framework. The evaluation shows a high level of interest and points to potential benefits.
As vehicles become more and more connected with their surroundings and utilize an increasing number of services, they also become more exposed to threats as the attack surface increases. With increasing attack surfaces and continuing challenges in eliminating vulnerabilities, vehicles need to be designed to work even under malicious activities, i.e., under attacks. In this paper, we present a resilience framework that integrates the analysis of safety and cybersecurity mechanisms. We also integrate resilience for safety and cybersecurity into the fault-error-failure chain. The framework is useful for analyzing the propagation of faults and attacks between different system layers. This facilitates the identification of adequate resilience mechanisms at different system layers as well as the derivation of suitable test cases for the verification and validation of system resilience using fault and attack injection.
Safety-critical systems are required to comply with safety standards as well as security and privacy standards. In order to provide insights into how practitioners apply the standards on safety, security or privacy (Sa/Se/Pr), as well as how they employ Sa/Se/Pr analysis methodologies and software tools to meet such criteria, we conducted a questionnaire-based survey. This paper summarizes our major analysis results of the received responses.
This paper presents CarFASE, an open-source CARLA-based fault and attack simulation engine that is used to test and evaluate the behavior of autonomous driving stacks in the presence of faults and attacks. CARLA is a highly customizable and adaptable simulator for autonomous driving research. In this paper, we demonstrate the application of CarFASE by running fault injection experiments on OpenPilot, an open-source advanced driver assistance system designed to provide features such as lane keeping, adaptive cruise control, and forward collision warning to enhance the driving experience. A braking scenario is used to study the behavior of OpenPilot in the presence of brightness and salt-and-pepper image faults. The results demonstrate the usefulness of the tool in evaluating the safety attributes of autonomous driving systems in a safe and controlled environment.
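For illustration only (CarFASE's own fault models are not reproduced here), the two image-level faults mentioned above could be emulated on a grayscale camera frame roughly as follows:

```python
import numpy as np

def inject_brightness(img: np.ndarray, offset: int = 60) -> np.ndarray:
    """Shift every pixel by a constant offset, emulating a brightness fault."""
    return np.clip(img.astype(np.int16) + offset, 0, 255).astype(np.uint8)

def inject_salt_and_pepper(img: np.ndarray, ratio: float = 0.02) -> np.ndarray:
    """Set a random fraction of pixels to pure black or white (salt & pepper)."""
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2]) < ratio
    vals = np.where(np.random.rand(mask.sum()) < 0.5, 0, 255).astype(np.uint8)
    noisy[mask] = vals
    return noisy

# Example on a synthetic 64x64 grayscale frame.
frame = np.full((64, 64), 128, dtype=np.uint8)
bright = inject_brightness(frame)
noisy = inject_salt_and_pepper(frame)
```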
In this work, we evaluate the safety of a platoon of four vehicles under jamming attacks. The platooning application is provided by Plexe-veins, which is a cooperative driving framework, and the vehicles in the platoon are equipped with cooperative adaptive cruise control controllers to represent the vehicles' behavior. The jamming attacks investigated are modeled by extending ComFASE (a Communication Fault and Attack Simulation Engine) and represent three real-world attacks, namely, destructive interference, barrage jamming, and deceptive jamming. The attacks are injected in the physical layer of the IEEE 802.11p communication protocol simulated in Veins (a vehicular network simulator). To evaluate the safety implications of the injected attacks, the experimental results are classified by using the deceleration profiles and collision incidents of the vehicles. The results of our experiments show that jamming attacks on the communication can jeopardize vehicle safety, causing emergency braking and collision incidents. Moreover, we describe the impact of different attack injection parameters (such as attack start time, attack duration and attack value) on the behavior of the vehicles subjected to the attacks.
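A coarse, simulator-independent sketch of how such a jamming window can be parameterized (attack type, start time, duration, value) is given below; the actual attacks are injected in the simulated IEEE 802.11p physical layer, which this toy model does not capture:

```python
import random

def apply_jamming(frames, attack_type, start, duration, value=None):
    """Very coarse model of a jamming window over a channel trace.
    frames: list of (timestamp, payload) tuples sent on the channel.
    Inside [start, start + duration):
      - "barrage":      every frame is lost,
      - "interference": each frame is lost with probability `value`,
      - "deceptive":    payloads are replaced with the forged value `value`."""
    received = []
    for t, payload in frames:
        in_window = start <= t < start + duration
        if in_window and attack_type == "barrage":
            continue
        if in_window and attack_type == "interference" and random.random() < value:
            continue
        if in_window and attack_type == "deceptive":
            payload = value
        received.append((t, payload))
    return received

# Example: a platooning beacon every 0.1 s carrying the leader's acceleration.
beacons = [(round(0.1 * i, 1), -0.5) for i in range(50)]
print(len(apply_jamming(beacons, "barrage", start=2.0, duration=1.0)))  # 40 beacons survive
```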
Embedded electronic systems used in vehicles are becoming more exposed and thus vulnerable to different types of faults and cybersecurity attacks. Examples of these systems are advanced driver assistance systems (ADAS) used in vehicles with different levels of automation. Failures in these systems could have severe consequences, such as loss of lives and environmental damages. Therefore, these systems should be thoroughly evaluated during different stages of product development. An effective way of evaluating these systems is through the injection of faults and monitoring their impacts on these systems. In this paper, we present SUFI, a simulation-based fault injector that is capable of injecting faults into ADAS features simulated in SUMO (simulation of urban mobility). Simulation-based fault injection is usually used at early stages of product development, especially when the target hardware is not yet available. Using SUFI we target car-following and lane-changing features of ADAS modelled in SUMO. The results of the fault injection experiments show the effectiveness of SUFI in revealing the weaknesses of these models when targeted by faults and attacks.
Embedded electronic systems used in vehicles are becoming more exposed and thus vulnerable to different types of faults and cybersecurity attacks. Examples of these systems are advanced driver assistance systems (ADAS) used in vehicles with different levels of automation. Failures in these systems could have severe consequences, such as loss of lives and environmental damages. Therefore, these systems should be thoroughly evaluated during different stages of product development. An effective way of evaluating these systems is through the injection of faults and monitoring their impacts on these systems. In this paper, we present SUFI, a simulation-based fault injector that is capable of injecting faults into ADAS features simulated in SUMO (simulation of urban mobility) and analyse the impact of the injected faults on the entire traffic. Simulation-based fault injection is usually used at early stages of product development, especially when the target hardware is not yet available. Using SUFI we target car-following and lane-changing features of ADAS modelled in SUMO. The results of the fault injection experiments show the effectiveness of SUFI in revealing the weaknesses of these models when targeted by faults and attacks.
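As a rough sketch of the idea behind simulation-based fault injection into SUMO models (not SUFI itself), SUMO's TraCI Python API can be used to corrupt a car-following parameter of a vehicle during a chosen time window; the configuration file name, vehicle id, and fault model below are assumptions for illustration:

```python
import traci  # SUMO's TraCI Python API (shipped with SUMO)

SUMO_CMD = ["sumo", "-c", "scenario.sumocfg"]        # assumed SUMO configuration
TARGET, FAULT_START, FAULT_END = "ego", 50.0, 80.0   # assumed vehicle id / fault window

traci.start(SUMO_CMD)
nominal_tau = None
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    t = traci.simulation.getTime()
    if TARGET in traci.vehicle.getIDList():
        if nominal_tau is None:
            nominal_tau = traci.vehicle.getTau(TARGET)
        # Fault: corrupt the car-following headway parameter (tau) during the window.
        if FAULT_START <= t < FAULT_END:
            traci.vehicle.setTau(TARGET, 0.1)          # dangerously short headway
        else:
            traci.vehicle.setTau(TARGET, nominal_tau)  # restore nominal behaviour
traci.close()
```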
A remotely operated road vehicle (RORV) refers to a vehicle operated wirelessly from a remote location. In this paper, we report results from an evaluation of two safety mechanisms: safe braking and disconnection. These safety mechanisms are included in the control software for RORV developed by Roboauto, an intelligent mobility solutions provider. The safety mechanisms monitor the communication system to detect packet transmission delays, lost messages, and outages caused by naturally occurring interference as well as denial-of-service (DoS) attacks. When the delay in the communication channel exceeds certain threshold values, the safety mechanisms are to initiate control actions to reduce the vehicle speed or stop the affected vehicle safely as soon as possible. To evaluate the effectiveness of the safety mechanisms, we exposed the vehicle control software to various communication failures using a software-in-the-loop (SIL) testing environment developed specifically for this study. Our results show that the safety mechanisms behaved correctly for a vast majority of the simulated communication failures. However, in a few cases, we noted that the safety mechanisms were triggered incorrectly, either too early or too late, according to the system specification.
This paper presents ComFASE, a communication fault and attack simulation engine. ComFASE is used to identify and evaluate potentially dangerous behaviours of interconnected automated vehicles in the presence of faults and attacks in wireless vehicular networks. ComFASE is built on top of OMNeT++ (a network simulator) and integrates SUMO (a traffic simulator) and Veins (a vehicular network simulator). The tool is flexible in modelling different types of faults and attacks and can be effectively used to study the interplay between safety and cybersecurity attributes by injecting cybersecurity attacks and evaluating their safety implications. To demonstrate the tool, we present results from a series of simulation experiments, where we injected delay and denial-of-service attacks on wireless messages exchanged between vehicles in a platooning application. The results show how different variants of attacks influence the platooning system in terms of collision incidents.
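Purely for illustration (ComFASE performs the injection inside the OMNeT++/Veins simulation), the effect of a delay attack and a denial-of-service attack on a stream of platooning beacons can be sketched on a plain message trace as follows:

```python
def delay_attack(messages, start, duration, delay):
    """Add `delay` seconds to every message sent inside the attack window,
    then re-sort by arrival time to reflect possible reordering."""
    return sorted(
        ((t + delay if start <= t < start + duration else t, payload)
         for t, payload in messages),
        key=lambda m: m[0],
    )

def dos_attack(messages, start, duration):
    """Drop every message sent inside the attack window (denial of service)."""
    return [(t, p) for t, p in messages if not (start <= t < start + duration)]

# Example: beacons every 0.1 s; delay beacons between t=5 s and t=6 s by 0.4 s.
beacons = [(round(0.1 * i, 1), {"speed": 20.0}) for i in range(100)]
delayed = delay_attack(beacons, start=5.0, duration=1.0, delay=0.4)
dropped = dos_attack(beacons, start=5.0, duration=1.0)
```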
ISA-level fault injection, i.e., the injection of bit-flip faults in Instruction Set Architecture (ISA) registers and main memory words, is widely used for studying the impact of transient and intermittent hardware faults in computer systems. This paper compares two techniques for ISA-level fault injection: inject-on-read and inject-on-write. The first technique injects bit-flips in a data-item (the content of a register or memory word) just before the data-item is read by a machine instruction, while the second one injects bit-flips in a data-item just after it has been updated by a machine instruction. In addition, the paper compares two variants of inject-on-read, one where all faults are given the same weight and one where weight factors are used to reflect the time a data-item spends in a register or memory word. The weighted inject-on-read aims to accurately model soft errors that occur when an ionizing particle perturbs a data-item while it resides in an ISA register or a memory word. This is in contrast to inject-on-write, which emulates errors that propagate into an ISA register or a memory word. Our experiments show significant differences in the results obtained with the three techniques.
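One plausible reading of the weighting scheme can be sketched on a toy access trace: each read becomes a candidate injection point whose weight reflects how long the value was exposed in the location. This illustrates the idea, not the paper's exact weighting:

```python
# Hypothetical access trace for one register: (time, access) with "W"/"R".
trace = [(0, "W"), (10, "R"), (12, "R"), (40, "W"), (90, "R")]

def weighted_read_points(trace):
    """Weighted inject-on-read (one plausible reading of the weighting):
    each read is a candidate injection point whose weight is the time the
    value has been exposed in the location since the previous access, so
    long-lived values attract proportionally more injections."""
    points, last_access = [], None
    for t, acc in trace:
        if acc == "R" and last_access is not None:
            points.append((t, t - last_access))   # (injection time, weight)
        last_access = t
    return points

print(weighted_read_points(trace))   # [(10, 10), (12, 2), (90, 50)]
```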
In this paper, we study the impact of compiler optimizations on the error sensitivity of twelve benchmark programs. We conducted extensive fault injection experiments where bit-flip errors were injected in instruction set architecture registers and main memory locations. The results show that the percentage of silent data corruptions (SDCs) in the output of the optimized programs is only marginally higher compared to that observed for the non-optimized programs. This suggests that compiler optimizations can be used in safety- and mission-critical systems without increasing the risk that the system produces undetected erroneous outputs. In addition, we investigate to what extent the source code implementation of a program affects its error sensitivity. To this end, we perform experiments with five implementations of a bit count algorithm. In this investigation, we consider the impact of the implementation as well as of compiler optimizations. The results of these experiments give valuable insights into how compiler optimizations can be used to reduce the error sensitivity of registers and main memory locations. They also show how sensitive locations requiring additional protection, e.g., by the use of software-based fault tolerance techniques, can be identified.
This paper presents an experimental study of the fault sensitivity of four programs included in the MiBench test suite. We investigate their fault sensitivity with respect to hardware faults that manifest as single bit flips in main memory locations and instruction set architecture registers. To this end, we have conducted extensive fault injection experiments with two versions of each program, one basic version and one where the program is equipped with software-implemented hardware fault tolerance (SIHFT) through triple time redundant execution, majority voting and forward recovery (TTR-FR). The results show that TTR-FR achieves an error coverage between 94.6% and 99.2%, while the non-fault-tolerant versions achieve an error coverage between 55.8% and 81.1%. To gain understanding of the origin of the non-covered faults, we provide statistics on the fault sensitivity of different source code blocks, physical fault locations (instruction set architecture registers and main memory words) and different categories of machine instructions.
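For reference, error coverage figures such as those above are typically derived from the outcome counts of a campaign; the sketch below uses one common definition (everything except silent data corruptions and timeouts counts as covered) and entirely made-up numbers:

```python
# Illustrative outcome counts from a fault injection campaign (made-up numbers).
outcomes = {
    "no_effect": 61_000,             # error was masked
    "detected_hw_exception": 25_000,
    "detected_sw_mechanism": 9_500,
    "corrected_by_voting": 3_000,    # e.g. masked by TTR-FR
    "silent_data_corruption": 1_200,
    "timeout": 300,
}

def error_coverage(outcomes):
    """One common definition of error coverage: the share of injected errors
    that did NOT escape as silent data corruptions or timeouts."""
    total = sum(outcomes.values())
    escaped = outcomes["silent_data_corruption"] + outcomes["timeout"]
    return 1 - escaped / total

print(f"coverage = {error_coverage(outcomes):.1%}")   # 98.5%
```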
Embedded electronic systems need to be equipped with different types of security mechanisms to protect themselves and to mitigate the effects of cybersecurity attacks. These mechanisms should be evaluated with respect to their impact on dependability and security attributes such as availability, reliability, and safety. The evaluation is of great importance as, for example, a security mechanism should never violate system safety. Therefore, in this paper, we evaluate a comprehensive set of security mechanisms consisting of 17 different types of mechanisms with respect to their impact on dependability and security attributes. The results show that, in general, the use of these mechanisms has a positive effect on system dependability and security. However, there are at least three mechanisms that could have negative impacts on system dependability by violating safety and availability requirements. The results support our claim that analyses such as the ones conducted in this paper are necessary when selecting and implementing an optimal set of safety and security mechanisms.
The combination of high mobility and wireless communication in many safety-critical systems has increased their exposure to malicious security threats. Consequently, many works in the past have proposed solutions to ensure the safety and security of these systems. However, not much attention has been given to the interplay between these two groups of non-functional requirements. This is a concern as safety solutions may negatively impact system security and vice versa. This paper addresses the interplay between safety and security by proposing an attack injection framework, based on model-implemented fault injection, suitable for model-based design. The framework enables us to study and evaluate the impact of cybersecurity attacks on system safety early in the development process. To this end, we have implemented six attack injection models and conducted experiments on Simulink models of a CAN bus and a brake-by-wire controller. The results show that the modeled security attacks could successfully impact system safety by violating our defined safety requirements.
ISA-level fault injection, i.e., the injection of bit-flip faults in Instruction Set Architecture (ISA) registers and main memory words, is widely used for studying the impact of transient and intermittent hardware faults. ISA-level fault injection tools can be characterized by different properties such as repeatability, observability, reachability, intrusiveness, efficiency, and controllability. This paper presents two pre-injection analysis techniques that improve controllability and efficiency using object code analysis. To improve controllability, we propose a technique for identifying the type of data that is stored in a potential target location. This allows the user to selectively direct fault injections to addresses, data, and/or control information. Experimental results show that the data type of 84-100% of the target locations in 8 programs was successfully identified by this technique. The second technique improves efficiency by fault pruning, i.e., by avoiding the injection of faults that are known a priori to be detected by the tested system. This technique leverages the fact that faults in certain bits in the program counter and the stack pointer are always detected by machine exceptions. We show that excluding these bits from the fault space can significantly prune the fault space and reduce the time it takes to conduct a fault injection campaign.
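The fault-pruning idea can be sketched as filtering a fault list against bit positions whose outcome is already known; the specific bit ranges below are illustrative assumptions, since the actual always-detected bits depend on the target's memory map:

```python
# Hypothetical fault list: (location, bit) pairs for two special registers,
# one general-purpose register, and one memory word.
fault_space = [(loc, bit)
               for loc in ("PC", "SP", "R3", "0x2000_0040")
               for bit in range(32)]

# Bits whose corruption is known a priori to raise a machine exception on the
# target (illustrative values -- the real bits depend on the memory map).
ALWAYS_DETECTED = {"PC": set(range(20, 32)), "SP": set(range(24, 32))}

def prune(fault_space):
    """Drop faults whose outcome is already known (always detected), so the
    remaining experiments only cover outcomes that must be measured."""
    return [(loc, bit) for loc, bit in fault_space
            if bit not in ALWAYS_DETECTED.get(loc, set())]

pruned = prune(fault_space)
print(len(fault_space), "->", len(pruned))   # 128 -> 108
```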
The quality of captured traffic plays an important role for decisions made by systems like intrusion detection/prevention systems (IDS/IPS) and firewalls. As these systems monitor network traffic to find malicious activities, a missing packet might lead to an incorrect decision. In this paper, we analyze the quality of packet-level traces collected on Internet backbone links using different generations of DAG cards. This is accomplished by inferring dropped packets introduced by the data collection system with the help of the intrinsic structural properties inherently provided by TCP traffic flows. We employ two metrics which we believe can detect all kinds of missing packets: i) packets with ACK numbers greater than the expected ACK, indicating that the communicating parties acknowledge a packet not present in the trace; and ii) packets with data beyond the receiver's window size, which, with high probability, indicates that the packet advertising the correct window size was not recorded. These heuristics have been applied to three large datasets collected with different hardware and in different environments. We also introduce flowstat, a tool developed for this purpose which is capable of analyzing both captured traces and real-time traffic. After assessing more than 400 traces (75M bidirectional flows), we conclude that at least 0.08% of the flows have missing packets, a surprisingly large number that can affect the quality of analysis performed by firewalls and intrusion detection/prevention systems. The paper concludes with an investigation and discussion of the spatial and temporal aspects of the experienced packet losses and possible reasons behind missing data in traces.
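The two heuristics can be sketched as a simple pass over a captured TCP flow; the packet representation below is a simplification for illustration and does not reproduce flowstat itself:

```python
def find_capture_gaps(packets):
    """Flag evidence of packets missing from a capture of one TCP flow.
    packets: list of dicts with keys src, dst, seq, length, ack, win.
    Heuristic 1: an ACK that acknowledges bytes never seen in the trace.
    Heuristic 2: data sent beyond the peer's last advertised window."""
    highest_seen = {}   # per sender: highest seq + payload length observed
    window_limit = {}   # per sender: last (ack + win) advertised by the peer
    gaps = []
    for i, p in enumerate(packets):
        src, dst = p["src"], p["dst"]
        # Heuristic 1: receiver acknowledges data we never captured.
        if p["ack"] > highest_seen.get(dst, p["ack"]):
            gaps.append((i, "ack beyond captured data"))
        # Heuristic 2: sender goes past the receiver's advertised window.
        if p["length"] and p["seq"] + p["length"] > window_limit.get(src, float("inf")):
            gaps.append((i, "data beyond advertised window"))
        highest_seen[src] = max(highest_seen.get(src, 0), p["seq"] + p["length"])
        window_limit[dst] = p["ack"] + p["win"]
    return gaps

pkts = [
    {"src": "A", "dst": "B", "seq": 1,    "length": 1000, "ack": 1,    "win": 65535},
    {"src": "B", "dst": "A", "seq": 1,    "length": 1000, "ack": 1001, "win": 65535},
    # ... a 1000-byte packet from B is dropped by the capture system here ...
    {"src": "A", "dst": "B", "seq": 1001, "length": 1000, "ack": 2001, "win": 65535},
]
print(find_capture_gaps(pkts))   # [(2, 'ack beyond captured data')]
```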
Recent studies have shown that technology and voltage scaling are expected to increase the likelihood that particle-induced soft errors manifest as multiple-bit errors. This raises concerns about the validity of using single bit-flips for assessing the impact of soft errors in fault injection experiments. The goal of this paper is to investigate whether multiple-bit errors could cause a higher percentage of silent data corruptions (SDCs) compared to single-bit errors. Based on 2700 fault injection campaigns with 15 benchmark programs, featuring a total of 27 million experiments, our results show that single-bit errors in most cases yield a higher percentage of SDCs compared to multiple-bit errors. However, in 8% of the campaigns we observed a higher percentage of SDCs for multiple-bit errors. For most of these campaigns, the highest percentage of SDCs was obtained by flipping at most 3 bits. Moreover, we propose three ways of pruning the error space based on the results.
Recent studies have shown that technology and voltage scaling are expected to increase the likelihood that particle-induced soft errors manifest as multiple-bit errors. This raises concerns about the validity of using single bit-flips in fault injection experiments aiming to assess the program-level impact of soft errors. The goal of this paper is to investigate whether multiple-bit errors could cause a higher percentage of silent data corruptions (SDCs) compared to single-bit errors. Based on 2700 fault injection campaigns with 15 benchmark programs, featuring a total of 27 million experiments, our results show that single-bit errors in most cases either yield a higher percentage of SDCs compared to multiple-bit errors or yield SDC results that are very close to the ones obtained for the multiple-bit errors. Further, we find that only around 2% of the multiple-bit campaigns resulted in an SDC percentage that was more than 5 percentage points higher than that obtained for the corresponding single-bit campaigns. For most of these campaigns, the highest percentage of SDCs was obtained by flipping at most 3 bits. Based on our results, we also propose four techniques for error space pruning to avoid injection of multiple-bit errors that are either unlikely or infeasible to cause SDCs.
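The need for pruning follows directly from the combinatorics of the multi-bit error space; the short calculation below illustrates the growth for a 32-bit word, and the adjacency restriction shown is only one generic pruning idea, not one of the paper's four techniques:

```python
from math import comb

WIDTH = 32
for k in range(1, 5):
    print(f"{k}-bit flip patterns per {WIDTH}-bit word: {comb(WIDTH, k):>6}")
# 1-bit:     32
# 2-bit:    496
# 3-bit:   4960
# 4-bit:  35960

# One generic pruning idea (illustrative, not the paper's exact techniques):
# restrict multi-bit patterns to physically adjacent bits, since particle
# strikes tend to upset neighbouring cells.
adjacent_pairs = [(b, b + 1) for b in range(WIDTH - 1)]
print(len(adjacent_pairs), "adjacent 2-bit patterns instead of", comb(WIDTH, 2))
```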
Cyber-Physical Systems (CPSs) are increasingly used in various safety-critical domains; assuring the safety of these systems is of paramount importance. Fault injection is known as an effective testing method for analyzing the safety of CPSs. However, the total number of faults to be injected in a CPS to explore the entire fault space is normally large, and the limited budget for testing forces testers to limit the number of faults injected by, e.g., random sampling of the space. In this paper, we propose DELFASE as an automated solution for fault space exploration that relies on Generative Adversarial Networks (GANs) for optimizing the identification of critical faults and can run in two modes: active and passive. In the active mode, an active learning technique called ranked batch-mode sampling is used to select the faults for training the GAN model, while in the passive mode those faults are selected randomly. The results of our experiments on an adaptive cruise control system show that, compared to random sampling, DELFASE is significantly more effective in revealing system weaknesses. In fact, we observed that, while random sampling resulted in a fault coverage of around 10%, the fault coverage of DELFASE in the active and passive modes could be as high as 89% and 81%, respectively.
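DELFASE itself relies on GANs and ranked batch-mode sampling; the sketch below replaces these with a plain random-forest surrogate, a simple uncertainty score, and a toy criticality oracle, merely to illustrate the general shape of an active fault-selection loop:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_fault_injection(fault):
    """Stand-in for an actual injection experiment on the system under test:
    here a fault is (arbitrarily) critical when its time and value are both
    large.  In a real setup this label comes from simulating the fault."""
    return float(fault[0] > 0.7 and fault[1] > 0.6)

# Candidate fault space: (normalised injection time, normalised fault value).
candidates = rng.random((2000, 2))
labeled_idx = list(rng.choice(len(candidates), 40, replace=False))   # seed experiments
labels = {i: run_fault_injection(candidates[i]) for i in labeled_idx}

for _ in range(5):                                   # five active-learning rounds
    X = candidates[labeled_idx]
    y = np.array([labels[i] for i in labeled_idx])
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    unlabeled = [i for i in range(len(candidates)) if i not in labels]
    proba = model.predict_proba(candidates[unlabeled])
    if proba.shape[1] > 1:
        uncertainty = 1 - 2 * np.abs(proba[:, 1] - 0.5)   # ~1 where the model is unsure
    else:
        uncertainty = np.ones(len(unlabeled))             # degenerate single-class case
    batch = [unlabeled[j] for j in np.argsort(-uncertainty)[:40]]
    for i in batch:                                   # run the selected experiments
        labels[i] = run_fault_injection(candidates[i])
        labeled_idx.append(i)

print("critical faults found:", int(sum(labels.values())))
```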
Safety-critical systems are required to comply with safety standards. These systems are increasingly digitized and networked to an extent where they need to also comply with security and privacy standards. This paper aims to provide insights into how practitioners apply the standards on safety, security or privacy (Sa/Se/Pr), as well as how they employ Sa/Se/Pr analysis methodologies and software tools to meet such criteria. To this end, we conducted a questionnaire-based survey among the participants of the EU project SECREDAS and obtained 21 responses. The results of our survey indicate that safety standards are widely applied by product and service providers, driven by the requirements from clients or regulators/authorities. When it comes to security standards, practitioners face a wider range of standards while few target specific industrial sectors. Some standards linking safety and security engineering are not widely used at the moment, or practitioners are not aware of this feature. For privacy engineering, the availability and usage of standards, analysis methodologies and software tools are relatively weaker than for safety and security, reflecting the fact that privacy engineering is an emerging concern for practitioners.
When introducing automated driving systems (ADS), it is imperative that there exist mutual agreements between the ADS and stakeholders – such as the ADS equipped vehicle user, other road users, and society at large – on how the ADS should behave. Lacking such agreements, the ADS may antagonize stakeholders and, even worse, pose severe safety risks. The ADS needs a complete and unambiguous set of machine-interpretable properties describing these interactions, while the human stakeholders need to understand and accept how the ADS is designed to behave. We propose to make these considerations explicit in the form of agreements. The completeness problem is tackled by cataloguing and categorizing all agreements that need to be considered during the lifetime of an ADS in a systematic way.
The complexity of developing embedded electronic systems has been increasing, especially in the automotive domain, due to recently added functional requirements concerning, e.g., connectivity. The development of these systems becomes even more complex for products, such as connected automated driving systems, where several different quality attributes (such as functional safety and cybersecurity) also need to be taken into account. In these cases, there is often a need to adhere to several standards simultaneously, each addressing a unique quality attribute. In this paper, we analyze potential synergies when working with both a functional safety standard (ISO 26262) and a cybersecurity standard (the first working draft of ISO/SAE 21434). The analysis is based on a use case developing a positioning component for the automotive domain. The results regarding the use of a multi-concern development lifecycle are on a high level, since most of the insights into co-engineering presented in this paper are based on process modeling. The main findings of our analysis show that on the design side of the development lifecycle, the big gain is the completeness of the analysis when considering both attributes together, but the overlap in terms of shared activities is small. For the verification side of the lifecycle, much of the work and infrastructure can be shared when showing fulfillment of the two standards ISO 26262 and ISO/SAE 21434.
Verification and validation (V&V) are complex processes that combine different approaches and incorporate many different methods and activities. System engineers regularly face the question of whether their V&V activities lead to better products, and having appropriate criteria at hand for evaluating the safety and cybersecurity of their systems would help answer this question. Additionally, when there is a demand to improve the quality of an already managed V&V process, there is a struggle over what criteria to use in order to measure the improvement. This paper presents an extensive set of criteria suitable for the safety and cybersecurity evaluation of cyber-physical systems. The evaluation criteria were agreed upon by 60 researchers from 32 academic and industrial organizations jointly working in a large-scale European research project on 13 real-world use cases from the domains of automotive, railway, aerospace, agriculture, healthcare, and industrial robotics.
As more parts of the power grid become connected to the internet, the risk of cyberattacks increases. To identify the cybersecurity threats and subsequently reduce vulnerabilities, the common practice is to carry out a cybersecurity risk assessment. For safety-classified systems and products, there is also a need for safety risk assessments in addition to the cybersecurity risk assessment in order to identify and reduce safety risks. These two risk assessments are usually done separately, but since cybersecurity and functional safety are often related, a more comprehensive method covering both aspects is needed. Some work addressing this has been done for specific domains like the automotive domain, but more general methods suitable for, e.g., Intelligent Distributed Grids, are still missing. One such method from the automotive domain is the Security-Aware Hazard Analysis and Risk Assessment (SAHARA) method that combines safety and cybersecurity risk assessments. This paper presents an approach where the SAHARA method has been modified in order to be more suitable for larger distributed systems. The adapted SAHARA method has a more general risk assessment approach than the original SAHARA. The proposed method has been successfully applied to two use cases of an intelligent distributed grid.
As more parts of the power grid become connected to the internet, the risk of cyberattacks increases. To identify the cybersecurity threats and subsequently reduce vulnerabilities, the common practice is to carry out a cybersecurity risk assessment. For safety-classified systems and products, there is also a need for safety risk assessments in addition to the cybersecurity risk assessment to identify and reduce safety risks. These two risk assessments are usually done separately, but since cybersecurity and functional safety are often related, a more comprehensive method covering both aspects is needed. Some work addressing this has been done for specific domains like the automotive domain, but more general methods suitable for, e.g., Intelligent Distributed Grids, are still missing. One such method from the automotive domain is the Security-Aware Hazard Analysis and Risk Assessment (SAHARA) method that combines safety and cybersecurity risk assessments. This paper presents an approach where the SAHARA method has been modified to be more suitable for larger distributed systems. The adapted SAHARA method has a more general risk assessment approach than the original SAHARA. The proposed method has been successfully applied to two use cases of an intelligent distributed grid.
Increasing communication and self-driving capabilities of road vehicles lead to threats which could potentially be exploited by attackers. In particular, attacks leading to safety violations have to be identified so that they can be addressed by appropriate measures. The impact of an attack depends on the threat exploited, potential countermeasures, and the traffic situation. In order to identify such attacks and to use them for testing, we propose SaSeVAL, a systematic approach for deriving attacks on autonomous vehicles.
SaSeVAL is based on threat identification and safety-security analysis. The impact of attacks in the context of automotive use cases is considered. The threat identification considers the attack interfaces of vehicles and classifies threat scenarios according to threat types, which are then mapped to attack types. The safety-security analysis identifies the necessary requirements that have to be tested based on the architecture of the system under test. It determines which safety impact a security violation may have, and in which traffic situations the highest impact is expected. Finally, the results of threat identification and safety-security analysis are used to describe attacks.
The goal of SaSeVAL is to achieve safety validation of the vehicle with respect to security concerns. It explicitly traces safety goals to threats and to attacks. Hence, the coverage of safety concerns by security testing is assured. Two use cases, vehicle communication and autonomous driving, are investigated to demonstrate the applicability of the approach.