The digitization of a supply chain involves satisfying several functional and non-functional context-specific requirements. The work presented herein builds on efforts to elicit trust and profit requirements from actors in the Swedish livestock supply chain, specifically the beef supply chain. Interviewees identified several benefits related to data sharing and traceability but also emphasized that these benefits could only be realized if concerns around data security and data privacy were adequately addressed. We developed a data sharing platform in response to these requirements. Requirements around verifiability, traceability, secure sharing of potentially large data objects, fine-grained access control, and the ability to link together data objects were realized using distributed ledger technology and a distributed file system. This paper presents this data sharing platform together with an evaluation of its usefulness in the context of beef supply chain traceability.
We propose a simple and efficient searchable symmetric encryption scheme based on a Bitmap index that evaluates Boolean queries. Our scheme provides a practical solution in settings where communications and computations are very constrained as it offers a suitable trade-off between privacy and performance.
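As an illustration of the Bitmap-index idea, a keyword-to-document index can be represented as one integer bitmap per keyword, so that Boolean queries reduce to bitwise operations. The Python sketch below is illustrative only: HMAC-derived search tokens stand in for the scheme's actual symmetric-key machinery, which the abstract does not detail, and a real SSE scheme would also protect the bitmaps themselves.

```python
import hashlib
import hmac

class BitmapIndex:
    """Toy index: each keyword maps to an integer bitmap over document IDs.
    Keywords are replaced by HMAC-derived search tokens, so the stored index
    never contains plaintext keywords (illustrative, not the paper's scheme)."""

    def __init__(self, key: bytes):
        self.key = key
        self.index = {}      # token -> int bitmap (bit i set => doc i matches)
        self.n_docs = 0

    def _token(self, keyword: str) -> bytes:
        return hmac.new(self.key, keyword.encode(), hashlib.sha256).digest()

    def add_document(self, doc_id: int, keywords):
        self.n_docs = max(self.n_docs, doc_id + 1)
        for kw in keywords:
            t = self._token(kw)
            self.index[t] = self.index.get(t, 0) | (1 << doc_id)

    def query(self, must=(), may=()):
        """Boolean query: AND over `must` keywords, OR over `may` keywords."""
        result = (1 << self.n_docs) - 1
        for kw in must:
            result &= self.index.get(self._token(kw), 0)
        if may:
            union = 0
            for kw in may:
                union |= self.index.get(self._token(kw), 0)
            result &= union
        return [i for i in range(self.n_docs) if result >> i & 1]
```

Because each query is a handful of bitwise operations over integers, evaluation stays cheap even on constrained hardware, which is the trade-off the abstract alludes to.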
Manufacturers of automated systems and their components have been devoting an enormous amount of time and effort to R&D activities, which has led to the availability of prototypes demonstrating new capabilities as well as the introduction of such systems to the market within different domains. Manufacturers need to make sure that their systems function in the intended way and according to specifications. This is not a trivial task, as system complexity rises dramatically the more integrated and interconnected these systems become with the addition of automated functionality and features. This effort translates into an overhead on the V&V (verification and validation) process, making it time-consuming and costly. In this paper, we present VALU3S, an ECSEL JU (Joint Undertaking) project that aims to evaluate state-of-the-art V&V methods and tools and to design a multi-domain framework that creates a clear structure around the components and elements needed to conduct the V&V process. The main expected benefit of the framework is to reduce the time and cost needed to verify and validate automated systems with respect to safety, cybersecurity, and privacy requirements. This is done through identification and classification of evaluation methods, tools, environments, and concepts for V&V of automated systems with respect to the mentioned requirements. VALU3S will provide guidelines to the V&V community, including engineers and researchers, on how the V&V of automated systems could be improved considering the cost, time, and effort of conducting V&V processes. To this end, VALU3S brings together a consortium with partners from 10 different countries, amounting to a mix of 25 industrial partners, 6 leading research institutes, and 10 universities to reach the project goal.
Verification and Validation (V&V) of automated systems is becoming more costly and time-consuming because of the increasing size and complexity of these systems. Moreover, V&V of these systems can be hindered if the methods and processes are not properly described, analysed, and selected. It is essential that practitioners use suitable V&V methods and enact adequate V&V processes to confirm that these systems work as intended and in a cost-effective manner. Previous works have created different taxonomies and models considering different aspects of V&V that can be used to classify V&V methods and tools. The aim of this work is to provide a broad, comprehensive, and easy-to-use framework that addresses characterisation needs, rather than focusing on individual aspects of V&V methods and processes. To this end, in this paper, we present a multi-domain and multi-dimensional framework to characterise and classify V&V methods and tools in a structured way. The framework considers a comprehensive characterisation of different relevant aspects of V&V. A web-based repository has been implemented on the basis of the framework, as an example of use, in order to collect information about the application of V&V methods and tools. This way, practitioners and researchers can easily learn about and identify suitable V&V processes.
To address the emerging challenges in electricity distribution networks, various solutions have been proposed, such as alternative tariff design, local flexibility markets (LFMs), bilateral contracts, and local energy markets (LEMs). However, choosing a suitable solution is not straightforward due to the multi-dimensional complexity of the challenges, which may vary under different circumstances. This paper proposes a toolbox for qualitative and quantitative comparison of the different solutions. The toolbox includes a multi-dimensional analytical framework and a flexible modeling and demonstration platform for conducting quantitative comparison studies. Four solutions, i.e., LFM, LEM, cost-reflective tariffs, and bilateral contracts, are compared qualitatively using the framework, and a real demonstration example of an LFM design is presented utilizing the modeling platform. The toolbox can facilitate research on local grid challenges and contribute to finding a suitable solution from a multi-dimensional perspective.
Active mobility, such as biking, faces a common challenge in Swedish municipalities due to the lack of adequate lighting during the dark winter months. Insufficient lighting infrastructure hinders individuals from choosing bicycles, despite the presence of well-maintained bike paths and a willingness to cycle. To address this issue, a project has been undertaken in the Swedish municipality of Skara for an alternative lighting solution using drones. A series of tests have been conducted based on drone prototypes developed for the selected bike paths. Participants were invited to cycle in darkness illuminated by drone lighting and share their mobility preferences and perception. This paper summarizes the users’ perception of drone lighting as an alternative to fixed lighting on bike paths, with a special focus on the impact on travel habits and the perceived sense of security and comfort. Most participants were regular cyclists who cited bad weather, time, and darkness as significant factors that deterred them from using bicycles more frequently, reducing their sense of security. With drone lighting, the participants appreciated the illumination’s moonlight-like quality and its ability to enhance their sense of security by illuminating the surroundings. On the technology side, they gave feedback on reducing the drone’s sound and addressing lighting stability issues. In summary, the test results showcase the potential of drone lighting as a viable alternative to traditional fixed lighting infrastructure, offering improved traffic safety, sense of security, and comfort. The results show the feasibility and effectiveness of this innovative approach, supporting transformation towards active and sustainable mobility, particularly in regions facing lighting challenges.
The Authentication and Authorization for Constrained Environments (ACE) framework provides fine-grained access control in the Internet of Things, where devices are resource-constrained and have limited connectivity. The ACE framework defines separate profiles to specify how exactly entities interact and what security and communication protocols to use. This paper presents the novel ACE IPsec profile, which specifies how a client establishes a secure IPsec channel with a resource server, contextually using the ACE framework to enforce authorized access to remote resources. The profile makes it possible to establish IPsec Security Associations, either through their direct provisioning or through the standard IKEv2 protocol. We provide the first open-source implementation of the ACE IPsec profile for the Contiki OS and test it on the resource-constrained Zolertia Firefly platform. Our experimental performance evaluation confirms that the IPsec profile and its operating modes are affordable and deployable also on constrained IoT platforms.
The complexity of systems continues to increase rapidly, especially due to the multi-level integration of subsystems from different domains into cyber-physical systems. This results in special challenges for the efficient verification and validation (V&V) of these systems with regard to their requirements and properties. In order to tackle the new challenges and improve the quality assurance processes, the V&V workflows have to be documented and analyzed. In this paper, a novel approach for the workflow modelling of V&V activities is presented. The generic approach is tailorable to different industrial domains and their specific constraints, V&V methods, and toolchains. The outcomes comprise a dedicated modelling notation (VVML) and tool-support using the modelling framework Enterprise Architect for the efficient documentation and implementation of workflows in the use cases. The solution enables the design of re-usable workflow assets such as V&V activities and artifacts that are exchanged between workflows. This work is part of the large scale European research project VALU3S that deals with the improvement and evaluation of V&V processes in different technical domains, focusing on safety, cybersecurity, and privacy properties.
Cybersecurity is an important concern in systems-of-systems (SoS), where the effects of cyber incidents, whether deliberate attacks or unintentional mistakes, can propagate from an individual constituent system (CS) throughout the entire SoS. Unfortunately, the security of an SoS cannot be guaranteed by separately addressing the security of each CS. Security must also be addressed at the SoS level. This paper reviews some of the most prominent cybersecurity risks within the SoS research field and combines this with the cyber and information security economics perspective. This sets the scene for a structured assessment of how various cyber risks can be addressed in different SoS architectures. More precisely, the paper discusses the effectiveness and appropriateness of five cybersecurity policy options in each of the four assessed SoS archetypes and concludes that cybersecurity risks should be addressed using both traditional design-focused and more novel policy-oriented tools.
Unlike other fields of computing and communications, low-power wireless networking is plagued by one major issue: the absence of a well-defined, agreed-upon yardstick to compare the performance of systems, namely, a benchmark. We argue that this situation may eventually represent a hampering factor for a technology expected to be key in the Internet of Things (IoT) and Cyber-physical Systems (CPS). This paper describes a recent initiative to remedy this situation, seeking to enlarge the participation from the community.
The assessment of perceived quality based on psychophysiological methods recently gained traction as it potentially overcomes certain flaws of psychophysical approaches. Although studies report promising results, it is not possible to arrive at decisive and comparable conclusions that recommend the use of one or another method for a specific application or research question. The Video Quality Experts Group started a project on psychophysiological quality assessment to study these novel approaches and to develop a test plan that enables more systematic research. This test plan comprises a specifically designed set of quality-annotated video sequences, suggestions for psychophysiological methods to be studied in quality assessment, and recommendations for the documentation and publication of test results. The test plan is presented in this article.
In this study, we investigate a VR simulator of a forestry crane used for loading logs onto a truck, mainly looking at Quality of Experience (QoE) aspects that may be relevant for task completion, but also whether there are any discomfort-related symptoms experienced during task execution. The QoE test has been designed to capture both the general subjective experience of using the simulator and the task completion rate. Moreover, a specific focus has been to study the effects of latency on the subjective experience, with regard both to delays in the crane control interface and to lag in the visual scene rendering in the head-mounted display (HMD). Two larger formal subjective studies have been performed: one with the VR system as it is, and one where we added controlled delay to the display update and to the joystick signals. The baseline study shows that most people are more or less happy with the VR system and that it does not have strong effects on any of the symptoms listed in the Simulator Sickness Questionnaire (SSQ). In the delay study we found significant effects on Comfort Quality and Immersion Quality for the higher Display delay (30 ms), but very small impact of joystick delay. Furthermore, the Display delay had a strong influence on the symptoms in the SSQ, causing some test subjects to decide not to continue with the complete experiment. We found that this was especially connected to the longer added Display delays (≥ 20 ms).
This report investigates the problem of where to place computation workload in an edge-cloud network topology considering the trade-off between the location specific cost of computation and data communication.
This article investigates the problem of where to place the computation workload in an edge-cloud network topology considering the trade-off between the location-specific cost of computation and data communication. For this purpose, a Monte Carlo simulation model is defined that accounts for different workload types, their distribution across time and location, as well as correlation structure. Results confirm and quantify the intuition that optimization can be achieved by distributing a part of cloud computation to make efficient use of resources in an edge data center network, with operational energy savings of 4–6% and up to 50% reduction in its claim for cloud capacity.
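The trade-off the article quantifies can be illustrated with a toy Monte Carlo model: random per-site demand is served locally up to an edge capacity, with overflow sent to the cloud at an extra communication cost. The workload distribution, capacities, and unit costs below are invented for illustration and are not the article's calibration.

```python
import random

def simulate_placement(n_sites=20, n_rounds=1000,
                       edge_capacity=10.0,
                       edge_cost=1.0, cloud_cost=0.8, comm_cost=0.5,
                       seed=1):
    """Monte Carlo sketch of the edge-vs-cloud placement trade-off.
    Each round draws a random workload per edge site; work fitting within
    the local edge capacity is served at the edge compute cost, while
    overflow goes to the cloud at the cloud compute cost plus a per-unit
    communication cost. Returns (cost of all-cloud, cost of mixed placement)."""
    rng = random.Random(seed)
    cost_all_cloud = 0.0
    cost_mixed = 0.0
    for _ in range(n_rounds):
        for _ in range(n_sites):
            demand = rng.expovariate(1 / 8.0)          # mean 8 units per site
            # Baseline: everything is shipped to and computed in the cloud.
            cost_all_cloud += demand * (cloud_cost + comm_cost)
            # Mixed: serve locally up to capacity, overflow to the cloud.
            served_edge = min(demand, edge_capacity)
            overflow = demand - served_edge
            cost_mixed += served_edge * edge_cost
            cost_mixed += overflow * (cloud_cost + comm_cost)
    return cost_all_cloud, cost_mixed
```

With these illustrative numbers the edge unit cost is below the cloud-plus-communication cost, so the mixed placement is always cheaper; varying the parameters shows how the savings depend on the cost structure.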
Mobile networks are facing an unprecedented demand for high-speed connectivity originating from novel mobile applications and services and, in general, from the adoption curve of mobile devices. However, coping with the service requirements imposed by current and future applications and services is very difficult since mobile networks are becoming progressively more heterogeneous and more complex. In this context, a promising approach is the adoption of novel network automation solutions and, in particular, of zero-touch management techniques. In this work, we refer to zero-touch management as a fully autonomous network management solution with human oversight. This survey sits at the crossroads of zero-touch management and mobile and wireless network research, effectively bridging a gap in the literature between the two domains. In this paper, we first provide a taxonomy of network management solutions. We then discuss the relevant state of the art on autonomous mobile networks. The concept of zero-touch management and the associated standardization efforts are then introduced. The survey continues with a review of the most important technological enablers for zero-touch management. Network automation solutions from the RAN to the core network, including end-to-end aspects such as security, are then surveyed. Finally, we close this article with the current challenges and research directions.
The use of a micro-ring resonator (MRR) to enhance the modulation extinction ratio and dispersion tolerance of a directly modulated laser (DML) is experimentally investigated at a bit rate of 25 Gb/s, as proposed for next-generation data center communications. The investigated system combines an 11-GHz 1.55-µm directly modulated hybrid III-V/SOI DFB laser, realized by bonding III-V materials (InGaAlAs) on a silicon-on-insulator (SOI) wafer, and a silicon MRR also fabricated on SOI. Such a transmitter enables error-free transmission (BER < 10⁻⁹) at a 25 Gb/s data rate over 2.5-km SSMF without dispersion compensation or forward error correction (FEC). As both the laser and the MRR are fabricated on the SOI platform, they could be combined into a single device with enhanced performance, thus providing a cost-effective transmitter for short-reach applications.
As our dependence on automated systems grows, so does the need for guaranteeing their safety, cybersecurity, and privacy (SCP). Dedicated methods for verification and validation (V&V) must be used to this end and it is necessary that the methods and their characteristics can be clearly differentiated. This can be achieved via method classifications. However, we have experienced that existing classifications are not suitable to categorise V&V methods for SCP of automated systems. They do not pay enough attention to the distinguishing characteristics of this system type and of these quality concerns. As a solution, we present a new classification developed in the scope of a large-scale industry-academia project. The classification considers both the method type, e.g., testing, and the concern addressed, e.g., safety. Over 70 people have successfully used the classification on 53 methods. We argue that the classification is a more suitable means to categorise V&V methods for SCP of automated systems and that it can help other researchers and practitioners.
In this paper, we investigate how different viewing positions affect a user’s Quality of Experience (QoE) and performance in an immersive telepresence system. A QoE experiment has been conducted with 27 participants to assess the general subjective experience and the performance of remotely operating a toy excavator. Two view positions have been tested, an overhead and a ground-level view, respectively, which encourage reliance on stereoscopic depth cues to different extents for accurate operation. Results demonstrate a significant difference between ground and overhead views: the ground view increased the perceived difficulty of the task, whereas the overhead view increased the perceived accomplishment as well as the objective performance of the task. The perceived helpfulness of the overhead view was also significant according to the participants.
An important task in testing a telecommunication protocol consists of analysing logs. The goal of log analysis is to check that the timing and content of transmitted messages comply with the specification. In order to perform such checks, protocols can be described using a constraint modelling language. In this paper we focus on a complex protocol where some messages can be delayed. Simply introducing variables for the possible delays of all messages in the constraint model can drastically increase the complexity of the problem. However, some delays can be calculated, although this calculation is difficult to do by hand and to justify. We present an industrial application of the Coq proof assistant to prove a property of a 4G protocol and validate a constraint model. By using interactive theorem proving, we derived constraints for the message delays of the protocol and found missing constraints in the initial model.
With the growing capabilities of autonomous vehicles, there is a higher demand for sophisticated and pragmatic quality assurance approaches for machine-learning-enabled systems in the automotive AI context. The use of simulation-based prototyping platforms provides the possibility of early-stage testing, enabling inexpensive testing and the ability to capture critical corner-case test scenarios. Simulation-based testing properly complements conventional on-road testing. However, due to the large space of test input parameters in these systems, the efficient generation of effective test scenarios that lead to the unveiling of failures is a challenge. This paper presents a study on testing the pedestrian detection and emergency braking system of the Baidu Apollo autonomous driving platform within the SVL simulator. We propose an evolutionary automated test generation technique that generates failure-revealing scenarios for Apollo in the SVL environment. Our approach models the input space using a generic and flexible data structure and benefits from a multi-criteria, safety-based heuristic for the objective function targeted for optimization. This paper presents the results of our proposed test generation technique in the 2021 IEEE Autonomous Driving AI Test Challenge. In order to demonstrate the efficiency and effectiveness of our approach, we also report the results from a baseline random generation technique. Our evaluation shows that the proposed evolutionary test case generator is more effective at generating failure-revealing test cases and provides higher diversity between the generated failures than the random baseline.
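The core loop of such an evolutionary scenario search can be sketched generically: each scenario is a vector of test-input parameters (e.g., pedestrian speed, spawn distance), and lower fitness means closer to a failure. The encoding, operators, and parameter values below are illustrative, not the paper's exact technique.

```python
import random

def evolve_scenarios(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimal elitist evolutionary search over a box-bounded parameter space.
    `fitness` maps a parameter vector to a score to minimise (e.g., a
    safety-based distance to failure); `bounds` gives (lo, hi) per parameter."""
    rng = random.Random(seed)

    def rand_individual():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def mutate(ind):
        # Gaussian perturbation, clamped back into the valid parameter range.
        return [min(hi, max(lo, g + rng.gauss(0, (hi - lo) * 0.1)))
                for g, (lo, hi) in zip(ind, bounds)]

    pop = [rand_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]            # truncation selection
        pop = parents + [mutate(rng.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=fitness)
```

In a real setup the fitness evaluation would run a simulation (e.g., an SVL scenario) and combine several safety criteria; here any cheap objective function can stand in to exercise the loop.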
Reasoning about safety, security, and other dependability attributes of autonomous systems is a challenge that needs to be addressed before the adoption of such systems in day-to-day life. Formal methods are a class of methods that mathematically reason about a system’s behavior; thus, a correctness proof is sufficient to conclude the system’s dependability. However, these methods are usually applied to abstract models of the system, which might not fully represent the actual system. Fault injection, on the other hand, is a testing method for evaluating the dependability of systems. However, the amount of testing required to evaluate the system is rather large and often a problem. This vision paper introduces formal fault injection, a fusion of these two techniques throughout the development lifecycle to enhance the dependability of autonomous systems. We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection. By forging stronger ties between the two fields, we pave the way for developing safe and dependable autonomous systems. This paper delves into the integration’s potential and outlines future research avenues, addressing open challenges along the way.
As society increasingly relies on safety- and security-critical systems, the need for confirming their dependability becomes essential. Adequate V&V (verification and validation) methods must be employed, e.g., for system testing. When selecting and using the methods, it is important to analyze their possible gaps and limitations, such as scalability issues. However, as we have experienced, common, explicitly defined criteria are seldom used for such analyses. This results in analyses that consider different aspects, and to a different extent, hindering their comparison and thus the comparison of the V&V methods. As a solution, we present a set of criteria for the analysis of gaps and limitations of V&V methods for safety- and security-critical systems. The criteria have been identified in the scope of the VALU3S project. Sixty-two people from 33 organizations agreed upon the use of nine criteria: functionality, accuracy, scalability, deployment, learning curve, automation, reference environment, cost, and standards. Their use led to more homogeneous and more detailed analyses when compared to similar previous efforts. We argue that the proposed criteria can be helpful to others when dealing with similar activities.
Model-based test design is increasingly being applied in practice and studied in research. Model-based testing (MBT) exploits abstract models of the software behaviour to generate abstract tests, which are then transformed into concrete tests ready to run on the code. Given that abstract tests are designed to cover models but are run on code (after transformation), the effectiveness of MBT is dependent on whether model coverage also ensures coverage of key functional code. In this article, we investigate how MBT approaches generate tests from model specifications and how the coverage of tests designed strictly based on the model translates to code coverage. We used snowballing to conduct a systematic literature review. We started with three primary studies, which we refer to as the initial seeds. At the end of our search iterations, we analysed 30 studies that helped answer our research questions. More specifically, this article characterizes how test sets generated at the model level are mapped and applied to the source code level, discusses how tests are generated from the model specifications, analyses how the test coverage of models relates to the test coverage of the code when the same test set is executed and identifies the technologies and software development tasks that are on focus in the selected studies. Finally, we identify common characteristics and limitations that impact the research and practice of MBT: (i) some studies did not fully describe how tools transform abstract tests into concrete tests, (ii) some studies overlooked the computational cost of model-based approaches and (iii) some studies found evidence that bears out a robust correlation between decision coverage at the model level and branch coverage at the code level. We also noted that most primary studies omitted essential details about the experiments.
Cybersecurity incidents have been growing both in number and in associated impact, as a result of society’s increased dependency on information and communication technologies, accelerated by the recent pandemic. In particular, IoT technologies, which enable significant flexibility and cost-efficiency but are also associated with more relaxed security mechanisms, have been quickly adopted across all sectors of society, including critical infrastructures (e.g., smart grids) and services (e.g., eHealth). Gaps such as high dependence on third-party IT suppliers and device manufacturers increase the importance of trustworthy and secure solutions for future digital services. This paper presents ARCADIAN-IoT, a framework aimed at holistically enabling trust, security, privacy, and recovery in IoT systems, and at enabling a Chain of Trust between the different IoT entities (persons, objects, and services). It builds on features such as federated AI for effective and privacy-preserving cybersecurity, distributed ledger technologies for decentralized management of trust, and transparent, user-controllable, and decentralized privacy.
Today, embedded systems across industrial domains (e.g., avionics, automotive) are representatives of software-intensive systems with increasing reliance on software and growing complexity. It has become critically important to verify software in a time-, resource- and cost-effective manner. Furthermore, industrial domains are striving to comply with the requirements of relevant safety standards. This paper proposes a novel workflow along with tool support to evaluate robustness of software in a model-based development environment, assuming different abstraction levels of representing software. We then show the effectiveness of our technique on a brake-by-wire application by performing back-to-back fault injection testing between two different abstraction levels, using MODIFI for the Simulink model and GOOFI-2 for the generated code running on the target microcontroller. Our proposed method and tool support facilitate not only verifying software during early phases of the development lifecycle but also fulfilling the back-to-back testing requirements of ISO 26262 [1] when using model-based development.
Fault and attack injection are techniques used to measure dependability attributes of computer systems. An important property of such injectors is their efficiency, which concerns the time and effort needed to explore the target system’s fault or attack space. As this space is generally very large, techniques such as pre-injection analyses are used to explore it effectively. In this paper, we study two such techniques that have been proposed in the past, namely inject-on-read and inject-on-write. Moreover, we propose a new technique called error space pruning of signals and evaluate its efficiency in reducing the space that needs to be explored by fault and attack injection experiments. We implemented and integrated these techniques into MODIFI, a model-implemented fault and attack injector, which has been effectively used in the past to evaluate Simulink models in the presence of faults and attacks. To the best of our knowledge, we are the first to integrate these pre-injection analysis techniques into an injector that injects faults and attacks into Simulink models. The results of our evaluation on 11 vehicular Simulink models show that error space pruning of signals reduces the attack space by about 30–43%, allowing it to be covered with fewer attack injection experiments. Using MODIFI, we then performed attack injection experiments on two of these vehicular Simulink models, a comfort control model and a brake-by-wire model, while elaborating on the results obtained.
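The intuition behind inject-on-read style pre-injection analysis can be shown on a toy read/write trace: a corrupted value only matters once it is read, so injections between a write and the next read of the same variable are equivalent, and only read points need to be injected. The trace format and helper names below are illustrative, not MODIFI's internal representation.

```python
def inject_on_read_points(trace):
    """Inject-on-read pre-injection analysis (sketch): keep only the
    (time, variable) pairs where the variable is read, since any injection
    performed between a write and the next read yields the same outcome.
    `trace` is a list of (time, op, var) tuples with op in {'r', 'w'}."""
    return [(t, var) for (t, op, var) in trace if op == 'r']

def exhaustive_points(trace, variables):
    """Naive baseline: inject every variable at every observed time step."""
    times = sorted({t for (t, _, _) in trace})
    return [(t, v) for t in times for v in variables]
```

Comparing the two sets on a trace directly quantifies the reduction of the injection space, which is the kind of saving the abstract reports for signal-level pruning.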
Advances in interconnectivity between vehicles, vehicle fleets, and infrastructures have led to opportunities for interoperability and systems-of-systems (SoS). Several challenges emerge that impose requirements on dealing with the vast amount of data generated by modern vehicles and on their actuation with higher-level commands and controls. These challenges have naturally created opportunities for the development of sophisticated, powerful, generic platforms to support ingestion, storage, processing, management, operation, and orchestration of data and processes in SoS. A prominent example is the scenario of vehicle fleets and, more precisely, how to engineer the SoS so that the collaboration among various constituent systems achieves the SoS goals. Several challenges cap the extent of these opportunities, such as determining the business and functional requirements, as well as technical ones: constructing and operating an independent, scalable, and flexible platform ensuring, e.g., privacy and accountability. In this work, we discuss these concerns and challenges from a technical perspective.
The Constrained Application Protocol (CoAP) is a standard communication protocol for resource-constrained devices in the Internet of Things (IoT). Many IoT deployments require proxies to support asynchronous communication between edge devices and the back-end. This allows (non-trusted) proxies to access sensitive parts of CoAP messages. Object Security for Constrained RESTful Environments (OSCORE) is a recent standard protocol that provides end-to-end security for CoAP messages at the application layer. Unlike the commonly used standard Datagram Transport Layer Security (DTLS), OSCORE efficiently provides selective integrity protection and encryption on different parts of CoAP messages. Thus, OSCORE enables end-to-end security through intermediary (non-trusted) proxies, while still allowing them to perform their expected services, with considerable security and privacy improvements.
To assess whether these security features consume too much of the limited resources available on a constrained device, we have implemented OSCORE (the implementation is available as open-source), and evaluated its efficiency. This paper provides a comprehensive, comparative and experimental performance evaluation of OSCORE on real resource-constrained IoT devices, using the operating system Contiki-NG as IoT software platform. In particular, we experimentally evaluated the efficiency of our OSCORE implementation on resource-constrained devices running Contiki-NG, in comparison with the DTLS implementation TinyDTLS maintained by the Eclipse Foundation. The evaluation results show that our OSCORE implementation displays moderately better performance than TinyDTLS, in terms of per-message network overhead, memory usage, message round-trip time and energy efficiency, thus providing the security improvements of OSCORE with no additional performance penalty.
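The selective-protection idea underlying OSCORE can be sketched as a split of message fields into an end-to-end protected part and a proxy-visible part. In the sketch below the field names are invented and a MAC-only wrapper stands in for OSCORE's actual COSE/AEAD processing; it only illustrates why a proxy can still cache and forward while being unable to tamper with the protected part undetected.

```python
import hashlib
import hmac
import json

# Illustrative field classes, loosely mirroring OSCORE's idea of protecting
# some CoAP fields end-to-end while leaving proxy-relevant ones visible.
END_TO_END = {'uri_path', 'payload'}        # hidden from intermediaries
PROXY_VISIBLE = {'uri_host', 'max_age'}     # needed for caching/forwarding

def protect(message, key):
    """Split the message and wrap the end-to-end part in an opaque,
    integrity-protected unit (a stand-in for OSCORE's encrypted COSE object)."""
    inner = {k: v for k, v in message.items() if k in END_TO_END}
    outer = {k: v for k, v in message.items() if k in PROXY_VISIBLE}
    blob = json.dumps(inner, sort_keys=True).encode()
    tag = hmac.new(key, blob, hashlib.sha256).hexdigest()
    outer['oscore'] = (blob, tag)           # opaque to the proxy
    return outer

def unprotect(outer, key):
    """Verify the protected part and reassemble the original message."""
    blob, tag = outer['oscore']
    expect = hmac.new(key, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError('integrity check failed')
    inner = json.loads(blob)
    return {**{k: v for k, v in outer.items() if k != 'oscore'}, **inner}
```

A proxy handling the output of `protect` can read `max_age` for caching but sees the protected fields only as an opaque unit, which is the property that lets OSCORE traverse non-trusted intermediaries.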
Performance testing with the aim of generating an efficient and effective workload to identify performance issues is challenging. Many automated approaches rely mainly on analyzing system models or source code, or on extracting the usage pattern of the system during execution. However, such information and artifacts are not always available. Moreover, not all transactions within a generated workload impact the performance of the system in the same way; a finely tuned workload could accomplish the test objective more efficiently. Model-free reinforcement learning is widely used for finding the optimal behavior to accomplish an objective in many decision-making problems, without relying on a model of the system. This paper proposes that if a test agent can learn the optimal policy (way) for generating test workloads that meet a test objective, then efficient test automation becomes possible without relying on system models or source code. We present a self-adaptive reinforcement-learning-driven load testing agent, RELOAD, that learns the optimal policy for test workload generation and efficiently generates an effective workload to meet the test objective. Once the agent learns the optimal policy, it can reuse the learned policy in subsequent testing activities. Our experiments show that the proposed intelligent load test agent can accomplish the test objective at lower test cost than common load testing procedures and results in higher test efficiency.
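The core idea, learning a workload-generation policy by trial and error rather than from a system model, can be sketched with tabular Q-learning. This is a hedged illustration: the transaction types, cost model, reward, and target are made-up placeholders, not RELOAD's actual design.

```python
import random

random.seed(7)
ACTIONS = ["browse", "search", "checkout"]              # hypothetical tx types
COST = {"browse": 1.0, "search": 2.0, "checkout": 5.0}  # made-up load per tx
TARGET_RT = 30.0                                        # response-time goal

def response_time(load):
    # Toy linear system model standing in for the system under test.
    return 0.5 * load

def run_episode(q, eps=0.2, alpha=0.5, gamma=0.95):
    """One load-test episode; returns how many transactions it took."""
    load, steps = 0.0, 0
    while response_time(load) < TARGET_RT and steps < 200:
        s = int(load // 10)                              # coarse state bins
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))           # explore
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        load += COST[ACTIONS[a]]
        s2 = int(load // 10)
        best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
        # Reward -1 per issued transaction: fewer transactions = cheaper test.
        q[(s, a)] = q.get((s, a), 0.0) + alpha * (-1 + gamma * best_next - q.get((s, a), 0.0))
        steps += 1
    return steps

q_table = {}
steps_per_episode = [run_episode(q_table) for _ in range(300)]
```

With a reward of -1 per transaction, the agent converges on the transaction mix that drives the toy system to the response-time target with the fewest transactions, mirroring the cost argument in the abstract.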
Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to tackle automated generation of performance test cases mainly involve using source code or system model analysis or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models.
The automotive industry is moving towards increased automation, where features such as automated driving systems typically include machine learning (ML), e.g. in the perception system. [Question/Problem] Ensuring safety for systems partly relying on ML is challenging. Different approaches and frameworks have been proposed, typically requiring the developer to define quantitative and/or qualitative acceptance criteria and to ensure the criteria are fulfilled using different methods to improve, e.g., design, robustness and error detection. However, there is still a knowledge gap between the quality methods and metrics employed in the ML domain and how such methods can contribute to satisfying vehicle-level safety requirements. In this paper, we argue the need for connecting available ML quality methods and metrics to the safety lifecycle and explicitly showing their contribution to safety. In particular, we analyse Out-of-Distribution (OoD) detection, e.g., the frequency of novelty detection, and show its potential for multiple safety-related purposes: as (a) an acceptance criterion contributing to the decision of whether the software fulfils the safety requirements and hence is ready for release, (b) a means for operational design domain selection and expansion, by including novelty samples in the training/development loop, and (c) a run-time measure, e.g., if there is a sequence of novel samples, the vehicle should consider reaching a minimal risk condition. [Contribution] This paper describes the possibility of using OoD detection as a safety measure and its potential contributions in different stages of the safety lifecycle.
Process compliance with relevant regulations and de-facto standards is a mandatory requirement for certifying critical systems. However, it is often carried out manually, and is therefore perceived as complex and labour-intensive. Ontology-based Natural Language Processing (NLP) could provide efficient support for compliance management with critical software system engineering standards; this possibility, however, has not been considered in the literature. Accordingly, the approach presented in this paper focuses on ontology-based NLP for compliance management of software engineering processes with standard documents. In the developed ontology, process concerns such as stakeholders, tasks and work products are captured for better interpretation. Rules are created for extracting and structuring information, encoding both syntactic features (captured using NLP tasks) and semantic features (captured using the ontology). During the planning phase, we support the generation of requirements, process models and compliance mappings in the Eclipse Process Framework (EPF) Composer. In the context of reverse compliance, gaps with respect to standard documents are detected, potential measures for their resolution are provided, and adaptations are made after the process engineer's approval. The applicability of the proposed approach is demonstrated by processing ECSS-E-ST-40C, a space software engineering standard, generating models and mappings, as well as by reverse compliance management of an extended process model.
Context: Container-based virtualization is gaining popularity in different domains, as it supports continuous development and improves the efficiency and reliability of run-time environments. Problem: Different techniques have been proposed for monitoring the security of containers. However, there are no guidelines supporting the selection of suitable techniques for the task at hand. Objective: We aim to support the selection and design of techniques for monitoring container-based virtualization environments. Approach: First, we review the literature and identify techniques for monitoring containerized environments. Second, we classify these techniques according to a set of categories, such as technical characteristics, applicability, effectiveness, and evaluation. We further detail the pros and cons associated with each of the identified techniques. Result: As a result, we present CONSERVE, a multi-dimensional decision support framework for an informed and optimal selection of a suitable set of container monitoring techniques to be implemented in different application domains. Evaluation: A mix of eighteen researchers and practitioners evaluated the ease of use, understandability, usefulness, efficiency, applicability, and completeness of the framework. The evaluation shows a high level of interest and points to potential benefits.
The vulnerabilities in deployed IoT devices are a threat to critical infrastructure and user privacy. There is ample ongoing research and effort to produce devices that are secure by design, but these efforts have yet to translate into actual deployments. To address this, worldwide efforts towards IoT device and software certification have accelerated as a potential solution, including the UK's IoT assurance program, the EU Cybersecurity Act, and the US Executive Order 14028. In the EU, the Cybersecurity Act was launched in 2019, initiating the European cybersecurity certification framework for Information and Communications Technology (ICT). The heterogeneity of the IoT landscape, with devices ranging from industrial to consumer, makes it challenging to incorporate IoT devices in the certification framework or to introduce a European cybersecurity certification scheme solely for IoT. This paper analyses the cybersecurity certification prospects for IoT devices and places Article 54 of the EU Cybersecurity Act in an international perspective. We conducted a comparative study of existing IoT certification schemes to identify potential gaps and extract requirements for a candidate IoT device security certification scheme. We also propose an approach that can be used as a template to instantiate an EU cybersecurity certification scheme for IoT devices. In the proposed template, we identify IoT-critical elements from Article 54 of the Cybersecurity Act. We also evaluate the proposed template using the ENISA qualification system for cybersecurity certification schemes and show that it qualifies on all criteria.
The Workshop on Automating Test Case Design, Selection and Evaluation (A-TEST) has provided a venue for researchers and industry members alike to exchange and discuss trending views, ideas, state of the art, work in progress, and scientific results on automated testing. It has run 13 editions since 2009. The 13th edition of A-TEST was held as an in-person workshop in Singapore on 17-18 November 2022, co-located with the ESEC/FSE 2022 conference.
As vehicles become more and more connected with their surroundings and utilize an increasing number of services, they also become more exposed to threats as the attack surface increases. With increasing attack surfaces and continuing challenges of eliminating vulnerabilities, vehicles need to be designed to work even under malicious activities, i.e., under attacks. In this paper, we present a resilience framework that integrates analysis of safety and cybersecurity mechanisms. We also integrate resilience for safety and cybersecurity into the fault – error – failure chain. The framework is useful for analyzing the propagation of faults and attacks between different system layers. This facilitates identification of adequate resilience mechanisms at different system layers as well as deriving suitable test cases for verification and validation of system resilience using fault and attack injection.
Memory-augmented neural networks enhance a neural network with an external key-value (KV) memory whose complexity is typically dominated by the number of support vectors in the key memory. We propose a generalized KV memory that decouples its dimension from the number of support vectors by introducing a free parameter that can arbitrarily add or remove redundancy to the key memory representation. In effect, it provides an additional degree of freedom to flexibly control the tradeoff between robustness and the resources required to store and compute the generalized KV memory. This is particularly useful for realizing the key memory on in-memory computing hardware where it exploits nonideal, but extremely efficient nonvolatile memory devices for dense storage and computation. Experimental results show that adapting this parameter on demand effectively mitigates up to 44% nonidealities, at equal accuracy and number of devices, without any need for neural network retraining.
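The decoupling idea can be sketched as follows. This is a hedged illustration: a random projection is used here as one simple way to give the key memory an arbitrary width d independent of the number of support vectors; the paper's actual construction and its mapping to in-memory computing hardware are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_memory(support, labels, d):
    """Project n support vectors of shape (n, f) into a key memory of chosen width d."""
    f = support.shape[1]
    proj = rng.standard_normal((f, d)) / np.sqrt(d)   # random projection to dim d
    return proj, support @ proj, labels               # keys have shape (n, d)

def query(proj, keys, labels, x):
    # Cosine-similarity read-out over the key memory (hard argmax for brevity).
    q = x @ proj
    sims = keys @ q / (np.linalg.norm(keys, axis=1) * np.linalg.norm(q) + 1e-9)
    return labels[int(np.argmax(sims))]

# Two classes of support vectors pointing in opposite directions.
support = np.vstack([rng.normal(1.0, 0.1, (5, 16)),
                     rng.normal(-1.0, 0.1, (5, 16))])
labels = np.array([0] * 5 + [1] * 5)

# d is a free parameter: a wide, redundant key memory or a narrow, cheap one,
# both built from the same support vectors.
proj_w, keys_w, _ = build_memory(support, labels, d=64)
proj_n, keys_n, _ = build_memory(support, labels, d=4)
```

Varying d trades redundancy (robustness to noisy or nonideal storage) against the devices needed to store and compute the key memory, which is the degree of freedom the abstract describes.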
In the Internet of things (IoT), traffic often goes via middleboxes, such as brokers or virtual private network (VPN) gateways, thereby increasing the trusted computing base (TCB) of IoT applications considerably. A remedy is offered by the application layer security protocol Object Security for Constrained RESTful Environments (OSCORE). It allows for basic middlebox functions without breaking end-to-end security. With OSCORE, however, traffic is routed to IoT devices largely unfiltered. This opens up avenues for remote denial-of-sleep attacks where a remote attacker injects OSCORE messages so as to cause IoT devices to consume more energy. The state-of-the-art defense is to let a trusted middlebox perform authenticity, freshness, and per-client rate limitation checks before forwarding OSCORE messages to IoT devices, but this solution inflates the TCB and hence negates the idea behind OSCORE. In this paper, we suggest filtering OSCORE messages in a RISC-V-based trusted execution environment (TEE) running on a middlebox that remains widely untrusted. To realize this approach, we also put forward the tiny remote attestation protocol (TRAP), as well as a Layer 2 integration that prevents attackers from bypassing our TEE. Experimental results show our remote denial-of-sleep defense to be lightweight enough for low-end IoT devices and to keep the TCB small.
The automatic detection and classification of stance (e.g., certainty or agreement) in text data using natural language processing and machine-learning methods creates an opportunity to gain insight into speakers' attitudes toward their own and other people's utterances. However, identifying stance in text presents many challenges related to training data collection and classifier training. To facilitate the entire process of training a stance classifier, we propose a visual analytics approach, called ALVA, for text data annotation and visualization. ALVA's interplay with the stance classifier follows an active learning strategy to select suitable candidate utterances for manual annotation. Our approach supports annotation process management and provides the annotators with a clean user interface for labeling utterances with multiple stance categories. ALVA also contains a visualization method to help analysts of the annotation and training process gain a better understanding of the categories used by the annotators. The visualization uses a novel visual representation, called CatCombos, which groups individual annotation items by their combination of stance categories. Additionally, our system makes available a visualization of a vector space model based on the utterances. ALVA is already being used by our domain experts in linguistics and computational linguistics to improve the understanding of stance phenomena and to build a stance classifier for applications such as social media monitoring.
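The active learning step ALVA relies on, picking the utterances the current classifier is least certain about for manual annotation, can be sketched as follows. The scoring function, the toy classifier, and the example utterances are illustrative placeholders, not ALVA's actual model or stance categories.

```python
def uncertainty(prob):
    # Distance from a confident decision: 1.0 at prob 0.5, 0.0 at prob 0 or 1.
    return 1.0 - abs(prob - 0.5) * 2.0

def select_candidates(unlabeled, predict, k=2):
    """Return the k utterances whose predictions are the most uncertain."""
    scored = sorted(unlabeled, key=lambda u: uncertainty(predict(u)), reverse=True)
    return scored[:k]

# Toy classifier: a "certainty" stance signalled by the word "definitely".
def toy_predict(utterance):
    return 0.9 if "definitely" in utterance else 0.55

pool = ["it is definitely true", "maybe it rains", "definitely not", "perhaps so"]
```

In each active-learning round, the selected candidates would be shown to the annotators, and the classifier retrained on the newly labeled data before the next selection.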
Safety-critical systems are required to comply with safety standards as well as security and privacy standards. In order to provide insights into how practitioners apply the standards on safety, security or privacy (Sa/Se/Pr), as well as how they employ Sa/Se/Pr analysis methodologies and software tools to meet such criteria, we conducted a questionnaire-based survey. This paper summarizes our major analysis results of the received responses.
There is an evident need to complement embedded critical control logic with AI inference, but today's AI-capable hardware, software, and processes are primarily targeted towards the needs of cloud-centric actors. The telecom and defense aerospace industries, which make heavy use of specialized hardware, face the challenge of manually hand-tuning AI workloads and hardware, which presents unprecedented cost and complexity due to the diversity and sheer number of deployed instances. Furthermore, embedded AI functionality must not adversely affect the real-time and safety requirements of the critical business logic. To address this, end-to-end AI pipelines for critical platforms are needed that automate the adaptation of networks to fit resource-constrained devices under critical and real-time constraints, while remaining interoperable with the de-facto standard AI tools and frameworks used in the cloud. We present two industrial applications where such solutions are needed to bring AI to critical and resource-constrained hardware, and a generalized end-to-end AI pipeline that addresses these needs. Crucial steps to realize it are taken in the industry-academia collaborative FASTER-AI project.
This paper presents CarFASE, an open-source CARLA-based fault and attack simulation engine used to test and evaluate the behavior of autonomous driving stacks in the presence of faults and attacks. CARLA is a highly customizable and adaptable simulator for autonomous driving research. In this paper, we demonstrate the application of CarFASE by running fault injection experiments on OpenPilot, an open-source advanced driver assistance system designed to provide features such as lane keeping, adaptive cruise control, and forward collision warning to enhance the driving experience. A braking scenario is used to study the behavior of OpenPilot in the presence of brightness and salt-and-pepper faults. The results demonstrate the usefulness of the tool in evaluating the safety attributes of autonomous driving systems in a safe and controlled environment.
In this work, we evaluate the safety of a platoon of four vehicles under jamming attacks. The platooning application is provided by Plexe-veins, a cooperative driving framework, and the vehicles in the platoon are equipped with cooperative adaptive cruise control controllers to represent the vehicles' behavior. The jamming attacks investigated are modeled by extending ComFASE (a Communication Fault and Attack Simulation Engine) and represent three real-world attacks, namely, destructive interference, barrage jamming, and deceptive jamming. The attacks are injected in the physical layer of the IEEE 802.11p communication protocol simulated in Veins (a vehicular network simulator). To evaluate the safety implications of the injected attacks, the experimental results are classified using the deceleration profiles and collision incidents of the vehicles. The results of our experiments show that jamming attacks on the communication can jeopardize vehicle safety, causing emergency braking and collision incidents. Moreover, we describe the impact of different attack injection parameters (such as attack start time, attack duration and attack value) on the behavior of the vehicles subjected to the attacks.
Embedded electronic systems used in vehicles are becoming more exposed and thus vulnerable to different types of faults and cybersecurity attacks. Examples of these systems are advanced driver assistance systems (ADAS) used in vehicles with different levels of automation. Failures in these systems could have severe consequences, such as loss of lives and environmental damage. Therefore, these systems should be thoroughly evaluated during different stages of product development. An effective way of evaluating these systems is through the injection of faults and monitoring their impacts. In this paper, we present SUFI, a simulation-based fault injector that is capable of injecting faults into ADAS features simulated in SUMO (simulation of urban mobility) and of analysing the impact of the injected faults on the entire traffic. Simulation-based fault injection is usually used at early stages of product development, especially when the target hardware is not yet available. Using SUFI, we target the car-following and lane-changing features of ADAS modelled in SUMO. The results of the fault injection experiments show the effectiveness of SUFI in revealing the weaknesses of these models when targeted by faults and attacks.
A remotely operated road vehicle (RORV) refers to a vehicle operated wirelessly from a remote location. In this paper, we report results from an evaluation of two safety mechanisms: safe braking and disconnection. These safety mechanisms are included in the control software for RORV developed by Roboauto, an intelligent mobility solutions provider. The safety mechanisms monitor the communication system to detect packet transmission delays, lost messages, and outages caused by naturally occurring interference as well as denial-of-service (DoS) attacks. When the delay in the communication channel exceeds certain threshold values, the safety mechanisms are to initiate control actions to reduce the vehicle speed or stop the affected vehicle safely as soon as possible. To evaluate the effectiveness of the safety mechanisms, we exposed the vehicle control software to various communication failures using a software-in-the-loop (SIL) testing environment developed specifically for this study. Our results show that the safety mechanisms behaved correctly for a vast majority of the simulated communication failures. However, in a few cases, we noted that the safety mechanisms were triggered incorrectly, either too early or too late, according to the system specification.
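The threshold logic described above can be sketched as follows. The threshold values and action strings are hypothetical placeholders for illustration, not Roboauto's actual parameters or interfaces.

```python
# Hypothetical channel-delay thresholds (ms); the real system's values differ.
SAFE_BRAKE_MS = 200    # above this delay, reduce vehicle speed
DISCONNECT_MS = 1000   # above this delay, stop the vehicle safely

def react(delay_ms):
    """Map an observed communication delay to a safety-mechanism action."""
    if delay_ms >= DISCONNECT_MS:
        return "disconnect: stop vehicle safely"
    if delay_ms >= SAFE_BRAKE_MS:
        return "safe braking: reduce speed"
    return "normal operation"
```

The failure modes reported in the study (triggering too early or too late) correspond to the observed delay crossing these thresholds at the wrong time relative to the actual channel condition.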
This paper presents ComFASE, a communication fault and attack simulation engine. ComFASE is used to identify and evaluate potentially dangerous behaviours of interconnected automated vehicles in the presence of faults and attacks in wireless vehicular networks. ComFASE is built on top of OMNeT++ (a network simulator) and integrates SUMO (a traffic simulator) and Veins (a vehicular network simulator). The tool is flexible in modelling different types of faults and attacks and can be effectively used to study the interplay between safety and cybersecurity attributes by injecting cybersecurity attacks and evaluating their safety implications. To demonstrate the tool, we present results from a series of simulation experiments, where we injected delay and denial-of-service attacks on wireless messages exchanged between vehicles in a platooning application. The results show how different variants of attacks influence the platooning system in terms of collision incidents.
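The kind of attack models ComFASE injects can be illustrated with a minimal sketch (illustrative only, since ComFASE itself plugs into OMNeT++/Veins rather than a toy event queue): an attack model intercepts wireless messages and either delays them or drops them.

```python
import heapq

def run(messages, attack=None):
    """messages: list of (send_time_ms, payload); returns the delivery log in arrival order."""
    queue = []
    for t, payload in messages:
        if attack:
            t, payload = attack(t, payload)
            if payload is None:          # dropped by a denial-of-service attack
                continue
        heapq.heappush(queue, (t, payload))
    return [heapq.heappop(queue) for _ in range(len(queue))]

def delay_attack(start, duration, delay):
    # Messages sent inside the attack window arrive late by `delay` ms.
    def attack(t, payload):
        if start <= t < start + duration:
            return t + delay, payload
        return t, payload
    return attack

def dos_attack(start, duration):
    # Messages sent inside the attack window are never delivered.
    def attack(t, payload):
        if start <= t < start + duration:
            return t, None
        return t, payload
    return attack
```

Varying the attack parameters (start time, duration, delay value) and observing the resulting beacon arrival pattern mirrors, in miniature, the experiments on platooning messages described in the abstract.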
This paper presents our initial contributions toward defining security benchmarks for simulation-based assessment of Cooperative Driving Automation (CDA) applications. A security benchmark is a process or procedure for assessing and validating a system's ability to achieve its operational objectives in the presence of specific security attacks. This work lays the groundwork for developing security benchmarks that assess the robustness of CDA applications against jamming attacks. The driving scenario and the attack model are the core components of our proposed security benchmark. We used two scenarios, braking and sinusoidal, as stimuli for evaluating the robustness of a platooning application modeled in a simulation framework called Plexe. The platooning application is equipped with a Cooperative Adaptive Cruise Control (CACC) controller. We injected barrage jamming attacks into the physical layer of the wireless communication system modeled by the IEEE 802.11p protocol. We demonstrate that jamming attacks can compromise safety, leading to emergency braking and collision incidents among platooning vehicles. Our findings also indicate that the severity of jamming attacks varies with the driving scenario, with the most severe impacts (i.e., collisions) occurring when the attack is injected during vehicle acceleration.