Cellulose nanocrystals (CNCs) and 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO)-oxidized cellulose nanofibrils (T-CNFs) were tested as enhanced oil recovery (EOR) agents through core floods and microfluidic experiments. Both particles were mixed with low salinity water (LSW). The core floods were grouped into three parts based on the research objectives. In Part 1, a secondary core flood using CNCs was compared to regular water flooding at fixed conditions, reusing the same core plug to maintain the same pore structure. CNCs produced 5.8% of original oil in place (OOIP) more oil than LSW. In Part 2, the effect of injection scheme, temperature, and rock wettability was investigated using CNCs. The same trend was observed for the secondary floods, with CNCs performing better than the parallel experiment using LSW. Furthermore, the particles seemed to perform better under mixed-wet conditions. Additional oil (2.9–15.7% of OOIP) was produced when CNCs were injected as a tertiary EOR agent, with more incremental oil produced at high temperature. In the final part, the effect of particle type was studied. T-CNFs produced significantly more oil than CNCs. However, the injection of T-CNF particles resulted in a steep increase in pressure, which never stabilized. Furthermore, a filter cake was observed at the core face after the experiment was completed. Microfluidic experiments showed that both T-CNF and CNC nanofluids led to better sweep efficiency than low salinity water flooding. T-CNF particles showed the ability to enhance oil recovery through oil break-up events and by reducing the trapping efficiency of the porous medium. A higher flow rate resulted in lower oil recovery factors and higher remaining oil connectivity. Contact angle and interfacial tension measurements were conducted to understand the oil recovery mechanisms. CNCs altered the interfacial tension the most, while T-CNFs had the largest effect on the contact angle. However, the changes were not significant enough to be considered primary EOR mechanisms.
The use of wood-derived cellulose nanofibrils (CNFs) or galactoglucomannans (GGM) for emulsion stabilization may be a way to obtain new environmentally friendly emulsifiers. Both have previously been shown to act as emulsifiers, offering physical and, in the case of GGM, oxidative stability to the emulsions. Oil-in-water emulsions were prepared using highly charged (1352 ± 5 µmol/g) CNFs prepared by TEMPO-mediated oxidation, or a coarser, less charged (≈70 µmol/g) commercial CNF quality (Exilva forte), and the physical emulsion stability was evaluated by use of droplet size distributions, micrographs and visual appearance. The highly charged, finely fibrillated CNFs stabilized the emulsions more effectively than the coarser, lower charged CNFs, probably due to stronger electrostatic repulsion between the fibrils and a higher surface coverage of the oil droplets owing to the thinner fibrils. At a constant CNF/oil ratio, the lowest CNF and oil concentrations of 0.01 wt % CNFs and 5 wt % oil gave the most stable emulsion, with good stability toward coalescence but not towards creaming. GGM-stabilized (0.5 or 1.0 wt %) emulsions (5 wt % oil) showed no creaming behavior, but a clear bimodal droplet size distribution with some destabilization over the storage time of 1 month. Combinations of CNFs and GGM for stabilization of emulsions with 5 wt % oil provided good stability towards creaming and a slower emulsion destabilization than for GGM alone. GGM could also improve the stability towards oxidation by delaying the initiation of lipid oxidation. The use of CNFs and combinations of GGM and CNFs can thus be a way to obtain stable emulsions, such as mayonnaise and beverage emulsions.
This report presents results from a project carried out by the Fire Research and Innovation Centre (FRIC) from 2020 to 2022. The first version of the report was published in English in March 2022. This Norwegian version has been translated jointly by SINTEF Digital and RISE Fire Research. Special thanks go to Caroline Kristensen for her work on the translation. The report has also been updated on certain points, without changing its focus or conclusions.
Learning from incidents is widely accepted as a core part of safety management. This is also true for fires; however, few fires in Norway are investigated. Fires are conceptually interesting incidents due to their potential for devastating outcomes for property and human lives, and because they happen across all sectors, industries, businesses, and homes. In Norway, several different actors play a role in investigating and learning from fires, from the fire and rescue services to directorates and non-governmental organisations. The present study seeks to understand the preconditions for learning from fires in Norway, with emphasis on the formal actors that play a role in preventing and mitigating fires. Methodologically, the study is based on qualitative interviews conducted with relevant actors from first responders, authorities, and other sectors. The results were analyzed using thematic analysis and the Pentagon model framework. We found that structural, cultural, technological, and relational aspects seem to influence learning from fires in Norway. The findings are discussed in relation to theories from organisational learning and learning from incidents.
Experiences regarding personal protection water mist systems installed in dwellings. Personal protection water mist systems can produce a water mist that can cool down and limit a fire in a small area of a dwelling. The system is equipped with sensitive detectors that can activate the system in the early stages of a fire and limit fire spread, and in some cases extinguish the fire. This gives more time for evacuation, which can be especially important for vulnerable people with risk factors such as impaired cognitive and physical functioning. The goal of this study has been to map the experiences in Norway regarding personal protection water mist systems, considering how the municipalities have experienced the work related to the systems and whether the systems have activated and saved lives. This sheds light on whether mobile water mist systems are appropriate measures for vulnerable people in society, and on the risk factors that determine whether the measure is appropriate or not. The study used literature studies, questionnaires, and interviews to map the experiences with personal protection water mist systems in Norway. The results showed that personal protection water mist systems installed in Norwegian dwellings have been activated in connection with fire outbreaks and have thus limited or extinguished the fire. This has saved lives on several occasions and reduced the damage potential. Many people have risk factors that make it appropriate to install a mobile water mist system in their home, but there are also exceptions. The risk factors that indicate that it is beneficial to install mobile water mist systems in Norwegian dwellings are:
- Impaired cognitive abilities
- Impaired physical abilities
- Drug and alcohol problems
- Smoking
- Living alone
The systems are particularly suitable when several of these risk factors are present at the same time. It was also shown that personal protection water mist systems are not suitable for mobile people who spend time in several places in the home and are therefore often outside the system's coverage area. Personal protection water mist systems are not recommended for people who may have the potential to sabotage the system. The questionnaires and interviews revealed large differences in how Norwegian municipalities work with assigning, installing, operating, and maintaining personal protection water mist systems. Larger municipalities rely more on routines and formal processes for this work, and a greater proportion of them therefore distribute the systems to individuals, compared with small municipalities where the work is more characterised by informal routines and personal relations. Based on the results from this study, it is our opinion that the following aspects should be covered by future work:
• A need for a new and updated cost-benefit analysis for personal protection water mist systems.
• A need for a better statistical basis for assessment of personal protection water mist systems.
• A need for a Norwegian test standard for personal protection water mist systems.
• A need for clear guidelines for assignment, procurement, installation, operation, and maintenance of personal protection water mist systems.
The Scandinavian countries have in recent years seen several severe wildfires and are expected to face more severe fire danger. While direct flame spread has been an important topic in wildfire research, there is a need for further development and for ensuring that experimental methods are relevant for Scandinavian wildfire characteristics. To ensure relevant lab conditions for fire-resilient material development work, large lab-scale (2×4 meters) experiments were conducted on various fuels. The fire behaviour (such as rate of spread, fireline intensity and flame length) was compared with data from ongoing wildfire field studies in boreal and hemiboreal Sweden. The lab fire experiments show good potential to mimic relevant natural wildfire conditions in the laboratory once a standard design fire exposure for fire-resilient materials is developed.
The late 1990s and early 2000s were a period of relatively extensive research and innovation in the area of manual fire extinguishing methods and equipment for the fire service. New equipment such as the cutting extinguisher and extinguishing spears made it possible to conduct offensive attacks from the exterior of a building, reducing the exposure of firefighters to fire and smoke and their associated risks in general. This led to the development of new firefighting tactics, for example the Quadrant Model of the Dutch fire service, which extends the “traditional” offensive interior attack and defensive exterior attack with the offensive exterior attack and the defensive interior attack. Recently, the research focus has increasingly shifted to environmental aspects, such as water consumption and the effects of additives (e.g., foam) on humans and the environment. Extinguishing with smaller amounts of water is beneficial for the environment, reduces water damage and lowers the burden on the water delivery system. In conclusion, the systems most relevant to be further tested in a fire situation in a small house or dwelling are the cutting extinguisher and the extinguishing spear. These systems are different in operation but have both shown to be promising with regard to fulfilling the different objectives of the overall project. Being relatively easy to use with the right training during interior extinguishing efforts executed from the outside of the building, and being water-based only, which together with the lower water consumption minimises contamination of the surrounding areas, gives these systems advantages over conventional equipment. Especially if the systems are used in combination with an IR camera to locate the fire, the extinguishing efforts can be started early and effectively, and the amount of water needed to control the fire may be reduced. The need for firefighters with breathing apparatus is reduced as well, hence reducing the smoke exposure of firefighters. The fact that the fire service also recognizes the potential of using these systems early in the extinguishing efforts, and is working on implementing them, prompts the need for scientific backup.
The RISE report 2019:04 «Charcoal and wood burning ovens in restaurants – Fire safety and documentation requirements» [1] investigated regulations and documentation demands tied to charcoal and wood burning ovens in restaurants in Norway. Part of the conclusion in that report emphasized the need to map, through physical testing, whether existing test standards cover the safety requirements of charcoal ovens in restaurants. NS-EN 13240:2001 «Roomheaters fired with solid fuel. Requirements and test methods» [2] was chosen as a relevant test standard. Three test ovens (a closed test oven, a dummy oven and an open test oven) were produced at RISE Fire Research. Their construction with regard to insulation capabilities, materials and dimensions was based on existing charcoal ovens on the Norwegian market. This was done to achieve an objective depiction of the issue, without the need for a specific brand of ovens. Charcoal for restaurant ovens was used to achieve as realistic a heat development as possible in the test ovens. The test layout is based on NS-EN 13240:2001, with a test rig constructed of two «safety walls», a ceiling and a floor fitted with thermocouples. Temperatures from the test oven are registered in the safety walls at several positions according to a standardised grid, while the ceiling and the floor each have a single measurement position at the warmest point. Thermocouples in the chimney and exhaust duct measured the flue gas temperatures transported to the exhaust system. Four different tests were conducted. The first was a standardized safety test with the closed oven model. The second test used the same safety test setup with the dummy oven beside the closed oven; the dummy contained a built-in propane burner to simulate the heat load from a real oven, the purpose being to simulate two ovens placed next to each other. The third test was an overload test on the closed test oven with 150 % fuel load and a higher refuelling frequency. The last test was a test of the open test oven. The safety test method described in NS-EN 13240:2001 is suitable for testing the level of stable maximum temperature in the surrounding combustible materials, in the same way as for the roomheaters the method is designed for. The method addresses safety aspects such as surface temperatures and handles on the oven. Tests show that the temperatures developed in the ovens have the potential to breach the temperature criterion given by the test standard, and thereby contribute to the ignition of surrounding combustible materials. Such situations pose a fire risk, and safety measures regarding this aspect must be documented by the producer. NS-EN 13240:2001 does not cover temperatures for the exhaust duct, nor the production of sparks and their possible spread to combustible materials. These are important safety aspects which must be addressed when documenting the fire safety of restaurant grills. Tests show that sparks are created in the oven, including from restaurant charcoal fuel, and are transported into the exhaust duct and out through the opening of the grill door. Together with high flue gas temperatures in the exhaust duct and deposits of soot and cooking oil, this poses a fire risk. Documentation must therefore be presented showing that the oven is equipped with measures (for instance a spark screen) which protect the exhaust duct from sparks to a satisfactory degree.
Operators of the oven must receive adequate training and must operate the closed oven with caution, so as to avoid incidents with sparks being released through the door. The placement of ovens next to each other does not seem to increase the heat load on surrounding walls but may lead to increased temperatures between the ovens. The consequences of such temperature increases must be documented. Tests show that overloading with fuel and intensifying the refuelling intervals can lead to increased temperatures in the oven, which can affect materials and welding seams. Overloading can also affect the temperatures towards surrounding walls and exhaust ducts and may therefore affect fire safety negatively. NS-EN 13240:2001 requires the producer to document how the oven is constructed and of what materials, and that the welding seams are dimensioned for the materials used. It is recommended that the producer documents the safety level of the oven materials with an overload test. It must also be documented that the exhaust ducts in which the flue gas is transported are constructed to handle the temperatures that can arise, including during erroneous use.
This article presents an analysis of a fire in a municipal apartment building used as housing for people with challenges connected to drug addiction. The fire took place in Norway on 7 August 2021. The incident happened during the night, and the fire spread quickly and intensely via the external wooden balconies. The combination of risk factors connected both to the fire development and to the characteristics of the occupants raised the potential for fire fatalities. This analysis seeks to understand why the fire spread at such speed, and how everyone in the building survived without injuries. The analysis identified both technical and human factors that may help answer these questions. The findings suggest that there were deficiencies connected to the technical fire safety design that, if improved, could have reduced the fire damage. Factors promoting the fire spread and fire intensity include the choice of wood material used in the construction of the balconies, the lack of a sprinkler system on the balconies, and a large fire load on the balconies caused by the occupants' tendency to accumulate possessions there. Factors contributing to the outcome of no injuries or fatalities included the occupants being awake during these late hours and the strong social network between them. Such a network should be seen as a positive factor regarding robustness against fire and should be encouraged.
This paper introduces a novel approach to designing autonomous gate drivers for soft-switched buck converters. The objective is to reduce switching losses, enhance converter efficiency, and reduce electromagnetic interference (EMI). The uniqueness of this converter is that the pulse-width modulation is performed autonomously on the gate driver. The gate driver makes quick decisions on switching times, capitalizing on the minimal time delay between measurement and switching. In the proposed buck converter configuration, the gate driver senses both the current through and the voltage across the switches to avoid delay. When a slightly negative voltage is detected across the switch, it rapidly turns on, resulting in zero-voltage switching (ZVS). With an external snubber capacitor placed across the switches, the turn-off switching losses are zero as well. Hence, both the turn-on and turn-off of the switch are soft. To enable the switch to turn off, a reference value of the switch current needs to be sent to the gate driver using a galvanically isolated current sensor. Through this approach, the efficiency of the 7 kW buck converter has been calculated to exceed 99%, excluding the filter losses. Additional benefits include reduced switch stresses, diminished EMI, and simplified thermal management.
Software systems often target a variety of different market segments. Targeting varying customer requirements requires a product-focused development process. Software Product Line (SPL) engineering is one possible approach, based on a reuse rationale, to aid quick delivery of quality product variants at scale. SPLs reuse common features across derived products while still providing varying configuration options. The common features, in most cases, are realized by reusable assets. In practice, the assets are reused in a clone-and-own manner to reduce the upfront cost of systematic reuse. Moreover, the assets are implemented in increments, and requirements prioritization also has to be done. In this context, the manual reuse analysis and prioritization process becomes impractical when the number of derived products grows. In addition, the manual reuse analysis process is time-consuming and heavily dependent on the experience of engineers. In this licentiate thesis, we study requirements-level reuse recommendation and prioritization for SPL assets in industrial settings. We first identify challenges and opportunities in SPLs where reuse is done in a clone-and-own manner. We then focus on one of the identified challenges: requirements-based SPL asset reuse, and provide automated support for identifying reuse opportunities for SPL assets based on requirements. Finally, we provide automated support for requirements prioritization in the presence of dependencies resulting from reuse.
Problem: The goal of a software product line is to aid quick and quality delivery of software products sharing common features. Effectively achieving this goal requires reuse analysis of the product line features. Existing requirements reuse analysis approaches are not focused on recommending product line features that can be reused to realize new customer requirements. Hypothesis: Given that customer requirements are linked to the product line features' descriptions satisfying them, the customer requirements can be clustered based on patterns and similarities, preserving the historic reuse information. New customer requirements can then be evaluated against existing customer requirements, and reuse of product line features can be recommended. Contributions: We treated the problem of feature reuse analysis as a text classification problem at the requirements level. We use Natural Language Processing and clustering to recommend reuse of features based on similarities and historic reuse information. The recommendations can be used to realize new customer requirements.
[Context and Motivation] Content-based recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a new requirement is proposed by a stakeholder, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements, and in turn identify previously developed code. [Question/problem] Several NLP approaches for similarity computation are available, and there is little empirical evidence on the adoption of an effective technique in recommender systems specifically oriented to requirements-based code reuse. [Principal ideas/results] This study compares different state-of-the-art NLP approaches and correlates the similarity among requirements with the similarity of their source code. The evaluation is conducted on real-world requirements from two industrial projects in the railway domain. Results show that requirements similarity computed with the traditional tf-idf approach has the highest correlation with the actual software similarity in the considered context. Furthermore, results indicate a moderate positive correlation with Spearman’s rank correlation coefficient of more than 0.5. [Contribution] Our work is among the first ones to explore the relationship between requirements similarity and software similarity. In addition, we also identify a suitable approach for computing requirements similarity that reflects software similarity well in an industrial context. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and categorization.
Recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a stakeholder proposes a new requirement, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements, and in turn, identify previously developed code. Several NLP approaches for similarity computation between requirements are available. However, there is little empirical evidence on their effectiveness for code retrieval. This study compares different NLP approaches, from lexical ones to semantic, deep-learning techniques, and correlates the similarity among requirements with the similarity of their associated software. The evaluation is conducted on real-world requirements from two industrial projects from a railway company. Specifically, the most similar pairs of requirements across two industrial projects are automatically identified using six language models. Then, the trace links between requirements and software are used to identify the software pairs associated with each requirements pair. The software similarity between pairs is then automatically computed with JPLag. Finally, the correlation between requirements similarity and software similarity is evaluated to see which language model shows the highest correlation and is thus more appropriate for code retrieval. In addition, we perform a focus group with members of the company to collect qualitative data. Results show a moderately positive correlation between requirements similarity and software similarity, with the pre-trained deep learning-based BERT language model with preprocessing outperforming the other models. Practitioners confirm that requirements similarity is generally regarded as a proxy for software similarity. However, they also highlight that additional aspects come into play when deciding on software reuse, e.g., domain/project knowledge, information coming from test cases, and trace links. Our work is among the first to explore the relationship between requirements and software similarity from a quantitative and qualitative standpoint. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and change impact analysis.
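As an illustration of the analysis described above, the sketch below (not the authors' code; the requirement texts and similarity values are hypothetical) pairs tf-idf cosine similarity between requirements with externally obtained software-similarity scores for the corresponding code pairs and reports Spearman's rank correlation between the two.

```python
# Minimal sketch of the correlation analysis: tf-idf similarity between requirement
# pairs vs. a software-similarity score for the corresponding code pairs
# (e.g., from a code-clone detection tool). All data below are hypothetical.
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The system shall log every brake command issued by the driver.",
    "All traction commands from the driver shall be recorded in the event log.",
    "The HMI shall display the current line voltage to the operator.",
]
# Software similarity for the pairs (0,1), (0,2), (1,2), e.g., clone scores in [0, 1].
software_similarity = [0.72, 0.10, 0.08]

# Requirements similarity via tf-idf vectors and cosine similarity.
tfidf = TfidfVectorizer(lowercase=True, stop_words="english")
vectors = tfidf.fit_transform(requirements)
sim_matrix = cosine_similarity(vectors)

pairs = [(0, 1), (0, 2), (1, 2)]
req_similarity = [sim_matrix[i, j] for i, j in pairs]

# Spearman's rank correlation between the two similarity rankings.
rho, p_value = spearmanr(req_similarity, software_similarity)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```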
Processing and reviewing nightly test execution failure logs for large industrial systems is a tedious activity. Furthermore, multiple failures might share one root/common cause during test execution sessions, and the review might therefore require redundant effort. This paper presents the LogGrouper approach for automated grouping of failure logs to aid root/common cause analysis and to enable processing each log group as a batch. LogGrouper uses state-of-the-art natural language processing and clustering approaches to achieve meaningful log grouping. The approach is evaluated in an industrial setting in both a qualitative and quantitative manner. Results show that LogGrouper produces good-quality groupings in terms of our two clustering-quality metrics (Silhouette Coefficient and Calinski-Harabasz Index). The qualitative evaluation shows that experts perceive the groups as useful, and the groups are seen as an initial pointer for root cause analysis and failure assignment.
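The following sketch illustrates the general shape of such a pipeline; it is not the LogGrouper implementation, and the log texts, vectorizer, clustering algorithm and number of groups are all assumptions.

```python
# Sketch of an assumed pipeline: vectorize failure logs, cluster them, and report
# the two clustering-quality metrics mentioned above (Silhouette, Calinski-Harabasz).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import calinski_harabasz_score, silhouette_score

failure_logs = [
    "ERROR timeout waiting for response from signalling unit 3",
    "ERROR timeout waiting for response from signalling unit 7",
    "FATAL null pointer dereference in diagnostics module",
    "FATAL null pointer dereference in diagnostics service",
    "WARN retrying connection to database after broken pipe",
    "WARN retrying connection to database after reset",
]

vectors = TfidfVectorizer().fit_transform(failure_logs).toarray()

n_groups = 3  # assumed; in practice chosen by sweeping k and comparing the scores
labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(vectors)

print("groups:", labels.tolist())
print("silhouette:", round(silhouette_score(vectors, labels), 3))
print("calinski-harabasz:", round(calinski_harabasz_score(vectors, labels), 3))
```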
Requirements prioritization plays an important role in driving project success during software development. The literature reveals that existing requirements prioritization approaches ignore vital factors such as the interdependencies between requirements. Existing requirements prioritization approaches are also generally time-consuming and involve substantial manual effort. In addition, these approaches show substantial limitations in terms of the number of requirements they can handle. There is some evidence suggesting that models could have a useful role in the analysis of requirements interdependencies and their visualization, contributing towards the improvement of the overall requirements prioritization process. However, to date, only a handful of studies have focused on model-based strategies for requirements prioritization, considering only conflict-free functional requirements. This paper uses a meta-model-based approach to help the requirements analyst model the requirements, stakeholders, and interdependencies between requirements. The model instance is then processed by our modified PageRank algorithm to prioritize the given requirements. An experiment was conducted comparing our modified PageRank algorithm's efficiency and accuracy with five existing requirements prioritization methods. We also compared our results with a baseline prioritized list of 104 requirements prepared by 28 graduate students. Our results show that our modified PageRank algorithm was able to prioritize the requirements more effectively and efficiently than the other prioritization methods.
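The modified PageRank algorithm itself is not reproduced here; the sketch below only illustrates the underlying idea of ranking requirements by running standard PageRank over a dependency graph, with hypothetical stakeholder weights supplied as the personalization vector.

```python
# Illustrative sketch only: ranking requirements with standard PageRank over a
# dependency graph. Requirement names, edges and weights are hypothetical.
import networkx as nx

# "R1 depends on R2" is encoded as the edge R1 -> R2.
dependencies = [
    ("R1", "R2"),
    ("R3", "R2"),
    ("R4", "R1"),
    ("R4", "R3"),
    ("R5", "R4"),
]
graph = nx.DiGraph(dependencies)

# Hypothetical stakeholder importance used as the personalization vector.
stakeholder_weight = {"R1": 0.3, "R2": 0.1, "R3": 0.2, "R4": 0.3, "R5": 0.1}

scores = nx.pagerank(graph, alpha=0.85, personalization=stakeholder_weight)
priority_order = sorted(scores, key=scores.get, reverse=True)
print("Prioritized requirements:", priority_order)
```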
The use of requirements information in testing is a well-recognized practice in the software development life cycle. The literature reveals that existing test prioritization and selection approaches neglect vital factors affecting test priorities, such as interdependencies between requirement specifications. We believe that models may play a positive role in specifying these interdependencies and in prioritizing tests based on them. However, to date, few studies make use of requirements interdependencies for test case prioritization. This paper uses a meta-model to aid in modeling requirements, their related tests, and the interdependencies between them. The instance of this meta-model is then processed by our modified PageRank algorithm to prioritize the requirements. The requirement priorities are then propagated to the related test cases in the test model, and test cases are selected based on coverage of extra-functional properties. We demonstrate the applicability of our proposed approach on a small example case.
The software system controlling a train is typically deployed on various hardware architectures and must process various signals across those deployments. The increase in such customization scenarios, together with the software's required adherence to various safety standards in different application domains, has led to the adoption of product line engineering within the railway domain. This paper explores the current state of practice of software product line development within a team developing industrial embedded software for a train propulsion control system. Evidence is collected using a focus group session with several engineers and through inspection of archival data. We report several benefits and challenges experienced during product line adoption and deployment. Furthermore, we identify and discuss improvement opportunities, focusing mainly on product line evolution and test automation.
Categorizing existing test specifications can provide insights on coverage of the test suite to extra-functional properties. Manual approaches for test categorization can be time-consuming and prone to error. In this short paper, we propose a semi-automated approach for semantic keywords-based textual test categorization for extra-functional properties. The approach is the first step towards coverage-based test case selection based on extra-functional properties. We report a preliminary evaluation of industrial data for test categorization for safety aspects. Results show that keyword-based approaches can be used to categorize tests for extra-functional properties and can be improved by considering contextual information of keywords.
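A minimal sketch of the keyword-matching idea is shown below; the keyword sets, property names and test text are assumptions, not the study's actual data.

```python
# Sketch of keyword-based categorization of test specifications for
# extra-functional properties. Keyword sets and example text are hypothetical.
keyword_map = {
    "safety": {"emergency", "brake", "interlock", "fail-safe", "overspeed"},
    "security": {"authentication", "encryption", "access control"},
    "performance": {"latency", "throughput", "response time"},
}

def categorize(test_text: str) -> list[str]:
    """Return the extra-functional properties whose keywords occur in the test text."""
    text = test_text.lower()
    return [prop for prop, words in keyword_map.items()
            if any(word in text for word in words)]

test_spec = "Verify that the emergency brake is applied within 200 ms at overspeed."
print(categorize(test_spec))  # expected: ['safety']
```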
This tutorial explores requirements-based reuse recommendation for product line assets in the context of clone-and-own product lines.
Software product lines (SPLs) are based on a reuse rationale to aid quick and quality delivery of complex products at scale. Deriving a new product from a product line requires reuse analysis to avoid redundancy and support a high degree of asset reuse. In this paper, we propose and evaluate automated support for recommending SPL assets that can be reused to realize new customer requirements. Using the existing customer requirements as input, the approach applies natural language processing and clustering to generate reuse recommendations for unseen customer requirements in new projects. The approach is evaluated both quantitatively and qualitatively in the railway industry. Results show that our approach can recommend reuse with 74% accuracy and a 57.4% exact match. The evaluation further indicates that the recommendations are relevant to engineers and can support the product derivation and feasibility analysis phase of the projects. The results encourage further study on automated reuse analysis at other levels of abstraction.
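The sketch below illustrates the general idea (not the evaluated tool): historical customer requirements with known asset reuse are clustered, and a new requirement is assigned to the nearest cluster to recommend the assets most often reused there. All requirement texts and asset names are hypothetical.

```python
# Illustrative sketch: cluster historical customer requirements, then recommend for
# a new requirement the product line asset most often reused in its nearest cluster.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

historical_requirements = [
    "Provide redundant brake control for the trailer unit",
    "Brake control shall be duplicated for the motor coach",
    "Display line voltage and current on the driver HMI",
    "Driver HMI shall show catenary voltage",
]
# Assets reused to realize each historical requirement (hypothetical labels).
reused_assets = ["BrakeCtrl", "BrakeCtrl", "HmiDisplay", "HmiDisplay"]

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(historical_requirements)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

def recommend(new_requirement: str) -> str:
    """Recommend the asset most often reused within the nearest cluster."""
    cluster = kmeans.predict(vectorizer.transform([new_requirement]))[0]
    assets_in_cluster = [a for a, c in zip(reused_assets, kmeans.labels_) if c == cluster]
    return Counter(assets_in_cluster).most_common(1)[0][0]

print(recommend("Show pantograph voltage on the cab display"))  # expected: 'HmiDisplay'
```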
A series of ferrocenyl-substituted hydrazones (I–VII) derived from ferrocene carboxaldehyde and substituted hydrazides have been prepared and characterized by FTIR, 1H NMR spectroscopy, and crystallographic studies. The single-crystal X-ray analysis of III·0.5H2O·0.5CH3CN (CIF file CCDC no. 1968937) further authenticates the structural motif of the synthesized compounds. The C(11) of ferrocene carboxaldehyde is linked with N(1) of the hydrazide moiety with a bond length of 1.283(5) Å, confirming the binding of the two structural units present in the final product. The compounds were preliminarily screened for their antimicrobial activity and showed good results. The free radical scavenging activity of compounds III and IV was found to be more than 90% when compared with ascorbic acid. The total antioxidant capacity and total reducing power assays for VI show significant activity, and the data for the other compounds are also encouraging. Quantum chemical calculations at the DFT level predict that compound II is the softest while VII is the hardest within the series; consequently, II can be used as a synthon for further chemical reactions.
Detection and evaluation of partial discharges (PD) is important for quality assurance and diagnosis of electrical insulation systems. With the increasing use of DC voltages in electrical transmission and distribution systems, the field of PD under DC voltage stress needs to be further investigated. CIGRE Working Group D1.63 was approved for start in May 2015 to review available knowledge and experience, in particular concerning the field distribution in insulation systems used in DC voltage systems and the physical processes of the PD phenomena under DC voltage stress. In order to arrive at meaningful procedures for DC PD measurements of HV equipment, a thorough understanding had to be established of i) the differences in PD behaviour between AC and DC with respect to the physical process, and ii) the influencing factors of operating conditions (e.g., polarization, temperature) of different insulation systems under DC stress and their respective effects on PD phenomena.
The digitization of a supply chain involves satisfying several functional and non-functional context-specific requirements. The work presented herein builds on efforts to elicit trust and profit requirements from actors in the Swedish livestock supply chain, specifically the beef supply chain. Interviewees identified several benefits related to data sharing and traceability but also emphasized that these benefits could only be realized if concerns around data security and data privacy were adequately addressed. We developed a data sharing platform as a response to these requirements. Requirements around verifiability, traceability, secure sharing of potentially large data objects, fine-grained access control, and the ability to link data objects together were realized using distributed ledger technology and a distributed file system. This paper presents this data sharing platform together with an evaluation of its usefulness in the context of beef supply chain traceability.
In recent years, the advent of the Internet of Things (IoT) in different industries has made it possible to connect objects and exchange information between them. This capability can be used to meet different requirements in each industry. Intelligent transportation uses the Internet of Vehicles (IoV) as a solution for communication among vehicles, improving traffic management applications and services to guarantee safety on roads. We categorize services, applications, and architectures and propose a taxonomy for the IoV. We then study open issues and challenges for future work. We highlight applications and services with respect to drivers' requirements and non-functional requirements, considering qualitative characteristics. This paper summarizes the current state of the IoV in terms of architectures, services, and applications, and can serve as a starting point for providing solutions to traffic management challenges in cities. The present study is beneficial for smart city development and management. According to our results, the selected papers evaluate performance with 34% frequency, and safety, data accuracy, and security with 13% frequency. These measurements are essential due to IoV characteristics such as real-time operation, accident avoidance in applications, and complicated user data management.
Recent advances in artificial intelligence and machine learning (ML) have led to effective methods and tools for analyzing human behavior. Human Activity Recognition (HAR) is one of the fields that has seen explosive research interest in the ML community due to its wide range of applications. HAR is one of the most helpful technologies for supporting the daily life of the elderly and for helping people suffering from cognitive disorders, Parkinson's disease, dementia, etc. It is also very useful in areas such as transportation, robotics and sports. Deep learning (DL) is a branch of ML based on complex Artificial Neural Networks (ANNs) that has demonstrated a high level of accuracy and performance in HAR. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are two types of DL models widely used in recent years to address the HAR problem. The purpose of this paper is to investigate the effectiveness of their integration in recognizing daily activities, e.g., walking. We analyze four hybrid models that integrate CNNs with four powerful RNNs, i.e., LSTMs, BiLSTMs, GRUs and BiGRUs. The outcomes of our experiments on the PAMAP2 dataset indicate that our proposed hybrid models achieve an outstanding level of performance with respect to several indicative measures, e.g., F-score, accuracy, sensitivity, and specificity.
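As an illustration of one of the hybrid architectures, the sketch below builds a CNN followed by an LSTM for windowed inertial data; the layer sizes, window length and channel count are assumptions rather than the paper's exact configuration (the BiLSTM, GRU and BiGRU variants would swap the recurrent layer).

```python
# Minimal sketch (assumed dimensions and layer sizes) of a CNN-LSTM hybrid for HAR
# on windowed inertial data such as PAMAP2 segments.
import tensorflow as tf

window_length = 128   # time steps per segment (assumed)
n_channels = 40       # sensor channels (assumed, PAMAP2-like)
n_activities = 12     # activity classes (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window_length, n_channels)),
    # Convolutional feature extraction over the time axis.
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    # Recurrent modelling of the extracted feature sequence.
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_activities, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(x_train, y_train, validation_data=(x_val, y_val))
```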
Reducing the heterogeneity of technical lignin is essential to obtain predictable and high-performance polymeric materials that are suitable for high-value applications. Organic solvents with different polarities and solubilities can be used to fractionate lignin and reduce the complexity and diversity of its chemical structure. Among the various solvents and solvent mixtures, acetone-water mixtures offer an energy-efficient, cost-effective, and environmentally friendly means of lignin fractionation. In the present study, temperature-induced acetone-water fractionation was investigated to refine the properties of a technical softwood Kraft lignin, i.e., LignoBoost™ lignin. Relatively mild operating conditions were tested, namely, temperatures of 70-110°C and autogenous pressure. A factorial experimental design was developed using the Design-Expert® software, and three factors (temperature, time, and acetone concentration) were investigated. It was found that temperature-induced fractionation could increase lignin homogeneity and maintain high lignin solubilization with a short processing time (<1 h). It was also possible to tune the properties of the soluble lignin fraction (yield and weight-average molecular weight) based on the factorial models developed. The techno-economic evaluation confirmed the commercial viability of this fractionation process.
In this deliverable from the SeCoHeat project, the profits that can be made with 1 MWh of electricity production capacity on existing ancillary service markets are evaluated for 2020 and 2021. Profits are evaluated for four different marginal production costs corresponding to the following fuels for a CHP power plant: waste (assumed fuel price: 0 kr/MWh), recycled wood (10 kr/MWh), wood chips (20 kr/MWh) and wood pellets (30 kr/MWh). The results show that, except for wood chips and wood pellets in 2020, the most profitable ancillary service markets are FFR (fast-frequency response) and aFRR down (automatic frequency restoration reserves for down-regulation). The reasons are that (1) producers do not have to withhold capacity from the day-ahead market when they participate in these two markets and (2) producers get compensated for the capacity reserved for the ancillary service markets. For wood chips, the FFR market was the most profitable in 2020, followed by the mFRR down market (manual frequency restoration reserves for down-regulation). The reason for the mFRR down market being more profitable than the aFRR down market for this fuel is that the profits from mFRR down depend on the avoided fuel costs, which are higher for wood chips than for waste and recycled wood. In 2021, all prices started increasing significantly, which decreased the relative profitability of the mFRR down market compared to other markets. For wood pellets, the mFRR down market was also the second most profitable market in 2020, for the same reasons. The most profitable one in 2020 was the mFRR up market (manual frequency restoration reserves for up-regulation). The reason is that the higher fuel price of these two fuels entails low participation in the day-ahead market; therefore, withholding capacity from the day-ahead market to be able to participate in the mFRR up market brings additional profits. In 2021, however, day-ahead prices started increasing significantly (a trend that continued into 2022) and the mFRR up market became the least profitable market for these two fuels. The profit evaluation performed in this deliverable is purely economic. It does not include the sector coupling to the heat sector (which entails a limitation of the available electricity production capacity but also a possibility to store heat if storage is available), nor does it include other technical limitations such as ramp rates. These aspects will be considered in follow-up work in this project. This report has been compiled within the scope of the project SeCoHeat - Sector coupling of district heating with the electricity system: profitability and operation. The project is financed by the Research and Development Foundation of Göteborg Energi.
Recently, there has been an increase in apartments with a large number of inhabitants, i.e., high residential density. This is partly due to a housing shortage in general but also to increased migration, particularly in the suburbs of major cities. This paper specifies issues that might be caused by high residential density by investigating the technical parameters influenced in Swedish apartments that are likely to have high residential density. Interviews with 11 employees at housing companies were conducted to identify issues that might be caused by high residential density. Furthermore, simulations were conducted based on extreme conditions described in the interviews to determine the impact on energy use, indoor environmental quality, and moisture loads. In addition, the impact of measures to mitigate the identified issues was determined. Measures such as demand-controlled ventilation, an increased constant ventilation rate, and moisture buffering are shown to reduce the risk of thermal discomfort, mold growth, and diminished indoor air quality, while still achieving a lower energy use than in a normally occupied apartment. The results of this study can be used by authorities to formulate incentives and/or recommendations for housing owners to implement measures to ensure good indoor environmental quality for all, irrespective of residential density conditions.
During the last few years, there has been an increased number of overcrowded apartments, due to increased migration but also to a housing shortage in general, particularly in the suburbs of major cities. The question is how the indoor environment in these apartments is affected by the high number of persons and how the problems related to high residential density can be overcome. This paper aims to specify the problem by investigating and analysing the technical parameters influenced by residential density in Swedish apartments built between 1965 and 1974. To map the situation, 11 interviews with employees at housing companies were conducted. Based on extreme conditions described in the interviews, simulations of the indoor climate and moisture risks at some vulnerable parts of the constructions were made. The simulations focused on moisture loads and CO2 concentrations as functions of residential density and ventilation rate. Finally, measures to combat problems associated with overcrowding are suggested. The aim is that the results should be used by authorities to formulate incentives and/or recommendations for housing companies to take action to ensure a good indoor environment for all, irrespective of residential density conditions.
This research investigates the potential of a game-theoretic Active Yaw Control (AYC) strategy to enhance power generation in wind farms. The proposed AYC strategy in this study replaces traditional look-up tables with a trained Artificial Neural Network (ANN) that determines the optimal yaw misalignment for turbines under time-varying atmospheric conditions. The study examines a hypothetical 3×2 rectangular arrangement of NREL 5-MW wind turbines. The FAST.Farm simulation tool, utilizing the dynamic wake meandering (DWM) model, is employed to assess both the power performance and the structural loads on the wind turbines. When tested with two different inflow directions and 10% ambient turbulence, the AYC strategy demonstrated a maximum increase in total power output of 2.6%, although it affected individual turbines differently. It also led to an increase in some structural loads, such as tower-top torque, while some components experienced a slight reduction in load. The results underscore the effectiveness of the ANN-guided game-theoretic algorithm in improving wind farm power generation by mitigating the negative impact of wake interference, offering a scalable and efficient method for optimizing large-scale wind farms. However, it is essential to evaluate the overall impact of AYC on wind farm efficiency in terms of both Annual Energy Production (AEP) and structural loading under various atmospheric conditions.
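To illustrate the role of the ANN as a replacement for look-up tables, the sketch below fits a small surrogate network that maps inflow conditions to per-turbine yaw offsets; the training pairs are placeholders and not results from the study.

```python
# Illustrative sketch only: a small surrogate network mapping inflow conditions to
# per-turbine yaw offsets, standing in for the trained ANN described above. In the
# study's setting the training pairs would come from game-theoretic yaw optimizations;
# here they are placeholder values.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Inputs: [wind direction (deg), mean wind speed (m/s), turbulence intensity (-)]
X = np.array([
    [270.0, 8.0, 0.10],
    [280.0, 8.0, 0.10],
    [270.0, 10.0, 0.10],
    [280.0, 10.0, 0.10],
])
# Targets: yaw offsets (deg) for the six turbines of the 3x2 layout (placeholders).
y = np.array([
    [20.0, 15.0, 0.0, 18.0, 12.0, 0.0],
    [15.0, 10.0, 0.0, 14.0, 9.0, 0.0],
    [18.0, 13.0, 0.0, 16.0, 11.0, 0.0],
    [13.0, 9.0, 0.0, 12.0, 8.0, 0.0],
])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X, y)

# Query the surrogate for a new inflow condition.
print(surrogate.predict([[275.0, 9.0, 0.10]]))
```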
Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. As a matter of fact, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges by providing a flexible approach to allow the certification - hence adoption - of DL-based solutions in CAIS, building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance with certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations to regain determinism, and probabilistic timing analyses to handle the remaining non-determinism.
The effect of biofilm formation on passive stainless steel in seawater environments is of primary importance, since it leads to potential ennoblement of surfaces and subsequently to localized corrosion such as pitting and crevice corrosion. This study aims at developing an eco-friendly alginate biopolymer, containing both non-toxic calcium and a limited amount of biocidal zinc ions, which inhibits this effect. For this purpose, calcium alginate containing less than 1 % zinc ions, localized in the vicinity of the steel surface in natural and renewed seawater, is demonstrated to significantly reduce the ennoblement of steel. After 1 month of immersion, a mass loss of only 4 % of the active material is observed, thereby allowing long-term protection of steel in a real environment.
Lightweight, energy-efficient materials in building construction typically include polymeric and composite foams. However, these materials pose significant fire hazards due to their high combustibility and toxic gas emissions, including carbon monoxide and hydrogen cyanide. This study examines the latter aspects by comparing hybrid systems based on nanofiber-reinforced silica-based Pickering foams with a synthetic reference (polyurethane foams). The extent and dynamics of fire retardancy and toxic gas evolution were assessed, and the results revealed the benefits of combining the thermal insulation of silica with the structural strength of biobased nanofibers, the latter of which included anionic and phosphorylated cellulose as well as chitin nanofibers. We demonstrate that the nanofiber-reinforced silica-based Pickering foams are thermally insulating and provide both fire safety and energy efficiency. The results set the basis for the practical design of hybrid foams to advance environmental sustainability goals by reducing energy consumption in built environments.
Nanocellulose (NC)-based hybrid coatings and films containing CeO2 and SiO2 nanoparticles (NPs) to impart UV screening and hardness properties, respectively, were prepared by solvent casting. The NC film-forming component (75 wt % of the overall solids) was composed entirely of cellulose nanocrystals (CNCs) or of CNCs combined with cellulose nanofibrils (CNFs). Zeta potential measurements indicated that the four NP types (CNC, CNF, CeO2, and SiO2) were stably dispersed in water and negatively charged at pH values between 6 and 9. The combination of NPs within this pH range ensured uniform formulations and homogeneous coatings and films, which blocked UV light, the extent of which depended on film thickness and CeO2 NP content, while maintaining good transparency in the visible spectrum (∼80%). The addition of a low amount of CNFs (1%) reduced the film hardness, but this effect was compensated by the addition of SiO2 NPs. Chiral nematic self-assembly was observed in the mixed NC film; however, this ordering was disrupted by the addition of the oxide NPs. The roughness of the hybrid coatings was reduced by the inclusion of oxide NPs into the NC matrix perhaps because the spherical oxide NPs were able to pack into the spaces between cellulose fibrils. We envision these hybrid coatings and films in barrier applications, photovoltaics, cosmetic formulations, such as sunscreens, and for the care and maintenance of wood and glass surfaces, or other surfaces that require a smooth, hard, and transparent finish and protection from UV damage.
From a circular economy perspective, one-pot strategies for the isolation of cellulose nanomaterials at a high yield and with multifunctional properties are attractive. Here, the effects of lignin content (bleached vs unbleached softwood kraft pulp) and sulfuric acid concentration on the properties of crystalline lignocellulose isolates and their films are explored. Hydrolysis at 58 wt % sulfuric acid resulted in both cellulose nanocrystals (CNCs) and microcrystalline cellulose at a relatively high yield (>55%), whereas hydrolysis at 64 wt % gave CNCs at a lower yield (<20%). CNCs from 58 wt % hydrolysis were more polydisperse and had a higher average aspect ratio (1.5-2×), a lower surface charge (2×), and a higher shear viscosity (100-1000×). Hydrolysis of unbleached pulp additionally yielded spherical nanoparticles (NPs) that were <50 nm in diameter and identified as lignin by nanoscale Fourier transform infrared spectroscopy and IR imaging. Chiral nematic self-organization was observed in films from CNCs isolated at 64 wt % but not from the more heterogeneous CNC qualities produced at 58 wt %. All films degraded to some extent under simulated sunlight trials, but these effects were less pronounced in lignin-NP-containing films, suggesting a protective feature, although the hemicellulose content and CNC crystallinity may be implicated as well. Finally, heterogeneous CNC compositions obtained at a high yield and with improved resource efficiency are suggested for specific nanocellulose uses, for instance, as thickeners or reinforcing fillers, representing a step toward the development of application-tailored CNC grades.
The biotechnological applications of cellulose nanocrystals (CNCs) continue to grow due to their sustainable nature, impressive mechanical, rheological, and emulsifying properties, upscaled production capacity, and compatibility with other materials, such as protein and polysaccharides. In this study, hydrogels from CNCs and pectin, a plant cell wall polysaccharide broadly used in food and pharma, were produced by calcium ion-mediated internal ionotropic gelation (IG). In the absence of pectin, a minimum of 4 wt% CNC was needed to produce self-supporting gels by internal IG, whereas the addition of pectin at 0.5 wt% enabled hydrogel formation at CNC contents as low as 0.5 wt%. Experimental data indicate that CNCs and pectin interact to give robust and self-supporting hydrogels at solid contents below 2.5 %. Potential applications of these gels could be as carriers for controlled release, scaffolds for cell growth, or wherever else distinct and porous network morphologies are required.
Wind-induced dynamic excitation is becoming a governing design action determining the size and shape of modern Tall Timber Buildings (TTBs). The wind actions generate dynamic loading, causing discomfort or annoyance for occupants due to the perceived horizontal sway – i.e. vibration serviceability failure. Although some TTBs have been instrumented and measured to estimate their key dynamic properties (natural frequencies and damping), no systematic evaluation of dynamic performance pertinent to wind loading has been performed for the new and evolving construction technology used in TTBs. The DynaTTB project, funded by the ForestValue research program, mixes on-site measurements on existing buildings excited by heavy shakers, for identification of the structural system, with laboratory identification of the mechanical features of building elements, coupled with numerical modelling of timber structures. The goal is to identify and quantify the causes of vibration energy dissipation in modern TTBs and provide key elements to FE modelers.
The first building, from a list of 8, was modelled and tested at full scale in December 2019. Some results are presented in this paper. Four other buildings will be modelled and tested in spring 2020.
Wind-induced dynamic excitation is a governing design action determining size and shape of modern Tall Timber Buildings (TTBs). The wind actions generate dynamic loading, causing discomfort or annoyance for occupants due to the perceived horizontal sway, i.e. vibration serviceability problem. Although some TTBs have been instrumented and measured to estimate their key dynamic properties (eigenfrequencies, mode shapes and damping), no systematic evaluation of dynamic performance pertinent to wind loading had been performed for the new and evolving construction technologies used in TTBs. The DynaTTB project, funded by the ForestValue research program, mixed on site measurements on existing buildings excited by mass inertia shakers (forced vibration) and/or the wind loads (ambient vibration), for identification of the structural system, with laboratory identification of building elements mechanical features, coupled with numerical modelling of timber structures. The goal is to identify and quantify the causes of vibration energy dissipation in modern TTBs and provide key elements to finite element models. This paper presents an overview of the results of the project and the proposed Guidelines for design of TTBs in relation to their dynamic properties.
Dispersing Multi-Walled Carbon Nanotubes (MWCNTs) into concrete at low concentrations (<1 wt% in cement) may improve concrete performance and properties and provide enhanced functionalities. When MWCNT-enhanced concrete is fragmented during remodelling or demolition, the stiff, fibrous and carcinogenic MWCNTs will, however, also be part of the respirable particulate matter released in the process. Consequently, systematically aerosolizing crushed MWCNT-enhanced concretes in a controlled environment and measuring the properties of the resulting aerosol can give valuable insights into the characteristics of the emissions, such as concentrations, size range and morphology. These properties affect the extent to which the emissions can be inhaled as well as where they are expected to deposit in the lung, which is critical for assessing whether these materials might constitute a future health risk for construction and demolition workers. In this work, the impact of MWCNTs on aerosol characteristics was assessed for samples of three concrete types with various amounts of MWCNT, using a novel methodology based on the continuous drop method. MWCNT-enhanced concretes were crushed and aerosolized, and the emitted particles were characterized with online and offline techniques. For light-weight porous concrete, the addition of MWCNT significantly reduced the respirable mass fraction (RESP) and particle number concentrations (PNC) across all size ranges (7 nm – 20 μm), indicating that MWCNTs dampened the fragmentation process, possibly by reinforcing the microstructure of the brittle concrete. For normal concrete, the opposite was seen: MWCNTs resulted in drastic increases in RESP and PNC, suggesting that the MWCNTs may act as defects in the concrete matrix, thus enhancing the fragmentation process. For the high-strength concrete, the fragmentation decreased at the lowest MWCNT concentration but increased again for the highest MWCNT concentration. All tested concrete types emitted <100 nm particles, regardless of CNT content. SEM imaging displayed CNTs protruding from concrete fragments, but no free fibres were detected.
Communication networks are vital for society, and network availability is therefore crucial. There is great potential in using network telemetry data and machine learning algorithms to proactively detect anomalies and remedy problems before they affect the customers. In practice, however, many steps remain before this can be achieved. In this paper we present ongoing development work on efficient data collection pipelines, anomaly detection algorithms, and analysis of traffic patterns and predictability.
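As an editorial illustration only (not the project's pipeline), the sketch below shows a simple rolling z-score anomaly detector of the kind often used as a baseline on telemetry time series; the window size, threshold and data are hypothetical.

    # Minimal sketch: flag anomalies in a telemetry time series with a rolling z-score.
    import numpy as np

    def rolling_zscore_anomalies(x, window=60, threshold=4.0):
        """Return indices where x deviates by more than `threshold` standard
        deviations from the mean of the preceding `window` samples."""
        x = np.asarray(x, dtype=float)
        anomalies = []
        for i in range(window, len(x)):
            ref = x[i - window:i]
            mu, sigma = ref.mean(), ref.std()
            if sigma > 0 and abs(x[i] - mu) / sigma > threshold:
                anomalies.append(i)
        return anomalies

    # Hypothetical example: link utilisation samples with one injected spike.
    util = np.random.default_rng(1).normal(0.4, 0.02, size=1000)
    util[700] = 0.95
    print(rolling_zscore_anomalies(util))   # -> [700]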
Sweden has an ambitious plan to fully decarbonise district heating by 2030 and to contribute negative emissions of greenhouse gases by 2050. The vagaries of the energy market associated with climate, political, and social changes call for cross-sectoral integration that can fulfill these national targets. Fifth-generation district heating and cooling (5GDHC) is a relatively new concept for district energy systems that features simultaneous supply of heating and cooling using power-to-heat technologies. This paper presents best practices for developing 5GDHC systems in Sweden to reach a consensus view on these systems among all stakeholders. A mixed method combining best-practice and roadmapping workshops has been used to disseminate knowledge and experience from middle agents representing industry professionals and practitioners. Four successful implementations of 5GDHC systems are demonstrated and the most important lessons learned are shared. Best practices are outlined for system planning, system modeling and simulation, prevailing business models for energy communities, and system monitoring. A roadmap from the middle agents' point of view is composed and can be utilised to establish industry standards and common regulatory frameworks. © 2023 The Author(s)
Building open-domain conversational systems (or chatbots) that produce convincing responses is a recognized challenge. Recent state-of-the-art (SoTA) transformer-based models for the generation of natural language dialogue have demonstrated impressive performance in simulating human-like, single-turn conversations in English.This work investigates, by an empirical study, the potential for transfer learning of such models to Swedish language. DialoGPT, an English language pre-trained model, is adapted by training on three different Swedish language conversational datasets obtained from publicly available sources: Reddit, Familjeliv and the GDC. Perplexity score (an automated intrinsic metric) and surveys by human evaluation were used to assess the performances of the fine-tuned models. We also compare the DialoGPT experiments with an attention-mechanism-based seq2seq baseline model, trained on the GDC dataset. The results indicate that the capacity for transfer learning can be exploited with considerable success. Human evaluators asked to score the simulated dialogues judged over 57% of the chatbot responses to be human-like for the model trained on the largest (Swedish) dataset. The work agrees with the hypothesis that deep monolingual models learn some abstractions which generalize across languages. We contribute the codes, datasets and model checkpoints and host the demos on the HuggingFace platform.
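As an editorial illustration only (not the paper's evaluation code), the sketch below shows how perplexity can be computed for a causal language model with the HuggingFace transformers API; the public checkpoint "microsoft/DialoGPT-small" and the evaluation text are placeholders, not the fine-tuned Swedish models from the paper.

    # Minimal sketch: perplexity of a causal LM on a held-out text (placeholders).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "microsoft/DialoGPT-small"          # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    model.eval()

    text = "Hej! Hur mår du idag?"             # placeholder held-out dialogue turn
    enc = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        # With labels equal to the inputs, the model returns the mean
        # cross-entropy loss; perplexity is its exponential.
        loss = model(**enc, labels=enc["input_ids"]).loss

    print(f"perplexity ≈ {torch.exp(loss).item():.1f}")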
Masonry bridges are among the most sustainable structures ever to have been built. Their long service life, their resilience in carrying larger loads than originally intended, and their significantly lower life-cycle cost compared to other bridge types suggest that we should consider the design and construction of new masonry bridges, even if their initial cost is greater than that of steel or concrete bridges. The aim of this work is to understand the structural behaviour and study the collapse of a single-span masonry hydrostatic shell, that is, a shell designed specifically to carry a hydrostatic load. Due to the complexity of the masonry shell interacting with the fill, a combination of computational methods and load tests on physical models is necessary for their structural assessment. We perform a load test to failure on a physical model spanning 770 mm made from 3D-printed blocks and analyse the model using the Discrete Element Method (DEM) in Dassault Systèmes Abaqus. The physical model behaved well and indicates that the bridge could be used at full scale. The preliminary results from the computational DEM model are qualitatively good, but greatly overestimate the collapse load of the bridge.
We present the design and evaluation of a 3.5-year embedded sensing deployment at the Mithræum of Circus Maximus, a UNESCO-protected underground archaeological site in Rome (Italy). Unique to our work is the use of energy harvesting from thermal and kinetic energy sources. The extreme scarcity and erratic availability of energy, however, pose great challenges in system software, embedded hardware, and energy management. We tackle them by testing, for the first time in a multi-year deployment, existing solutions in intermittent computing, low-power hardware, and energy harvesting. Through three major design iterations, we find that these solutions operate as isolated silos and lack integration into a complete system, performing suboptimally. In contrast, we demonstrate the efficient performance of a hardware/software co-design featuring accurate energy management and capturing the coupling between energy sources and sensed quantities. Installing a battery-operated system alongside also allows us to perform a comparative study of energy harvesting in a demanding setting. Although energy harvesting reduces energy availability and thus lowers the data yield to about 22% of that provided by batteries, our system provides a comparable level of insight into the environmental conditions and structural health of the site. Further, unlike existing energy-harvesting deployments that are limited to a few months of operation in the best cases, our system has run with zero maintenance for almost 2 years, including 3 months of site inaccessibility due to a COVID-19 lockdown.
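As an editorial illustration only (not the deployed co-design), the sketch below shows the kind of energy-aware sampling policy such a system might use: the sensing interval is adapted to the state of charge of an energy buffer fed by the harvester. All parameters are hypothetical, and the real co-design additionally models the coupling between energy sources and sensed quantities.

    # Minimal sketch: adapt the sensing interval to the energy buffer's state of charge.
    def next_interval(buffer_joules, capacity_joules,
                      min_interval_s=60, max_interval_s=3600):
        """Sample often when the buffer is full, back off as it empties."""
        soc = max(0.0, min(1.0, buffer_joules / capacity_joules))
        return max_interval_s - soc * (max_interval_s - min_interval_s)

    # Example: a hypothetical 50 J buffer at various states of charge.
    for stored in (50.0, 25.0, 5.0):
        print(f"{stored:>5.1f} J stored -> sample every {next_interval(stored, 50.0):.0f} s")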
To ensure traffic safety and proper operation of vehicular networks, safety messages or beacons are periodically broadcast in Vehicular Ad-hoc Networks (VANETs) to neighboring nodes and roadside units (RSUs). Thus, the authenticity and integrity of received messages, along with trust in the source nodes, are crucial and highly required in applications where a failure can result in life-threatening situations. Several digital-signature-based approaches to achieve the authenticity of these messages have been described in the literature. In these schemes, scenarios with a high vehicle density are handled by the RSU, where aggregated signature verification is performed. However, most of these schemes are centralized and PKI-based, whereas our goal is to develop a decentralized, dynamic system. Along with authenticity and integrity, trust management plays an important role in VANETs by enabling secure and verified communication. A number of trust management models have been proposed, but this remains an ongoing matter of interest; similarly, authentication, a vital security service during communication, is largely absent from the literature on trust management systems. This paper proposes a secure and publicly verifiable communication scheme for VANETs which achieves source authentication, message authentication, non-repudiation, integrity and public verifiability. All of these are achieved through digital signatures, the Hash-based Message Authentication Code (HMAC) technique and a logging mechanism aided by blockchain technology.
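As an editorial illustration only (not the paper's scheme), the sketch below shows how a beacon could be signed with Ed25519 for source authentication and non-repudiation, and additionally protected with an HMAC for message authentication and integrity towards a node sharing a symmetric key; the keys and message fields are placeholders. It uses the Python cryptography library and the standard hmac module.

    # Minimal sketch (placeholders): sign and HMAC a VANET beacon, then verify both.
    import hmac, hashlib, json
    from cryptography.hazmat.primitives.asymmetric import ed25519

    beacon = json.dumps({"id": "veh-42", "pos": [59.33, 18.06],
                         "speed": 13.9, "ts": 1700000000}).encode()

    # Source authentication / non-repudiation: asymmetric signature.
    priv = ed25519.Ed25519PrivateKey.generate()
    signature = priv.sign(beacon)

    # Message authentication / integrity: HMAC with a shared symmetric key.
    shared_key = b"placeholder-shared-key"
    tag = hmac.new(shared_key, beacon, hashlib.sha256).digest()

    # Receiver side: verify both (verify() raises InvalidSignature on tampering).
    priv.public_key().verify(signature, beacon)
    assert hmac.compare_digest(tag, hmac.new(shared_key, beacon, hashlib.sha256).digest())
    print("beacon verified")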
Manufacturers of automated systems and their components have been devoting an enormous amount of time and effort to R&D activities, which has led to the availability of prototypes demonstrating new capabilities as well as the introduction of such systems to the market within different domains. Manufacturers need to make sure that these systems function in the intended way and according to specifications. This is not a trivial task, as system complexity rises dramatically as the systems become more integrated and interconnected through the addition of automated functionality and features. This effort translates into an overhead on the V&V (verification and validation) process, making it time-consuming and costly. In this paper, we present VALU3S, an ECSEL JU (joint undertaking) project that aims to evaluate state-of-the-art V&V methods and tools and to design a multi-domain framework that creates a clear structure around the components and elements needed to conduct the V&V process. The main expected benefit of the framework is to reduce the time and cost needed to verify and validate automated systems with respect to safety, cyber-security, and privacy requirements. This is done through the identification and classification of evaluation methods, tools, environments and concepts for V&V of automated systems with respect to the mentioned requirements. VALU3S will provide guidelines to the V&V community, including engineers and researchers, on how the V&V of automated systems can be improved considering the cost, time and effort of conducting V&V processes. To this end, VALU3S brings together a consortium with partners from 10 different countries, comprising 25 industrial partners, 6 leading research institutes, and 10 universities, to reach the project goal.