Software systems often target a variety of different market segments. Targeting varying customer requirements requires a product-focused development process. Software Product Line (SPL) engineering is one such approach, built on a reuse rationale, to support the quick delivery of quality product variants at scale. SPLs reuse common features across derived products while still providing varying configuration options. The common features are, in most cases, realized by reusable assets. In practice, the assets are reused in a clone-and-own manner to reduce the upfront cost of systematic reuse. Moreover, the assets are implemented in increments, and the requirements also have to be prioritized. In this context, manual reuse analysis and prioritization become impractical as the number of derived products grows; the manual reuse analysis process is also time-consuming and heavily dependent on the experience of engineers. In this licentiate thesis, we study requirements-level reuse recommendation and prioritization for SPL assets in industrial settings. We first identify challenges and opportunities in SPLs where reuse is done in a clone-and-own manner. We then focus on one of the identified challenges, requirements-based reuse of SPL assets, and provide automated support for identifying reuse opportunities for SPL assets based on requirements. Finally, we provide automated support for requirements prioritization in the presence of dependencies resulting from reuse.
Problem: The goal of a software product line is to aid the quick and quality delivery of software products sharing common features. Effectively achieving this goal requires reuse analysis of the product line features. Existing requirements reuse analysis approaches are not focused on recommending product line features that can be reused to realize new customer requirements. Hypothesis: Given that customer requirements are linked to the descriptions of the product line features that satisfy them, the customer requirements can be clustered based on patterns and similarities while preserving the historic reuse information. New customer requirements can then be evaluated against existing customer requirements, and reuse of product line features can be recommended. Contributions: We treat the problem of feature reuse analysis as a text classification problem at the requirements level. We use Natural Language Processing and clustering to recommend the reuse of features based on similarities and historic reuse information. The recommendations can be used to realize new customer requirements.
[Context and Motivation] Content-based recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a new requirement is proposed by a stakeholder, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements, and in turn identify previously developed code. [Question/problem] Several NLP approaches for similarity computation are available, and there is little empirical evidence on which technique is effective in recommender systems specifically oriented to requirements-based code reuse. [Principal ideas/results] This study compares different state-of-the-art NLP approaches and correlates the similarity among requirements with the similarity of their source code. The evaluation is conducted on real-world requirements from two industrial projects in the railway domain. Results show that requirements similarity computed with the traditional tf-idf approach has the highest correlation with the actual software similarity in the considered context. Furthermore, the results indicate a moderately positive correlation, with a Spearman's rank correlation coefficient above 0.5. [Contribution] Our work is among the first to explore the relationship between requirements similarity and software similarity. In addition, we identify a suitable approach for computing requirements similarity that reflects software similarity well in an industrial context. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and categorization.
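As an illustration of the kind of analysis described above, the following minimal sketch computes tf-idf-based requirements similarity for a few requirement pairs and correlates it with pre-computed software-similarity scores using Spearman's rank correlation; all texts and scores are invented for the example and do not come from the study.

```python
# Sketch: correlate tf-idf requirements similarity with software similarity.
# Assumes each requirements pair already has an associated software-similarity
# score (e.g., from a code-similarity tool); all data here are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.stats import spearmanr

requirement_pairs = [
    ("The system shall log all door release commands.",
     "The system shall record every door release command."),
    ("Brake pressure shall be monitored every 100 ms.",
     "The unit shall sample brake pressure at 10 Hz."),
    ("The HMI shall display the current speed in km/h.",
     "Passenger counting data shall be stored for 30 days."),
]
software_similarity = [0.81, 0.44, 0.12]  # hypothetical scores for the same pairs

# Fit on all requirement texts so both sides of each pair share one vocabulary.
texts = [text for pair in requirement_pairs for text in pair]
matrix = TfidfVectorizer(lowercase=True, stop_words="english").fit_transform(texts)

req_similarity = [
    cosine_similarity(matrix[2 * i], matrix[2 * i + 1])[0, 0]
    for i in range(len(requirement_pairs))
]

rho, p_value = spearmanr(req_similarity, software_similarity)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```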
Recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a stakeholder proposes a new requirement, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements, and in turn, identify previously developed code. Several NLP approaches for similarity computation between requirements are available. However, there is little empirical evidence on their effectiveness for code retrieval. This study compares different NLP approaches, from lexical ones to semantic, deep-learning techniques, and correlates the similarity among requirements with the similarity of their associated software. The evaluation is conducted on real-world requirements from two industrial projects of a railway company. Specifically, the most similar pairs of requirements across the two industrial projects are automatically identified using six language models. Then, the trace links between requirements and software are used to identify the software pairs associated with each requirements pair. The software similarity between pairs is then automatically computed with JPLag. Finally, the correlation between requirements similarity and software similarity is evaluated to see which language model shows the highest correlation and is thus more appropriate for code retrieval. In addition, we perform a focus group with members of the company to collect qualitative data. Results show a moderately positive correlation between requirements similarity and software similarity, with the pre-trained deep learning-based BERT language model with preprocessing outperforming the other models. Practitioners confirm that requirements similarity is generally regarded as a proxy for software similarity. However, they also highlight that additional aspects come into play when deciding on software reuse, e.g., domain/project knowledge, information coming from test cases, and trace links. Our work is among the first to explore the relationship between requirements and software similarity from a quantitative and qualitative standpoint. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and change impact analysis.
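For the embedding-based end of the comparison, a sketch along these lines retrieves the most similar existing requirement with a pre-trained sentence encoder; sentence-transformers and the chosen model are used here only as a convenient stand-in, not as the exact BERT pipeline evaluated in the study.

```python
# Sketch: requirements similarity with a pre-trained sentence-embedding model,
# used here as a stand-in for the BERT-based setup described above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any pre-trained encoder works

requirements = [
    "The system shall log all door release commands.",
    "The system shall record every door release command.",
    "Brake pressure shall be monitored every 100 ms.",
]

embeddings = model.encode(requirements, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarities

# Most similar existing requirement for the first one (excluding itself).
best = max(range(1, len(requirements)), key=lambda j: scores[0][j])
print(f"Closest match: {requirements[best]!r} (cosine = {scores[0][best]:.2f})")
```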
Processing and reviewing nightly test execution failure logs for large industrial systems is a tedious activity. Furthermore, multiple failures might share one root/common cause during test execution sessions, and the review might therefore require redundant effort. This paper presents the LogGrouper approach for the automated grouping of failure logs to aid root/common cause analysis and to enable the processing of each log group as a batch. LogGrouper uses state-of-the-art natural language processing and clustering approaches to achieve meaningful log grouping. The approach is evaluated in an industrial setting in both a qualitative and quantitative manner. Results show that LogGrouper produces good-quality groupings in terms of our two clustering-quality metrics (Silhouette Coefficient and Calinski-Harabasz Index). The qualitative evaluation shows that experts perceive the groups as useful, and the groups are seen as an initial pointer for root cause analysis and failure assignment.
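A minimal sketch of the grouping-and-scoring idea, assuming failure logs are plain strings and using tf-idf with k-means as stand-ins for LogGrouper's actual NLP and clustering choices; the two quality metrics named above are computed with scikit-learn, and the log lines are invented.

```python
# Sketch: grouping failure logs and scoring the grouping with the two
# clustering-quality metrics mentioned above; vectorizer, algorithm, and
# parameters are illustrative, not LogGrouper's actual configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

failure_logs = [
    "ERROR timeout waiting for signal BRAKE_ACK",
    "ERROR timeout waiting for signal DOOR_ACK",
    "FATAL null pointer in TractionController::update",
    "FATAL null pointer in TractionController::init",
]

features = TfidfVectorizer(token_pattern=r"[A-Za-z_:]+").fit_transform(failure_logs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

dense = features.toarray()  # both metrics expect an array-like feature matrix
print("Silhouette:", silhouette_score(dense, labels))
print("Calinski-Harabasz:", calinski_harabasz_score(dense, labels))
```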
Requirements prioritization plays an important role in driving project success during software development. The literature reveals that existing requirements prioritization approaches ignore vital factors such as the interdependencies between requirements. Existing requirements prioritization approaches are also generally time-consuming and involve substantial manual effort. Moreover, these approaches show substantial limitations in the number of requirements they can handle. There is some evidence suggesting that models could have a useful role in the analysis of requirements interdependencies and their visualization, contributing towards the improvement of the overall requirements prioritization process. However, to date, just a handful of studies have focused on model-based strategies for requirements prioritization, and they consider only conflict-free functional requirements. This paper uses a meta-model-based approach to help the requirements analyst model the requirements, stakeholders, and interdependencies between requirements. The model instance is then processed by our modified PageRank algorithm to prioritize the given requirements. An experiment was conducted, comparing our modified PageRank algorithm's efficiency and accuracy with five existing requirements prioritization methods. We also compared our results with a baseline prioritized list of 104 requirements prepared by 28 graduate students. Our results show that our modified PageRank algorithm was able to prioritize the requirements more effectively and efficiently than the other prioritization methods.
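A rough sketch of PageRank-based prioritization over a requirements dependency graph; plain networkx PageRank with a stakeholder-weight personalization vector stands in here for the modified algorithm and meta-model instance used in the paper, and all requirements and weights are illustrative.

```python
# Sketch: prioritizing requirements by running PageRank over their dependency
# graph. Plain networkx PageRank with stakeholder-derived weights stands in
# for the paper's modified PageRank over a meta-model instance.
import networkx as nx

# Edge A -> B means "requirement A depends on requirement B".
dependencies = [("R1", "R3"), ("R2", "R3"), ("R3", "R4"), ("R2", "R4")]
stakeholder_weight = {"R1": 1.0, "R2": 2.0, "R3": 1.0, "R4": 3.0}  # illustrative

graph = nx.DiGraph(dependencies)
scores = nx.pagerank(graph, alpha=0.85, personalization=stakeholder_weight)

# Requirements that many others depend on (and that stakeholders value) rank higher.
for req, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{req}: {score:.3f}")
```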
The use of requirements information in testing is a well-recognized practice in the software development life cycle. The literature reveals that existing test prioritization and selection approaches have neglected vital factors affecting test priorities, such as the interdependencies between requirement specifications. We believe that models may play a positive role in specifying these interdependencies and in prioritizing tests based on them. However, to date, few studies make use of requirements interdependencies for test case prioritization. This paper uses a meta-model to aid in modeling requirements, their related tests, and the interdependencies between them. The instance of this meta-model is then processed by our modified PageRank algorithm to prioritize the requirements. The requirement priorities are then propagated to the related test cases in the test model, and test cases are selected based on coverage of extra-functional properties. We demonstrate the applicability of our proposed approach on a small example case.
The software system controlling a train is typically deployed on various hardware architectures and must process various signals across those deployments. The increase in such customization scenarios, together with the need for the software to adhere to various safety standards in different application domains, has led to the adoption of product line engineering within the railway domain. This paper explores the current state of practice of software product line development within a team developing industrial embedded software for a train propulsion control system. Evidence is collected using a focus group session with several engineers and through inspection of archival data. We report several benefits and challenges experienced during product line adoption and deployment. Furthermore, we identify and discuss improvement opportunities, focusing mainly on product line evolution and test automation.
Categorizing existing test specifications can provide insights into the test suite's coverage of extra-functional properties. Manual approaches to test categorization can be time-consuming and prone to error. In this short paper, we propose a semi-automated approach for semantic keyword-based textual test categorization for extra-functional properties. The approach is a first step towards coverage-based test case selection based on extra-functional properties. We report a preliminary evaluation on industrial data for categorizing tests with respect to safety aspects. Results show that keyword-based approaches can be used to categorize tests for extra-functional properties and can be improved by considering the contextual information of keywords.
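A minimal sketch of keyword-based categorization for one extra-functional property (safety); the keyword set and test specifications are illustrative, and the contextual-information refinement mentioned above is not shown.

```python
# Sketch: keyword-based categorization of test specifications for an
# extra-functional property (safety). Keywords and tests are illustrative.
SAFETY_KEYWORDS = {"emergency", "brake", "failsafe", "interlock", "watchdog"}

def is_safety_related(test_text: str) -> bool:
    """Return True if the test specification mentions safety-related keywords."""
    tokens = {token.strip(".,;:()").lower() for token in test_text.split()}
    return bool(tokens & SAFETY_KEYWORDS)

tests = {
    "TC-101": "Verify emergency brake is applied when the watchdog expires.",
    "TC-102": "Check that the HMI colour theme matches the style guide.",
}
safety_tests = [tid for tid, text in tests.items() if is_safety_related(text)]
print("Safety-related tests:", safety_tests)  # -> ['TC-101']
```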
This tutorial explores requirements-based reuse recommendation for product line assets in the context of clone-and-own product lines.
Software product lines (SPLs) are based on a reuse rationale to aid the quick and quality delivery of complex products at scale. Deriving a new product from a product line requires reuse analysis to avoid redundancy and to support a high degree of asset reuse. In this paper, we propose and evaluate automated support for recommending SPL assets that can be reused to realize new customer requirements. Using the existing customer requirements as input, the approach applies natural language processing and clustering to generate reuse recommendations for unseen customer requirements in new projects. The approach is evaluated both quantitatively and qualitatively in the railway industry. Results show that our approach can recommend reuse with 74% accuracy and a 57.4% exact match. The evaluation further indicates that the recommendations are relevant to engineers and can support the product derivation and feasibility analysis phases of projects. The results encourage further study of automated reuse analysis at other levels of abstraction.
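A simplified sketch of the recommendation idea, assuming each existing customer requirement is already linked to the asset that realized it; tf-idf and k-means stand in for the NLP and clustering steps, and all requirements, assets, and parameters are invented.

```python
# Sketch: recommending product-line assets for a new requirement by clustering
# existing customer requirements and reusing the assets linked to the nearest
# cluster. Data, model choice, and cluster count are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

existing = {
    "Passenger doors shall reopen on obstacle detection.": "ASSET_DOOR_CTRL",
    "Door closing force shall not exceed 150 N.": "ASSET_DOOR_CTRL",
    "Traction shall be cut off above 80 km/h in depot mode.": "ASSET_TRACTION",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(existing.keys())
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(matrix)

new_requirement = "Doors shall reverse when an obstruction is detected."
cluster = kmeans.predict(vectorizer.transform([new_requirement]))[0]

# Recommend the assets reused by requirements in the nearest cluster.
recommended = {
    asset
    for (text, asset), label in zip(existing.items(), kmeans.labels_)
    if label == cluster
}
print("Recommended assets:", recommended)  # likely {'ASSET_DOOR_CTRL'}
```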
The digitization of a supply chain involves satisfying several functional and non-functional context-specific requirements. The work presented herein builds on efforts to elicit trust and profit requirements from actors in the Swedish livestock supply chain, specifically the beef supply chain. Interviewees identified several benefits related to data sharing and traceability but also emphasized that these benefits could only be realized if concerns around data security and data privacy were adequately addressed. We developed a data sharing platform in response to these requirements. Requirements around verifiability, traceability, secure sharing of potentially large data objects, fine-grained access control, and the ability to link data objects together were realized using distributed ledger technology and a distributed file system. This paper presents this data sharing platform together with an evaluation of its usefulness in the context of beef supply chain traceability.
In recent years, the advent of the Internet of Things (IoT) has made it possible to connect objects and exchange information across different industries, and this capability can be used to meet the requirements of each industry. Intelligent transportation uses the Internet of Vehicles (IoV) as a solution for communication among vehicles. It improves traffic management applications and services to guarantee safety on roads. We categorize services, applications, and architectures and propose a taxonomy for the IoV. We then study open issues and challenges for future work. We highlight applications and services in light of drivers' requirements and non-functional requirements, considering their qualitative characteristics. This paper summarizes the current state of the IoV in terms of architectures, services, and applications, and can serve as a starting point for providing solutions to traffic management challenges in cities. The present study is beneficial for smart city development and management. According to this paper's results, the selected papers evaluate services and applications with respect to performance with a frequency of 34%, and with respect to safety, data accuracy, and security with a frequency of 13%. These measurements are essential due to IoV characteristics such as real-time operation, accident avoidance in applications, and complicated user data management.
Communication networks are vital for society and network availability is therefore crucial. There is a huge potential in using network telemetry data and machine learning algorithms to proactively detect anomalies and remedy problems before they affect the customers. In practice, however, there are many steps on the way to get there. In this paper we present ongoing development work on efficient data collection pipelines, anomaly detection algorithms and analysis of traffic patterns and predictability.
To ensure traffic safety and the proper operation of vehicular networks, safety messages or beacons are periodically broadcast in Vehicular Ad-hoc Networks (VANETs) to neighboring nodes and road side units (RSUs). The authenticity and integrity of received messages, along with trust in the source nodes, are therefore crucial and highly required in applications where a failure can result in life-threatening situations. Several digital-signature-based approaches have been described in the literature to achieve the authenticity of these messages. In these schemes, scenarios with a high vehicle density are handled by the RSU, where aggregated signature verification is done. However, most of these schemes are centralized and PKI-based, whereas our goal is to develop a decentralized, dynamic system. Along with authenticity and integrity, trust management plays an important role in VANETs, as it enables secure and verified communication. A number of trust management models have been proposed, but the topic remains an ongoing matter of interest; similarly, authentication, a vital security service during communication, is mostly absent from the literature on trust management systems. This paper proposes a secure and publicly verifiable communication scheme for VANETs that achieves source authentication, message authentication, non-repudiation, integrity, and public verifiability. All of these are achieved through digital signatures, the Hash Message Authentication Code (HMAC) technique, and a logging mechanism aided by blockchain technology.
This report addresses digitalization, i.e., the introduction of new digital technology, in the management of bridges. The scope is a pre-study aimed at identifying the need for future research for the long-term development of bridge management. A basic premise was that digitalization should reduce the need for costly maintenance measures while maintaining a high level of safety for our bridges. The goals of the project were to gather information about the digital information models created during the investment phase, evaluate the handover of digital models to the management phase, and assess the potential benefit of digital data collection for condition assessment and maintenance planning. An important part of this was describing today's management systems and how they could be developed. The studies were carried out through a questionnaire survey with respondents from consulting firms active in bridge design, interviews with technical experts, and literature searches. The results show that bridge design today is mainly done through building information modelling (BIM). The focus is on the construction phase, where coordination and communication are considered the greatest benefits. The handover to the management organization, however, consists of as-built drawings in the form of simple drawing files. Although Trafikverket's BIM strategy states that an information model should live on throughout the bridge's entire service life, there are doubts as to whether a model from the design phase is suitable as a management model. Instead, other methods are highlighted for creating a model of the as-built structure, for example optical methods such as scanning and photogrammetry. The management systems should be developed with functions for storing and making available large amounts of digital information from sensors and machine-based inspections. The purpose is to reduce the uncertainties in the as-built structure and the degree of deterioration, and ultimately to create a better basis for decisions on maintenance measures. A future scenario is a digital twin that mirrors the real structure and is continuously updated with sensor data. Regarding measurement hardware, sensors and systems need to be developed with respect to energy consumption, energy harvesting, and maintenance, e.g., through combinations of replaceable short-lifetime components and other long-lifetime parts. Fibre-optic sensors show promising properties, but development is needed to make them more cost-effective relative to conventional sensors.
Information-centric networks (ICNs) intrinsically support multipath transfer and thus have been seen as an exciting paradigm for IoT and edge computing, not least in the context of 5G mobile networks. One key to ICN's success in these and other networks that have to support a diverse set of services over a heterogeneous network infrastructure is to schedule traffic over the available network paths efficiently. This paper presents and evaluates ZQTRTT, a multipath scheduling scheme for ICN that load balances bulk traffic over available network paths and schedules latency-sensitive, non-bulk traffic to reduce its transfer delay. A new metric called zero queueing time (ZQT) ratio estimates path load and is used to compute forwarding fractions for load balancing. In particular, the paper shows through a simulation campaign that ZQTRTT can accommodate the demands of both latency-sensitive and latency-insensitive traffic as well as evenly distribute traffic over available network paths.
Large-scale flood risk assessment is essential in supporting national and global policies, emergency operations and land-use management. The present study proposes a cost-efficient method for the large-scale mapping of direct economic flood damage in data-scarce environments. The proposed framework consists of three main stages: (i) deriving a water depth map through a geomorphic method based on a supervised linear binary classification; (ii) generating an exposure land-use map developed from multi-spectral Landsat 8 satellite images using a machine-learning classification algorithm; and (iii) performing a flood damage assessment using a GIS tool, based on the vulnerability (depth-damage) curves method. The proposed integrated method was applied over the entire country of Romania (including minor order basins) for a 100-year return time at 30-m resolution. The results showed how the description of flood risk may especially benefit from the ability of the proposed cost-efficient model to carry out large-scale analyses in data-scarce environments. This approach may help in performing and updating risk assessments and management, taking into account the temporal and spatial changes in hazard, exposure, and vulnerability.
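A small sketch of stage (iii), applying class-specific depth-damage (vulnerability) curves to a water-depth grid; the curve points, land-use classes, and exposed values are illustrative, not those used in the study.

```python
# Sketch: stage (iii) of the workflow above -- applying depth-damage
# (vulnerability) curves to a water-depth raster per land-use class.
# All curve points and unit values are illustrative.
import numpy as np

water_depth_m = np.array([[0.0, 0.4], [1.2, 2.5]])        # flood depth per cell
land_use = np.array([["crop", "urban"], ["urban", "crop"]])

# Damage fraction as a function of depth, linearly interpolated per class.
curves = {
    "urban": ([0.0, 0.5, 1.0, 2.0, 3.0], [0.0, 0.25, 0.4, 0.6, 0.75]),
    "crop":  ([0.0, 0.5, 1.0, 2.0, 3.0], [0.0, 0.10, 0.2, 0.3, 0.35]),
}
exposed_value = {"urban": 200_000, "crop": 5_000}          # value per cell

damage = np.zeros_like(water_depth_m)
for cls, (depths, fractions) in curves.items():
    mask = land_use == cls
    damage[mask] = np.interp(water_depth_m[mask], depths, fractions) * exposed_value[cls]

print(damage)  # expected direct economic damage per cell
```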
This text presents results from three knowledge-gathering activities on the implementation of digital tools in health and social care: a literature review, a two-part workshop series, and a questionnaire survey. The primary target group has been operational developers and project managers in regions and municipalities. The number of respondents in the questionnaire study was too low to draw any statistical conclusions, but the results, together with the workshop series, can nevertheless be used to identify areas where challenges appear to exist. Both the literature review and the questionnaire survey point to a need to develop structured evaluation models for implementation. The workshops also discussed the lacking ability to "collect evidence during the course of projects", and there was a desire for such a way of working to be developed. The questionnaire and the workshops also point to several recurring problems during the different phases of implementation projects. In the initial phase, better analyses and anchoring work are requested. User-focused analyses are missed particularly often, such as user journeys, sustainability analysis, and stakeholder and needs analysis. At the other end of the process, when systems and ways of working are to be decommissioned, there are also challenges and suggestions for improvement; for example, decisions on phasing out old solutions are often missing, and one suggestion is to ensure, already when signing a contract with a supplier, that the supplier will assist with migration at phase-out. Another recurring problem is uncertainty about, and sometimes the absence of, roles, responsibilities, and communication. This concerns not really knowing why things should be done, or the organization and the people who are to carry out the change not being sufficiently involved. It can also concern support and maintenance not being sufficiently well developed, and not knowing how to collaborate with suppliers. Change management is also identified as an important tool for facilitating good implementation.
We benchmark the performance of segment-level metrics submitted to WMT 2023 using the ACES Challenge Set (Amrhein et al., 2022). The challenge set consists of 36K examples representing challenges from 68 phenomena and covering 146 language pairs. The phenomena range from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. For each metric, we provide a detailed profile of performance over a range of error categories as well as an overall ACES-Score for quick comparison. We also measure the incremental performance of the metrics submitted to both WMT 2023 and 2022. We find that 1) there is no clear winner among the metrics submitted to WMT 2023, and 2) performance change between the 2023 and 2022 versions of the metrics is highly variable. Our recommendations are similar to those from WMT 2022. Metric developers should focus on: building ensembles of metrics from different design families, developing metrics that pay more attention to the source and rely less on surface-level overlap, and carefully determining the influence of multilingual embeddings on MT evaluation.
To quickly identify maritime sites polluted by heavy metal contaminants, reductions in the size of instrumentation have made it possible to bring an X-ray fluorescence (XRF) analyzer into the field and in direct contact with various samples. The choice of source-sample-detector geometry plays an important role in minimizing the Compton scattering noise and achieving a better signal-to-noise ratio (SNR) in XRF measurement conditions, especially for the analysis of wet sediments. This paper presents the influence of geometrical factors on a prototype designed for in situ XRF analysis of mercury (Hg) in wet sediments using a 57Co excitation source and an X-ray spectrometer. The unique XRF penetrometer prototype has been constructed and tested for maritime wet sediment. The influence of various geometrical arrangements on detection efficiency and SNR has been investigated using a combination of Monte Carlo simulations and laboratory experiments. Instrument calibration was performed for Hg analysis by means of prepared wet sediments with the XRF prototype. The presented results show that it is possible to detect Hg by K-shell emission, thus enabling XRF analysis for underwater sediments. Consequently, the XRF prototype has the potential to be applied as an environmental screening tool for the analysis of polluted sediments with relatively high concentrations (e.g., >2880 ppm for Hg), which would benefit in situ monitoring of maritime pollution caused by heavy metals.
The Rekovind2 project, financed by the Swedish Energy Agency, focuses on digitizing wind turbine blade streams for reuse and recycling. This is of the utmost importance to enable new, more circular technical solutions that can replace today's non-sustainable recycling, i.e., landfill and incineration of wind turbine blades. In this report, we present the work carried out to map the wind turbine blades in service in Sweden. The digital platform intended to enable the re-use of blades reaching end-of-life is built around the key features that will be required for re-use: a blade database with all the needed information on each blade (age, damages, material, model, ...), a map with blade geolocation, a digital tool to support blade processing such as cutting, and information on what can be done with EoL blades.
Artificial Intelligence offers a wide variety of capabilities that can potentially address people's needs and desires in their specific contexts. This pilot study presents a collaborative method using a deck of AI cards tested with 58 production, AI, and information science students, and experts from an accessible media agency. The results suggest that, with the support of the method and AI cards, participants can ideate and reach conceptual AI solutions. Such conceptualisations can contribute to a more inclusive integration of AI solutions in society.
Futures Literacy is the capability to imagine and understand potential futures to prepare ourselves to act and innovate in the present. This pilot study aims to understand how artistic methodologies and speculative design can support the collaborative exploration of futures in the context of work and contribute to developing people's capability of futures literacy. Our premise is that technologies such as Artificial Intelligence and the Internet of Things can augment people and support their needs at work. To illustrate this process, we have presented a collaborative method that integrates an artistic intervention with speculative design activities. We tested the method in a full-day workshop with seventeen (17) participants from a Swedish academy responsible for enabling learning and competence development at work in the healthcare sector. The results indicate that the artistic intervention, combined with the speculative design activities, can challenge participants' current perspectives and offer them new ways of seeing futures with technologies. These new ways of seeing reveal underlying premises crucial to developing the capability of futures literacy.
Novel IoT market solutions and research promise IoT modules that do not require programming or electrical setup, yet shop floor personnel still face problem-solving activities when creating technical solutions. This paper introduces the Karakuri card deck and presents a case study composed of four workshop sessions in four manufacturing settings, where shop floor personnel tested the cards as a means of ideating and presenting conceptual IoT solutions in the form of diagrams. The results indicate the validity of the proposed conceptual solutions and suggest prototyping as a next step.
This paper presents a collaborative solution developed to enable people without prior Internet of Things (IoT) knowledge to ideate, conceptualise, role-play and prototype potential improvements to their work processes and environments. The solution, called the Karakuri IoT toolkit and method, was tested in two workshops with eight production leaders at a Swedish manufacturing company. Outcomes were analysed from the perspectives of materials interaction and instruments of inquiry. Results indicate the solution can help people conceive and prototype improvement ideas at early design stages.
Communication networks are becoming increasingly important in military operations, with task fields such as time-critical targeting, covert special operations, command and control, training, and logistics all relying heavily on the communication network and its services. At the same time, commercial communication has dramatically transformed our society and the way we communicate. The newest network mode at present, 5G and beyond (5GB), is characterized by high speed, low latency, high reliability, and high communication density. Although the use of 5GB commercial networks by defense agencies can offer greater flexibility and efficiency, it also faces new challenges, requiring high standards of network protection and the ability to operate in harsh working conditions and environments. In this paper, we discuss the significance of communication networks in several potential military applications, particularly warfare, training/drilling, logistics, and special mission-specific stations. We present the communication trends adopted in military applications. We then examine various 5GB key performance indicators and their use cases for military communication systems. We also elaborate on unique challenges of military communication networks that are unlikely to be resolved via commercial 5GB research. The various 5GB enabling technologies for military communication systems are discussed. Lastly, we present and analyze 5GB new radio for private military communication in the C-band.
New radio in unlicensed spectrum (NR-U) is an evolutionary extension of the existing unlicensed spectrum technologies, which allows New Radio (NR) to operate in shared and unlicensed frequency bands. However, in such bands, NR-U must coexist with other radio access technologies (RATs) in a commonly shared spectrum. As the various RATs have dissimilar physical- and link-layer configurations, NR-U should comply with the requirements for harmonious coexistence with them. For this reason, the majority of existing studies on NR-U focus on fair coexistence. In contrast, efforts to attain spectrum efficiency and fairness concurrently have received comparatively little interest, as the two objectives are in conflict. Motivated by this limitation, we propose an algorithm called Thompson sampling-based online gradient ascent (TS-OGA), which jointly considers the fairness between NR-U and the incumbents and, at the same time, the efficiency achieved via pertinent idle-period adjustment of the incumbents in the operating channel. Because NR-U deals with two conflicting and competing objectives (i.e., fairness and efficiency), we model it as a multi-objective multi-armed bandit problem using the Generalized Gini Index aggregation function (GGAF). In the proposed scheme, TS-OGA, a Thompson sampling (TS) policy is employed together with online gradient ascent to address the multi-objective optimization problem. Through simulation results, we show that TS-OGA can significantly enhance overall channel throughput while maintaining fairness. Further, TS-OGA provides the best performance compared to three baseline algorithms: greedy, upper confidence bound, and pure TS.
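For readers unfamiliar with the bandit ingredient, the sketch below shows plain Beta-Bernoulli Thompson sampling over candidate idle-period settings; the multi-objective treatment (GGI aggregation and online gradient ascent) that distinguishes TS-OGA is deliberately omitted, and the environment is a made-up stand-in.

```python
# Sketch: Beta-Bernoulli Thompson sampling over candidate idle-period settings.
# This illustrates only the TS ingredient of TS-OGA; the paper's multi-objective
# handling of fairness vs. efficiency is not reproduced here.
import random

arms = [2, 4, 8, 16]                 # hypothetical idle-period lengths (slots)
successes = [1] * len(arms)          # Beta(1, 1) priors
failures = [1] * len(arms)

def reward(arm_index: int) -> bool:
    """Stand-in environment: shorter idle periods succeed more often here."""
    return random.random() < 0.9 - 0.15 * arm_index

for _ in range(1000):
    # Sample a success probability per arm and play the most promising one.
    samples = [random.betavariate(successes[i], failures[i]) for i in range(len(arms))]
    choice = max(range(len(arms)), key=lambda i: samples[i])
    if reward(choice):
        successes[choice] += 1
    else:
        failures[choice] += 1

best = max(range(len(arms)), key=lambda i: successes[i] / (successes[i] + failures[i]))
print("Best idle period (empirical success rate):", arms[best], "slots")
```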
The neutral host network (NHN) is a new self-contained network envisioned for the fifth generation (5G) of cellular networks, which offers wireless connection to its subscribers from a variety of service providers, including both conventional mobile network operators and non-conventional service providers. The NHN infrastructure, which is operated and maintained by a neutral third party, is rented or leased to network operators looking to scale up their network capacity and coverage in a cost-effective way. This paper highlights the NHN as an emerging communication technology for private networks and discusses its opportunities and challenges in realizing multi-tenanted spaces such as factories, hospitals, stadiums, and universities. The paper also investigates the current state of the art in NHNs and elaborates on the underlying enabling technologies for the NHN. Lastly, an efficient radio access network (RAN) slicing scheme based on the multi-armed bandit approach is proposed to allocate radio resources to various slices, maximizing resource utilization while guaranteeing the availability of resources to meet the capacity needs of each multi-tenanted operator. The simulation results show that the proposed Thompson sampling (TS)-based approach performs best in finding the optimal RAN slice for all the operators.
In this paper, theoretical benchmarking of semi-vertical and vertical gallium nitride (GaN) MOSFETs with rated voltages from 1.2 kV to 3.3 kV is performed against corresponding silicon carbide (SiC) devices. Specific design features and technology requirements for the realization of high-voltage vertical GaN MOSFETs are discussed and implemented in simulated structures. The main findings are that a) the specific on-resistance of vertical GaN devices is expected to be 75% and 40% of that of 1.2 kV and 3.3 kV SiC MOSFETs, respectively, b) semi-vertical GaN does not offer any advantage over SiC MOSFETs for medium- and high-voltage devices (>1.0 kV), and c) vertical GaN has the largest potential advantage for high- and ultra-high-voltage devices (>2.0 kV).
Platooning is an application where a group of vehicles move one after the other in close proximity, acting jointly as a single physical system. The scope of platooning is to improve safety, reduce fuel consumption, and increase road use efficiency. Although conceived as a concept several decades ago, platooning has attracted particular attention in recent years thanks to new progress in automation and vehicular networking, and it is expected to become common in the near future, at least for trucks. The platoon system is the result of a combination of multiple disciplines, from transportation to automation, electronics, and telecommunications. In this survey, we consider platooning, and more specifically the platooning of trucks, from the point of view of wireless communications. Wireless communications are indeed a key element, since they allow information to propagate within the convoy with an almost negligible delay, truly making all vehicles act as one. The scope of this paper is to present a comprehensive survey on connected vehicles for the platooning application, starting with an overview of the projects that are driving the development of this technology, followed by a brief overview of the current and upcoming vehicular networking architectures and standards, a review of the main open issues related to wireless communications applied to platooning, and a discussion of security threats and privacy concerns. The survey concludes with a discussion of the main areas that we consider still open and that can drive future research directions.
Vehicular communications have grown in interest over the years and are nowadays recognized as a pillar of Intelligent Transportation Systems (ITSs), ensuring efficient management of road traffic and a reduction in the number of traffic accidents. To support safety applications, both the ETSI ITS-G5 and IEEE 1609 standard families require each vehicle to deliver periodic awareness messages throughout the neighborhood. As vehicle density grows, the scenario dynamics may require such a high message exchange that it can easily lead to radio channel congestion and, in turn, to a degradation of safety-critical services. ETSI has defined a Decentralized Congestion Control (DCC) mechanism to mitigate channel congestion by acting on the transmission parameters (i.e., message rate, transmit power, and data rate), with performance that varies according to the specific algorithm. In this paper, a review of the DCC standardization activities is presented, together with an analysis of the existing methods and algorithms for congestion mitigation. Some applied machine learning techniques for DCC are also addressed.
DAIS is a step forward in the area of artificial intelligence and edge computing. DAIS intends to create a complete framework for self-organizing, energy efficient and private-by-design distributed AI. DAIS is a European project with a consortium of 47 partners from 11 countries coordinated by RISE Research Institute of Sweden.
Engineering large-scale industrial systems mandates an effective Requirements Engineering (RE) process. Such systems necessitate RE process optimization to align with standards, infrastructure specifications, and customer expectations. Recently, artificial intelligence (AI)-based solutions have been proposed, aiming to enhance the efficiency of requirements management within the RE process. Despite their advanced capabilities, generic AI solutions exhibit limited adaptability in real-world contexts, mainly because of the complexity and specificity inherent to industrial domains. This limitation notably leads to the continued prevalence of manual practices that not only make the RE process heavily dependent on practitioners' experience, and thus prone to errors, but also often contribute to project delays and inefficient resource utilization. To address these challenges, this Ph.D. dissertation focuses on two primary directions: i) conducting a comprehensive focus group study with a large-scale industry to determine the requirements evolution process and its inherent challenges, and ii) proposing AI solutions tailored to industrial case studies to automate and streamline the RE process and optimize the development of large-scale systems. We anticipate that our research will significantly contribute to the RE domain by providing empirically validated insights in an industrial context.
Allocation of requirements to different teams is a typical preliminary task in large-scale system development projects. This critical activity is often performed manually and can benefit from automated requirements classification techniques. To date, limited evidence is available about the effectiveness of existing machine learning (ML) approaches for requirements classification in industrial cases. This paper aims to fill this gap by evaluating state-of-the-art language models and ML algorithms for classification in the railway industry. Since the interpretation of the results of ML systems is particularly relevant in the studied context, we also provide an information augmentation approach to complement the output of the ML-based classification. Our results show that the BERT uncased language model with the softmax classifier can allocate the requirements to different teams with a 76% F1 score when considering requirements allocation to the most frequent teams. Information augmentation provides potentially useful indications in 76% of the cases. The results confirm that currently available techniques can be applied to real-world cases, thus enabling the first step for technology transfer of automated requirements classification. The study can be useful to practitioners operating in requirements-centered contexts such as railways, where accurate requirements classification becomes crucial for better allocation of requirements to various teams.
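A sketch of the classification setup named above: a BERT uncased encoder with a softmax over team labels via Hugging Face transformers. The classification head here is freshly initialized and would need fine-tuning on historical requirement-to-team allocations before real use; the team names and example requirement are illustrative.

```python
# Sketch: BERT uncased with a softmax over team labels for requirements
# allocation. The head is freshly initialized; in practice it is fine-tuned
# on labelled historical allocations. Teams and the requirement are invented.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

teams = ["signalling", "propulsion", "brakes", "hvac"]  # illustrative labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(teams)
)  # fine-tune this model on labelled requirements before relying on its output

requirement = "The converter shall limit inrush current at power-up."
inputs = tokenizer(requirement, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probabilities = torch.softmax(logits, dim=-1)[0]

for team, p in sorted(zip(teams, probabilities.tolist()), key=lambda x: -x[1]):
    print(f"{team}: {p:.2f}")
```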
Requirements in tender documents are often mixed with other supporting information. Identifying requirements in large tender documents could aid the bidding process and help estimate the risk associated with the project. Manual identification of requirements in large documents is a resource-intensive activity that is prone to human error and limits scalability. This study compares various state-of-the-art approaches for requirements identification in an industrial context. For generalizability, we also present an evaluation on a real-world public dataset. We formulate the requirements identification problem as a binary text classification problem. Various state-of-the-art classifiers based on traditional machine learning, deep learning, and few-shot learning are evaluated for requirements identification based on accuracy, precision, recall, and F1 score. Results from the evaluation show that the transformer-based BERT classifier performs best, with an average F1 score of 0.82 and 0.87 on the industrial and public datasets, respectively. Our results also confirm that few-shot classifiers can achieve comparable results, with an average F1 score of 0.76, on significantly fewer samples, i.e., only 20% of the data. There is little empirical evidence on the use of large language models and few-shot classifiers for requirements identification. This paper fills this gap by presenting an industrial empirical evaluation of state-of-the-art approaches for requirements identification in large tender documents. We also provide a running tool and a replication package to support further experimentation and future research in this area.
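A minimal sketch of requirements identification framed as binary text classification; a tf-idf plus logistic-regression baseline stands in here for the evaluated BERT and few-shot classifiers, and the tender sentences are invented.

```python
# Sketch: requirement identification as binary text classification.
# A tf-idf + logistic-regression baseline stands in for the evaluated models
# (BERT, few-shot classifiers); all sentences and labels are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "The supplier shall provide 24/7 maintenance support.",        # requirement
    "The contractor must deliver the system within 18 months.",    # requirement
    "This chapter gives background on the procurement process.",   # other
    "Figure 3 shows the organisation of the tender documents.",    # other
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

test_sentences = [
    "The system shall support redundant power supplies.",
    "Appendix B lists the abbreviations used in this document.",
]
predictions = clf.predict(test_sentences)
print(dict(zip(test_sentences, predictions)))  # 1 = requirement, 0 = other
# In the study, accuracy, precision, recall, and F1 are computed on held-out data.
```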
Many organizations developing software-intensive systems face challenges with high product complexity and large numbers of variants. In order to effectively maintain and develop these product variants, Product-Line Engineering methods are often considered, while Model-based Systems Engineering practices are commonly utilized to tackle product complexity. In this paper, we report on an industrial case study concerning the ongoing adoption of Product Line Engineering in the Model-based Systems Engineering environment at Volvo Construction Equipment (Volvo CE) in Sweden. In the study, we identify and define a Product Line Engineering process that is aligned with Model-based Systems Engineering activities at the engines control department of Volvo CE. Furthermore, we discuss the implications of the migration from the current development process to a Model-based Product Line Engineering-oriented process. This process, and its implications, are derived by conducting and analyzing interviews with Volvo CE employees, inspecting artifacts and documents, and by means of participant observation. Based on the results of a first system model iteration, we were able to document how Model-based Systems Engineering and variability modeling will affect development activities, work products and stakeholders of the work products.
This report focuses on the intersection of Foreign Information Manipulation and Interference (FIMI) and Large Language Models. The aim is to give a non-technical, comprehensive understanding of how weaknesses in the language models can be used to create malicious content for use in FIMI.
Future Unmanned Aerial Vehicles (UAVs) are projected to fly and operate in swarms. The swarm metaphor makes explicit and implicit mappings regarding system architecture and human interaction to aspects of natural systems, such as bee societies. Compared to the metaphor of a team, swarming agents as individuals are less capable, more expendable, and more limited in terms of communication and coordination. Given their different features and limitations, the two metaphors could be useful in different scenarios. We also discuss a choir metaphor and illustrate how it can give rise to different design concepts. We conclude that designers and engineers should be mindful of the metaphors they use because they influence, and limit, how to think about and design for multi-UAV systems.
Swarms of autonomous and coordinating Unmanned Aerial Vehicles (UAVs) are rapidly being developed to enable simultaneous control of multiple UAVs. In the field of Human-Swarm Interaction (HSI), researchers develop and study swarm algorithms and various means of control and evaluate their cognitive and task performance. There is, however, a lack of research describing how UAV swarms will fit into future real-world domain contexts. To remedy this, this paper describes a case study conducted within the community of firefighters, more precisely two Swedish fire departments that regularly deploy UAVs in fire responses. Based on an initial description of how their UAVs are used in a forest firefighting context, participating UAV operators and unit commanders envisioned a scenario that showed how the swarm and its capabilities could be utilized given the constraints and requirements of a forest firefighting mission. Based on this swarm scenario description we developed a swarm interaction model that describes how the operators' interaction traverses multiple levels ranging from the entire swarm, via subswarms and individual UAVs, to specific sensors and equipment carried by the UAVs. The results suggest that human-in-the-loop simulation studies need to enable interaction across multiple swarm levels as this interaction may exert additional cognitive strain on the human operator.
The introduction of artificial intelligence (AI) tools in aviation necessitates more research into human-autonomy teaming in these domain settings. This paper describes the development of a design framework for supporting Human Factors novices in considering human factors, improving human-autonomy collaboration, and maintaining safety when developing AI tools for aviation settings. Combining elements of Hierarchical Task Analysis, Coactive Design, and Types and Levels of Autonomy, the design framework provides guidance in three phases: modelling and understanding the existing system and associated tasks; producing a new function allocation for optimal Human-Autonomy Teaming (HAT); and assessing HAT-related risks of the proposed design. In this framework, designers generate a comprehensive set of design considerations to support subsequent development processes. Framework limitations and future research avenues are discussed.
This case study presents how the mixing of speculative design with artistic methodology can contribute to the inquiry of technological potentialities in the future of work. The goal and belief are that technologies such as artificial intelligence can augment employee creativity and support their well-being at work. The co-design process followed an artistic approach and consisted of three cycles of labs, workshops and events during the span of one year to support professionals with non-technical background in the ideation and conceptualization of possible futures. The artistic approach consisted of different exploration perspectives of technology through the use of embodiment, artifacts and creation of speculative fictions. The research team that facilitated the labs was interdisciplinary and the participants were assembled from different partner organizations from industry and public sector. We share the learnings from this study attending to three different perspectives: our learnings from the facilitation of the artistic approach, our learnings from the future of work ideas and concepts developed by participants, and discussion of what these learnings can mean to design practitioners and the research community. Results indicate that embodiment and speculative fiction can create engagement among professionals that lack technical expertise and support them in collaborative exploration of alternative futures of work with novel and abstract technologies such as AI.
There is a growing consensus around the transformative and innovative power of Artificial Intelligence (AI) technology. AI will transform which products are launched and how new business models will be developed to support them. Despite this, little research exists today that systematically explores how AI will change and support various aspects of innovation management. To address this question, this article proposes a holistic, multi-dimensional AI maturity model that describes the essential conditions and capabilities necessary to integrate AI into current systems, and guides organisations on their journey to AI maturity. It explores how various elements of the innovation management system can be enabled by AI at different maturity stages. Two key experimentation stages are identified, 1) an initial stage that focuses on optimisation and incremental innovation, and 2) a higher maturity stage where AI becomes an enabler of radical innovation. We conclude that AI technologies can be applied to democratise and distribute innovation across organisations.
Due to increasing market diversification and customer demand, more and more software-based products and services are customizable or are designed in the form of many different variants. This brings about new challenges for the software quality assurance processes: How shall the variability be modelled in order to make sure that all features are being tested? Is it better to test selected variants on a concrete level, or can the generic software and baseline be tested abstractly? Can knowledge-based AI techniques be used to identify and prioritize test cases? How can the quality of a generic test suite be assessed? What are appropriate coverage criteria for configurable modules? If it is impossible to test all possible variants, which products and test cases should be selected for test execution? Can security-testing methods be leveraged to an abstract level?
Petri nets are a common method for modeling and simulating systems biology application cases. Usually, different Petri net concepts (e.g., discrete, hybrid, functional) are demanded depending on the purpose of the application case. Modeling complex application cases requires a unification of those concepts, e.g., hybrid functional Petri nets (HFPN) and extended hybrid Petri nets (xHPN). Existing tools have certain limitations, which motivated the extension of VANESA, an existing open-source editor for biological networks. The extension can be used to model, simulate, and visualize Petri nets based on the xHPN formalism. Moreover, it comprises additional functionality to support and help the user. Complex (kinetic) functions are syntactically analyzed and mathematically rendered. Based on the syntax and the given physical unit information, modeling errors are revealed. The numerical simulation is seamlessly integrated and executed in the background by the open-source simulation environment OpenModelica, utilizing the Modelica library PNlib. Visualization of simulation results for places, transitions, and arcs is useful for investigating and understanding the model and its dynamic behavior. The impact of single parameters can be revealed by comparing multiple simulation results. The simulation results, charts, and the entire specification of the Petri net model can be exported as a LaTeX file. All these features are shown in the demonstration case. The utilized Petri net formalism xHPN is fully specified and implemented in PNlib. This assures transparency, reliability, and comprehensible simulation results. Thus, the combination of VANESA and OpenModelica forms a unique open-source Petri net environment focusing on systems biology application cases. VANESA is available at: http://agbi.techfak.uni-bielefeld.de/vanesa.