This paper introduces a novel approach to designing autonomous gate drivers for soft-switched buck converters. The objective is to reduce switching losses, enhance converter efficiency, and reduce electromagnetic interference (EMI). The uniqueness of this converter is that pulse-width modulation is performed autonomously by the gate driver. The gate driver makes quick decisions on switching times, capitalizing on the minimal time delay between measurement and switching. In the proposed buck converter configuration, the gate driver senses both the current through and the voltage across the switches to avoid delay. When a slightly negative voltage is detected across a switch, it is rapidly turned on, resulting in zero-voltage switching (ZVS). With an external snubber capacitor placed across the switches, the turn-off switching losses are also eliminated (ZVS). Hence, both the turn-on and turn-off of the switch are soft. To enable the switch to turn off, a reference value of the switch current is provided to the gate driver using a galvanically isolated current sensor. Through this approach, the efficiency of the 7 kW buck converter has been calculated to exceed 99%, excluding the filter losses. Additional benefits include reduced switch stresses, diminished EMI, and simplified thermal management.
Software systems often target a variety of market segments. Addressing varying customer requirements requires a product-focused development process. Software Product Line (SPL) engineering is one such approach, based on a reuse rationale, to aid quick delivery of quality product variants at scale. SPLs reuse common features across derived products while still providing varying configuration options. The common features are, in most cases, realized by reusable assets. In practice, the assets are reused in a clone-and-own manner to reduce the upfront cost of systematic reuse. Moreover, the assets are implemented in increments, and requirements prioritization also has to be performed. In this context, the manual reuse analysis and prioritization process becomes impractical when the number of derived products grows; it is also time-consuming and heavily dependent on the experience of engineers. In this licentiate thesis, we study requirements-level reuse recommendation and prioritization for SPL assets in industrial settings. We first identify challenges and opportunities in SPLs where reuse is done in a clone-and-own manner. We then focus on one of the identified challenges, requirements-based reuse of SPL assets, and provide automated support for identifying reuse opportunities for SPL assets based on requirements. Finally, we provide automated support for requirements prioritization in the presence of dependencies resulting from reuse.
Problem: The goal of a software product line is to aid quick and quality delivery of software products sharing common features. Effectively achieving this goal requires reuse analysis of the product line features. Existing requirements reuse analysis approaches are not focused on recommending product line features that can be reused to realize new customer requirements. Hypothesis: Given that the customer requirements are linked to descriptions of the product line features satisfying them, the customer requirements can be clustered based on patterns and similarities, preserving the historic reuse information. New customer requirements can then be evaluated against existing customer requirements, and reuse of product line features can be recommended. Contributions: We treat the problem of feature reuse analysis as a text classification problem at the requirements level. We use Natural Language Processing and clustering to recommend reuse of features based on similarities and historic reuse information. The recommendations can be used to realize new customer requirements.
[Context and Motivation] Content-based recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a new requirement is proposed by a stakeholder, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements and, in turn, identify previously developed code. [Question/problem] Several NLP approaches for similarity computation are available, and there is little empirical evidence on which technique is effective in recommender systems specifically oriented to requirements-based code reuse. [Principal ideas/results] This study compares different state-of-the-art NLP approaches and correlates the similarity among requirements with the similarity of their source code. The evaluation is conducted on real-world requirements from two industrial projects in the railway domain. Results show that requirements similarity computed with the traditional tf-idf approach has the highest correlation with the actual software similarity in the considered context, with a moderate positive Spearman's rank correlation coefficient of more than 0.5. [Contribution] Our work is among the first to explore the relationship between requirements similarity and software similarity. In addition, we identify a suitable approach for computing requirements similarity that reflects software similarity well in an industrial context. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and categorization.
Recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a stakeholder proposes a new requirement, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements and, in turn, identify previously developed code. Several NLP approaches for similarity computation between requirements are available. However, there is little empirical evidence on their effectiveness for code retrieval. This study compares different NLP approaches, from lexical ones to semantic, deep-learning techniques, and correlates the similarity among requirements with the similarity of their associated software. The evaluation is conducted on real-world requirements from two industrial projects of a railway company. Specifically, the most similar pairs of requirements across the two industrial projects are automatically identified using six language models. Then, the trace links between requirements and software are used to identify the software pairs associated with each requirements pair. The software similarity between pairs is then automatically computed with JPLag. Finally, the correlation between requirements similarity and software similarity is evaluated to see which language model shows the highest correlation and is thus more appropriate for code retrieval. In addition, we perform a focus group with members of the company to collect qualitative data. Results show a moderately positive correlation between requirements similarity and software similarity, with the pre-trained, deep-learning-based BERT language model (with preprocessing) outperforming the other models. Practitioners confirm that requirements similarity is generally regarded as a proxy for software similarity. However, they also highlight that additional aspects come into play when deciding on software reuse, e.g., domain/project knowledge, information coming from test cases, and trace links. Our work is among the first to explore the relationship between requirements and software similarity from both a quantitative and a qualitative standpoint. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and change impact analysis.
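To illustrate the core of such a correlation analysis, the sketch below computes tf-idf-based requirements similarity and correlates it with software similarity scores using Spearman's rank correlation. It is a minimal sketch assuming scikit-learn and SciPy are available; the requirement texts and software similarity values are hypothetical placeholders, not data from the studies above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.stats import spearmanr

# Hypothetical requirement texts; in the studies these come from railway projects.
requirements = [
    "The system shall apply the brakes when speed exceeds the limit.",
    "Braking shall be triggered when the speed limit is exceeded.",
    "The door shall remain locked while the train is in motion.",
]

# tf-idf similarity between all requirement pairs.
vectors = TfidfVectorizer(lowercase=True, stop_words="english").fit_transform(requirements)
req_sim = cosine_similarity(vectors)

# Similarity score for each requirement pair (upper triangle, i < j).
pairs = [(i, j) for i in range(len(requirements)) for j in range(i + 1, len(requirements))]
req_scores = [req_sim[i, j] for i, j in pairs]

# Placeholder software similarity per pair, e.g., as JPLag would report for the traced code.
sw_scores = [0.81, 0.12, 0.09]

# Spearman's rank correlation between requirements similarity and software similarity.
rho, p_value = spearmanr(req_scores, sw_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```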
Processing and reviewing nightly test execution failure logs for large industrial systems is a tedious activity. Furthermore, multiple failures might share one root/common cause during test execution sessions, and the review might therefore require redundant effort. This paper presents the LogGrouper approach for automated grouping of failure logs to aid root/common cause analysis and to enable the processing of each log group as a batch. LogGrouper uses state-of-the-art natural language processing and clustering approaches to achieve meaningful log grouping. The approach is evaluated in an industrial setting in both a qualitative and a quantitative manner. Results show that LogGrouper produces good-quality groupings in terms of two clustering-quality metrics, the Silhouette Coefficient and the Calinski-Harabasz Index. The qualitative evaluation shows that experts perceive the groups as useful, and the groups are seen as an initial pointer for root cause analysis and failure assignment.
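As an illustration of the quantitative part of such an evaluation, the two clustering-quality metrics can be computed with scikit-learn. This is a minimal sketch assuming tf-idf log representations and k-means grouping; LogGrouper's actual pipeline and data may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

# Hypothetical failure-log snippets; in practice these come from nightly test sessions.
logs = [
    "ERROR timeout waiting for response from node A",
    "ERROR timeout waiting for response from node B",
    "FATAL assertion failed in module parser",
    "FATAL assertion failed in module lexer",
]

features = TfidfVectorizer().fit_transform(logs).toarray()
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Higher Silhouette (max 1.0) and Calinski-Harabasz values indicate better-separated groups.
print("Silhouette:", silhouette_score(features, labels))
print("Calinski-Harabasz:", calinski_harabasz_score(features, labels))
```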
Requirements prioritization plays an important role in driving project success during software development. The literature reveals that existing requirements prioritization approaches ignore vital factors such as the interdependencies between requirements. Existing approaches are also generally time-consuming, involve substantial manual effort, and show substantial limitations in terms of the number of requirements they can handle. There is some evidence suggesting that models could play a useful role in the analysis and visualization of requirements interdependencies, contributing to the improvement of the overall requirements prioritization process. However, to date, only a handful of studies have focused on model-based strategies for requirements prioritization, and they consider only conflict-free functional requirements. This paper uses a meta-model-based approach to help the requirements analyst model the requirements, stakeholders, and interdependencies between requirements. The model instance is then processed by our modified PageRank algorithm to prioritize the given requirements. An experiment was conducted comparing our modified PageRank algorithm's efficiency and accuracy with five existing requirements prioritization methods. We also compared our results with a baseline prioritized list of 104 requirements prepared by 28 graduate students. Our results show that our modified PageRank algorithm was able to prioritize the requirements more effectively and efficiently than the other prioritization methods.
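To make the prioritization idea concrete, the sketch below runs standard PageRank over a small requirements dependency graph using networkx. It illustrates the principle only: the paper uses a modified PageRank whose exact adjustments are not reproduced here, and the graph is a hypothetical example.

```python
import networkx as nx

# Directed graph: an edge R1 -> R3 means requirement R1 depends on R3,
# so R3 accumulates importance from the requirements that rely on it.
deps = nx.DiGraph()
deps.add_edges_from([
    ("R1", "R3"), ("R2", "R3"),  # R3 is depended on by R1 and R2
    ("R4", "R1"), ("R4", "R2"),
])

# Standard PageRank as a stand-in for the paper's modified variant.
scores = nx.pagerank(deps, alpha=0.85)
for req, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{req}: {score:.3f}")
```

Requirements that many others depend on float to the top of the ranking, which matches the intuition that they should be realized (and here, prioritized) first.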
The use of requirements information in testing is a well-recognized practice in the software development life cycle. The literature reveals that existing test prioritization and selection approaches neglect vital factors affecting test priorities, such as the interdependencies between requirement specifications. We believe that models can play a positive role in specifying these interdependencies and in prioritizing tests based on them. However, to date, few studies make use of requirements interdependencies for test case prioritization. This paper uses a meta-model to aid in modeling requirements, their related tests, and the interdependencies between them. The instance of this meta-model is then processed by our modified PageRank algorithm to prioritize the requirements. The requirement priorities are then propagated to the related test cases in the test model, and test cases are selected based on coverage of extra-functional properties. We demonstrate the applicability of our proposed approach on a small example case.
The software system controlling a train is typically deployed on various hardware architectures and must process various signals across those deployments. The increase in such customization scenarios, together with the need for the software to adhere to various safety standards in different application domains, has led to the adoption of product line engineering within the railway domain. This paper explores the current state of practice of software product line development within a team developing industrial embedded software for a train propulsion control system. Evidence is collected using a focus group session with several engineers and through inspection of archival data. We report several benefits and challenges experienced during product line adoption and deployment. Furthermore, we identify and discuss improvement opportunities, focusing mainly on product line evolution and test automation.
Categorizing existing test specifications can provide insights into a test suite's coverage of extra-functional properties. Manual approaches to test categorization can be time-consuming and prone to error. In this short paper, we propose a semi-automated approach for semantic keyword-based textual test categorization for extra-functional properties. The approach is a first step towards coverage-based test case selection based on extra-functional properties. We report a preliminary evaluation on industrial data, categorizing tests with respect to safety aspects. Results show that keyword-based approaches can be used to categorize tests for extra-functional properties and can be improved by considering the contextual information of keywords.
This tutorial explores requirements-based reuse recommendation for product line assets in the context of clone-and-own product lines.
Software product lines (SPLs) are based on a reuse rationale to aid quick and quality delivery of complex products at scale. Deriving a new product from a product line requires reuse analysis to avoid redundancy and support a high degree of asset reuse. In this paper, we propose and evaluate automated support for recommending SPL assets that can be reused to realize new customer requirements. Using the existing customer requirements as input, the approach applies natural language processing and clustering to generate reuse recommendations for unseen customer requirements in new projects. The approach is evaluated both quantitatively and qualitatively in the railway industry. Results show that our approach can recommend reuse with 74% accuracy and a 57.4% exact match rate. The evaluation further indicates that the recommendations are relevant to engineers and can support the product derivation and feasibility analysis phases of projects. The results encourage further study of automated reuse analysis at other levels of abstraction.
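A minimal sketch of the recommendation idea: cluster historical customer requirements (each linked to the assets that realized them), then route a new requirement to its nearest cluster and recommend that cluster's assets. The vectorization, clustering algorithm, and example data below are illustrative assumptions rather than the exact pipeline evaluated in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Historical customer requirements with the product line assets reused for them.
history = [
    ("Display train speed on the driver panel", "HMI-Speed"),
    ("Show current velocity to the driver", "HMI-Speed"),
    ("Log all braking events for diagnostics", "Diag-Logger"),
    ("Record brake applications in the event log", "Diag-Logger"),
]
texts, assets = zip(*history)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# For a new requirement, recommend the assets seen in its nearest cluster.
new_req = "The driver display shall indicate the train's speed"
cluster = km.predict(vectorizer.transform([new_req]))[0]
recommended = {a for a, c in zip(assets, km.labels_) if c == cluster}
print("Recommended assets:", recommended)
```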
The digitization of a supply chain involves satisfying several functional and non-functional, context-specific requirements. The work presented herein builds on efforts to elicit trust and profit requirements from actors in the Swedish livestock supply chain, specifically the beef supply chain. Interviewees identified several benefits related to data sharing and traceability but also emphasized that these benefits could only be realized if concerns around data security and data privacy were adequately addressed. We developed a data sharing platform in response to these requirements. Requirements around verifiability, traceability, secure sharing of potentially large data objects, fine-grained access control, and the ability to link data objects together were realized using distributed ledger technology and a distributed file system. This paper presents this data sharing platform together with an evaluation of its usefulness in the context of beef supply chain traceability.
In recent years, the advent of the Internet of Things (IoT) in different industries has made it possible to connect objects and exchange information. This capability can be used to meet different requirements in each industry. Intelligent transportation uses the Internet of Vehicles (IoV) as a solution for communication among vehicles; it improves traffic management applications and services to help guarantee safety on roads. We categorize services, applications, and architectures, and propose a taxonomy for the IoV. We then study open issues and challenges for future work. We highlight applications and services addressing drivers' requirements and non-functional requirements, considering qualitative characteristics. This paper summarizes the current state of the IoV with respect to architectures, services, and applications, and can serve as a starting point for solutions to traffic management challenges in cities. The present study is beneficial for smart city development and management. According to our results, the selected papers evaluate their services and applications with respect to performance with a frequency of 34%, and with respect to safety, data accuracy, and security with a frequency of 13%. These measurements are essential given IoV characteristics such as real-time operation, accident avoidance in applications, and complex user data management.
Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. Indeed, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges by providing a flexible approach to allow the certification (and hence adoption) of DL-based solutions in CAIS, building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance with certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations, to regain determinism, and probabilistic timing analyses, to handle the remaining non-determinism.
Communication networks are vital for society, and network availability is therefore crucial. There is huge potential in using network telemetry data and machine learning algorithms to proactively detect anomalies and remedy problems before they affect the customers. In practice, however, there are many steps on the way to getting there. In this paper, we present ongoing development work on efficient data collection pipelines, anomaly detection algorithms, and analysis of traffic patterns and predictability.
Building open-domain conversational systems (chatbots) that produce convincing responses is a recognized challenge. Recent state-of-the-art (SoTA) transformer-based models for the generation of natural language dialogue have demonstrated impressive performance in simulating human-like, single-turn conversations in English. This work investigates, through an empirical study, the potential for transfer learning of such models to the Swedish language. DialoGPT, an English-language pre-trained model, is adapted by training on three different Swedish conversational datasets obtained from publicly available sources: Reddit, Familjeliv, and the GDC. Perplexity score (an automated intrinsic metric) and human-evaluation surveys were used to assess the performance of the fine-tuned models. We also compare the DialoGPT experiments with an attention-mechanism-based seq2seq baseline model trained on the GDC dataset. The results indicate that the capacity for transfer learning can be exploited with considerable success. Human evaluators judged over 57% of the chatbot responses to be human-like for the model trained on the largest (Swedish) dataset. The work agrees with the hypothesis that deep monolingual models learn some abstractions that generalize across languages. We contribute the code, datasets, and model checkpoints, and host the demos on the HuggingFace platform.
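For reference, the perplexity score used as the intrinsic metric above is the exponentiated average negative log-likelihood that the model assigns to held-out tokens:

\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p_{\theta}\!\left(w_i \mid w_{<i}\right)\right)

Lower perplexity means the model finds the held-out dialogue less surprising; it complements, but does not replace, human judgments of response quality.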
We present the design and evaluation of a 3.5-year embedded sensing deployment at the Mithræum of Circus Maximus, a UNESCO-protected underground archaeological site in Rome (Italy). Unique to our work is the use of energy harvesting from thermal and kinetic energy sources. The extreme scarcity and erratic availability of energy, however, pose great challenges in system software, embedded hardware, and energy management. We tackle them by testing, for the first time in a multi-year deployment, existing solutions in intermittent computing, low-power hardware, and energy harvesting. Through three major design iterations, we find that these solutions operate as isolated silos and lack integration into a complete system, performing suboptimally. In contrast, we demonstrate the efficient performance of a hardware/software co-design featuring accurate energy management and capturing the coupling between energy sources and sensed quantities. Installing a battery-operated system alongside also allows us to perform a comparative study of energy harvesting in a demanding setting. Although the latter reduces energy availability and thus lowers the data yield to about 22% of that provided by batteries, our system provides a comparable level of insight into the environmental conditions and structural health of the site. Further, unlike existing energy-harvesting deployments that are limited to a few months of operation in the best cases, our system has run with zero maintenance for almost 2 years, including 3 months of site inaccessibility due to a COVID-19 lockdown.
To ensure traffic safety and the proper operation of vehicular networks, safety messages or beacons are periodically broadcast in Vehicular Ad hoc Networks (VANETs) to neighboring nodes and roadside units (RSUs). Thus, the authenticity and integrity of received messages, along with trust in the source nodes, are crucial and highly required in applications where a failure can result in life-threatening situations. Several digital-signature-based approaches have been described in the literature to achieve the authenticity of these messages. In these schemes, scenarios with a high level of vehicle density are handled by RSUs, where aggregated signature verification is performed. However, most of these schemes are centralized and PKI-based, whereas our goal is to develop a decentralized, dynamic system. Along with authenticity and integrity, trust management plays an important role in VANETs, enabling secure and verified communication. A number of trust management models have been proposed, but the topic remains an ongoing matter of interest; similarly, authentication, a vital security service during communication, is largely absent from the literature on trust management systems. This paper proposes a secure and publicly verifiable communication scheme for VANETs which achieves source authentication, message authentication, non-repudiation, integrity, and public verifiability. All of these are achieved through digital signatures, the Hash-based Message Authentication Code (HMAC) technique, and a logging mechanism aided by blockchain technology.
This report addresses digitalization, i.e., the introduction of new digital technology, in the management of bridges. The scope is a pre-study with the aim of identifying the need for future research for the long-term development of bridge management. A basic premise was that digitalization should reduce the need for costly maintenance measures while maintaining a high level of safety for our bridges. The project's goals were to gather information about the digital information models created during the investment phase, to evaluate the handover of digital models to the management phase, and to assess the potential benefit of digital data collection for condition assessment and maintenance planning. An important part of this was describing today's management system and how it could be developed. The studies were carried out through a questionnaire survey with respondents from consulting firms active in bridge design, interviews with technical experts, and literature searches. The results show that bridge design today is mainly done through building information modelling (BIM). The focus is on the construction phase, where coordination and communication are judged to be the greatest benefits. The handover to the management phase, however, consists of as-built drawings in the form of simple drawing files. Although Trafikverket's (the Swedish Transport Administration's) BIM strategy states that an information model should live on throughout the bridge's entire service life, there are doubts as to whether a model from the design phase is suitable as a management model. Instead, other methods are highlighted for creating a model of the as-built structure, for example optical methods such as scanning and photogrammetry. The management systems should be developed with functions for storing and making available large amounts of digital information from sensors and automated inspections. The purpose is to reduce the uncertainties about the as-built structure and the degree of deterioration, ultimately creating a better basis for decisions on maintenance measures. A future scenario is a digital twin that mirrors the real structure and is continuously updated with sensor data. Regarding measurement hardware, sensors and systems need to be developed with respect to energy consumption, energy harvesting, and maintenance, e.g., through combinations of replaceable short-lived components and other parts with long service lives. Fiber-optic sensors show promising properties, but development is needed to make them more cost-effective relative to conventional sensors.
Information-centric networks (ICNs) intrinsically support multipath transfer and have thus been seen as an exciting paradigm for IoT and edge computing, not least in the context of 5G mobile networks. One key to ICN's success in these and other networks that must support a diverse set of services over a heterogeneous network infrastructure is to schedule traffic efficiently over the available network paths. This paper presents and evaluates ZQTRTT, a multipath scheduling scheme for ICN that load-balances bulk traffic over the available network paths and schedules latency-sensitive, non-bulk traffic to reduce its transfer delay. A new metric called the zero queueing time (ZQT) ratio estimates path load and is used to compute forwarding fractions for load balancing. In particular, the paper shows through a simulation campaign that ZQTRTT can accommodate the demands of both latency-sensitive and latency-insensitive traffic as well as distribute traffic evenly over the available network paths.
Intermittently powered embedded devices ensure forward progress of programs through state checkpointing in non-volatile memory. Checkpointing is, however, expensive in energy and adds to execution times. To minimize this overhead, we present DICE, a system that renders differential checkpointing profitable on these devices. DICE is unique because it is a software-only technique, and efficient because it operates only in volatile main memory to compute the differential. DICE can be integrated with reactive (Hibernus) or proactive (MementOS, HarvOS) checkpointing systems, and arbitrary code can be enabled for DICE through automatic code instrumentation, requiring no additional programmer effort. By reducing the cost of checkpoints, DICE cuts the peak energy demand of these devices, allowing operation with energy buffers one-eighth of the size originally required, leading to benefits such as smaller device footprints and faster recharging to the operational voltage level. The impact on final performance is striking: with DICE, Hibernus requires one order of magnitude fewer checkpoints and one order of magnitude less time to complete a workload in real-world settings.
Transiently powered computers (TPCs) form the foundation of the battery-less Internet of Things, using energy harvesting and small capacitors to power their operation. This kind of power supply is characterized by extreme variations in supply voltage, as capacitors charge when harvesting energy and discharge when computing. We experimentally find that these variations cause marked fluctuations in clock speed and power consumption. Such a deceptively minor observation is overlooked in the existing literature; systems are thus designed and parameterized in overly conservative ways, missing out on a number of optimizations. We demonstrate that it is, rather, possible to accurately model and concretely capitalize on these fluctuations. We derive an energy model as a function of supply voltage and demonstrate its use in two settings. First, we develop EPIC, a compile-time energy analysis tool. We use it to substitute for the constant-power assumption in existing analysis techniques, giving programmers accurate information on the worst-case energy consumption of programs. When EPIC is used with existing TPC system support, run-time energy efficiency drastically improves, leading to up to a 350% speedup in the time to complete a fixed workload. Further, when used with existing debugging tools, EPIC avoids unnecessary program changes that hurt energy efficiency. Next, we extend the MSPsim emulator and explore its use in parameterizing a different TPC system support. The improvements in energy efficiency yield more than a 1000% speedup in the time to complete a fixed workload.
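For orientation, the energy available to a capacitor-powered TPC between two supply-voltage thresholds follows the standard quadratic relation

E = \frac{1}{2}\, C \left( V_{\mathrm{on}}^{2} - V_{\mathrm{off}}^{2} \right)

where C is the capacitance, V_on the voltage at which computation resumes, and V_off the brown-out threshold. The contribution above goes beyond the constant-power assumption usually layered on top of this relation, modeling how power consumption and clock speed themselves vary with the supply voltage; the exact model is derived in the paper.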
Residual stresses created during the packaging process can adversely affect the reliability of electronic components. We used the incremental hole-drilling method, following the ASTM E837-20 standard, to measure packaging-induced residual stresses in discrete packages of power electronics components. For this purpose, we bonded a strain gauge to the surface of a gallium nitride (GaN) power component, drilled a hole through the thickness of the component in several incremental steps, recorded the relaxed strain data on the sample surface using the strain gauge, and finally calculated the residual stresses from the measured strain data. The recorded strains and the residual stresses are related by compliance coefficients. For hole drilling in isotropic materials, the compliance coefficients are calculated from analytical solutions and are available in the ASTM standard. However, for the orthotropic, multilayered components typically found in microelectronics assemblies, numerical solutions are necessary. We developed a subroutine in ANSYS APDL to calculate the compliance coefficients of the hole-drilling test in molded and embedded power electronics components. This extends the capability of the hole-drilling method to determine residual stresses in the more complex layered structures found in electronics.
Additive manufacturing (AM) of large-scale polymer and composite parts using robotic arms integrated with extruders has received significant attention in recent years. Despite great technical progress and material development towards optimizing this manufacturing method, various failure modes observed in the final printed products have hindered its application in producing large engineering structures for the aerospace and automotive industries. We report failure modes in a variety of printed polymer and composite parts, including fuel tanks and car bumpers. The delamination and warpage observed in these parts originate mostly from thermal gradients and residual stresses accumulated during material deposition and cooling. Because printing large structures requires expensive resources, process simulation to recognize the possible failure modes can significantly lower the manufacturing cost. In this regard, accurate prediction of the temperature distribution using thermal simulations is the first step. Finite element analysis (FEA) was used for process simulation of large-scale robotic AM. The important steps of the simulation are presented, and the challenges related to the modeling are recognized and discussed in detail. The numerical results showed reasonable agreement with the temperature data measured by an infrared camera. While in small-scale extrusion AM the cooling time to the glassy state is less than 1 s, in large-scale AM the cooling time is around two orders of magnitude longer.
Compared with silicon-based power devices, wide band gap (WBG) semiconductor devices operate at the significantly higher power densities required in applications such as electric vehicles and more-electric airplanes. This necessitates the development of power electronics packages with enhanced thermal characteristics that fulfil the electrical insulation requirements. The present research investigates the feasibility of using ceramic additive manufacturing (AM), also known as three-dimensional (3D) printing, to address thermal and electrical requirements in packaging gallium nitride (GaN) based high-electron-mobility transistors (HEMTs). The goal is to exploit the design freedom and manufacturing flexibility provided by ceramic AM to fabricate power device packages with a lower junction-to-ambient thermal resistance (RθJA). Ceramic AM also enables the incorporation of intricate 3D features into the package structure in order to control the isolation distance between the package source and drain contact pads. Moreover, AM makes it possible to fabricate different parts of the packaging assembly as a single structure, avoiding high-thermal-resistance interfaces; for example, the ceramic package and the ceramic heatsink can be printed as a single part without any bonding layer. Thermal simulations under different thermal loading and cooling conditions show the improved thermal performance of the package fabricated by ceramic AM. If assisted by an efficient cooling strategy, the proposed package has the potential to reduce RθJA by up to 48%. The results of preliminary efforts to fabricate the ceramic package by AM are presented, and the challenges that must be overcome for further development of this manufacturing method are recognized and discussed.
Silicon carbide (SiC) power devices are steadily increasing their market share in various power electronics applications. However, they require low-inductance packaging in order to realize their full potential. In this research, low-inductance layouts for half-bridge power modules using a direct bonded copper (DBC) substrate, suitable for SiC power devices, were designed and tested. To reduce the negative effects of switching transients on the gate voltage, flexible printed circuit boards (PCBs) were used to interconnect the gate and source pins of the module with the corresponding pads of the power chips. In addition, conductive springs were used as low-inductance, solder-free contacts for the module power terminals. The module casing and lid were produced using additive manufacturing, also known as 3D printing, to create a compact design. It is shown that the inductance of this module is significantly lower than that of commercially available modules.
Double-sided modules accommodating wide band gap (WBG) devices are increasingly used in electric vehicles owing to their lower thermal resistance and parasitic inductances. Compared with single-sided modules having a single ceramic substrate, the mechanical constraint imposed on the silver-sintered bonding layers in double-sided modules (with two ceramic substrates) poses a more challenging reliability issue. In this work, we develop a parametric model to investigate the effects of layout, geometry, and material properties on the damage distribution in the silver-sintered layers of double-sided modules. The Anand viscoplastic model was used to describe the inelastic deformation of sintered silver under power cycling. The equivalent inelastic strain accumulated in each power cycle was used as the damage parameter and failure criterion. The model enables a parametric study of damage distribution in double-sided modules and helps improve designs for maximum reliability. Using this model, the effects of parameters such as spacer and die thicknesses were investigated.
Soft composite actuators can be fabricated by embedding shape memory alloy (SMA) wires into soft polymer matrices. Shape retention and recovery of these actuators are typically achieved by incorporating shape memory polymer segments into the actuator structure. However, this requires complex manufacturing processes. This work uses multimaterial 3D printing to fabricate composite actuators with variable stiffness capable of shape retention and recovery. The hinges of the bending actuators presented here are printed from a soft elastomeric layer as well as a rigid shape memory polymer (SMP) layer. The SMA wires are embedded eccentrically over the entire length of the printed structure to provide the actuation bending force, while the resistive wires are embedded into the SMP layer of the hinges to change the temperature and the bending stiffness of the actuator hinges via Joule heating. The temperature of the embedded SMA wire and the printed SMP segments is changed sequentially to accomplish a large bending deformation, retention of the deformed shape, and recovery of the original shape, without applying any external mechanical force. The SMP layer thickness was varied to investigate its effect on shape retention and recovery. A nonlinear finite element model was used to predict the deformation of the actuators.
Four-dimensional (4D) printed structures fabricated from shape memory polymers (SMPs) are typically one-way actuators; that is, for each actuation cycle, they must be programmed to deform from the original (as-printed) shape to a secondary (programmed) shape. This is done by applying a combination of thermal and mechanical loads. They then restore the initial shape during the actuation process upon application of a thermal load. Here, we generalize this concept to fabricate two-way actuators by embedding shape memory alloy (SMA) wires into the printed SMP structures. In greater detail, we describe the printing process of a two-way bending actuator whose bilayer hinges consist of stiff SMPs as well as low-modulus elastomers. Joule heating was employed to modulate the hinges' bending stiffness: electrical current was applied to resistive wires inserted into the hinges' SMP layer to control their temperature. The thermomechanical programming of the SMA wires integrated into the actuator, in turn, provided the bending actuation force. The fabricated actuator was able to bend, maintain the deformed shape, and recover the as-fabricated shape in a fully automated manner. The further potential of this design methodology was assessed using a nonlinear finite element model, which employed user-defined subroutines to capture the complex material behaviors of the SMAs and SMPs.
The Concordance Index (C-index) is a commonly used metric in survival analysis for evaluating the performance of a prediction model. In this paper, we propose a decomposition of the C-index into a weighted harmonic mean of two quantities: one for ranking observed events versus other observed events, and the other for ranking observed events versus censored cases. This decomposition enables a finer-grained analysis of the relative strengths and weaknesses of different survival prediction methods. Its usefulness is demonstrated through benchmark comparisons against classical models and state-of-the-art methods, together with the new variational generative neural-network-based method (SurVED) proposed in this paper. The performance of the models is assessed using four publicly available datasets with varying levels of censoring. Using the C-index decomposition and synthetic censoring, the analysis shows that deep learning models utilize the observed events more effectively than other models, allowing them to keep a stable C-index at different censoring levels. In contrast, classical machine learning models deteriorate when the censoring level decreases, due to their inability to improve on ranking the events versus other events.
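One consistent way to write such a weighted harmonic-mean decomposition, with C_ee denoting the C-index restricted to comparable event-event pairs, C_ec the index restricted to event-censored pairs, and weights w_ee + w_ec = 1 (the paper defines the exact weights), is

C = \left( \frac{w_{ee}}{C_{ee}} + \frac{w_{ec}}{C_{ec}} \right)^{-1}

This form makes explicit how an overall C-index can hide whether a model ranks events well against other events, against censored cases, or both.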
Context: The ability of Free and Open Source Software (FOSS) communities to stay viable and productive over time is pivotal for society, as they maintain the building blocks that digital infrastructure, products, and services depend on. Sustainability may, however, be characterized from multiple aspects, and less is known about how these aspects interplay and impact community outputs, and software quality specifically. Objective: This study therefore aims to empirically explore how the different aspects of FOSS sustainability impact software quality. Method: 16 sustainability metrics across four categories were sampled and applied to a set of 217 OSS projects sourced from the Apache Software Foundation Incubator program. The impact of a decline in the sustainability metrics was analyzed against eight software quality metrics using Bayesian data analysis, which incorporates probability distributions to represent the regression coefficients and intercepts. Results: Findings suggest that the selected sustainability metrics do not significantly affect defect density or code coverage. However, a positive impact of community age was observed on specific code quality metrics, such as risk complexity, the number of very large files, and code duplication percentage. Interestingly, findings show that even in sustainable communities, certain code quality metrics are negatively impacted. Conclusion: Findings imply that code quality practices are not consistently linked to sustainability, and defect management and prevention may be prioritized over the former. Results suggest that growth, resulting in a more complex and larger codebase, combined with a probable lack of understanding of code quality standards, may explain the degradation in certain aspects of code quality.
Large-scale flood risk assessment is essential in supporting national and global policies, emergency operations, and land-use management. The present study proposes a cost-efficient method for the large-scale mapping of direct economic flood damage in data-scarce environments. The proposed framework consists of three main stages: (i) deriving a water depth map through a geomorphic method based on a supervised linear binary classification; (ii) generating an exposure land-use map developed from multi-spectral Landsat 8 satellite images using a machine-learning classification algorithm; and (iii) performing a flood damage assessment using a GIS tool, based on the vulnerability (depth-damage) curves method. The proposed integrated method was applied over the entire country of Romania (including minor-order basins) for a 100-year return period at 30-m resolution. The results showed how the description of flood risk may especially benefit from the ability of the proposed cost-efficient model to carry out large-scale analyses in data-scarce environments. This approach may help in performing and updating risk assessments and management, taking into account the temporal and spatial changes in hazard, exposure, and vulnerability.
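Stage (iii) reduces, cell by cell, to looking up a depth-damage (vulnerability) curve for the exposed land-use class and scaling by the economic value at risk. Below is a minimal sketch with entirely illustrative land-use classes, curves, and unit values, not the calibrated ones used in the study.

```python
def flood_damage(depth_m, land_use):
    """Direct economic damage per cell from depth-damage (vulnerability) curves.

    Curves and unit values are illustrative placeholders.
    """
    # Damage fraction as a function of water depth, per land-use class.
    curves = {
        "residential": lambda d: min(1.0, 0.25 * d),  # saturates at total damage
        "agriculture": lambda d: min(1.0, 0.10 * d),
    }
    # Economic value exposed per cell (e.g., EUR per 30 m x 30 m cell).
    unit_value = {"residential": 50_000, "agriculture": 2_000}
    return curves[land_use](depth_m) * unit_value[land_use]

# Example: a residential cell with 1.5 m of water for the 100-year event.
print(flood_damage(1.5, "residential"))  # 0.375 * 50000 = 18750.0
```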
A generic, flexible simulation model is developed with the aim of contributing to understanding, and of providing the means to easily test, what the electrification (battery-electric) of selected air traffic flows can be expected to imply in terms of charging infrastructure requirements at airports. The model is developed in Python and includes several different approaches for testing electrification, based both on reading in historical air traffic data and on creating new, not-yet-existing flight timetables for electric aviation. Since there are currently no electric aircraft in commercial scheduled service, and thus no data or statistics regarding their performance or characteristics, a model is also developed for this, which allows simulation of selected flight connections and yields their energy consumption and flight time. The project starts from an electric aircraft model parameterized in accordance with certification level CS/FAR-23 (19 seats and a maximum weight of 8618 kg). The logic of the model is to follow the complete chain of movements of each individual aircraft over a given period (typically one day), where the charging need of each aircraft at each airport in the chain is determined by the battery's energy level at the start of the flight, how much energy was consumed during the flight, when the aircraft arrives at its destination, and when it needs to begin its next flight. Taxiing in and out at the airports also affects how much time is available for charging. A built-in charging curve limits how quickly it is practically suitable for the battery to be charged. The charging curve is defined by a relationship between C-rate (charging rate) and SoC (state of charge). In addition, the chargers themselves can be limited to a certain maximum power, which governs how quickly energy can be delivered to the aircraft's batteries. To achieve sufficient range, electric aircraft are expected to have relatively large batteries, which will moreover likely need to be charged within short time intervals at the airports (turnaround times). The demand for installed power capacity at airports can therefore be expected to increase drastically if several electric aircraft need to charge simultaneously. The project therefore places extra emphasis on developing smart algorithms for controlling power consumption over time, with the ambition of load balancing and reducing power peaks during simultaneous charging. Finally, the project discusses the implications electric aviation may have from the perspective of air traffic management and of existing and future airspace structures. Several case studies are carried out to exemplify the modelling process and the results that the user ultimately obtains. The project does not aim to create a finished commercial tool, but rather a first version, laying the foundation for further development of an analysis tool that is useful for airports and other actors in the aviation industry now and in future research and development collaborations.
This paper presents the results of the MODELflyg research project funded by the Swedish Transport Administration to gain more knowledge about ground charging infrastructure demand for the electrification of air traffic. An integrated simulation model was developed including flight traffic data processing, modelling of battery electric aircraft performance, and charging simulations. Several different options are available to select specific air traffic flows of interest, including scheduling algorithms for electric aviation adapted timetables. Furthermore, a smart-charging algorithm was developed to lower peak power demand at each airport from simultaneous charging of multiple electric aircraft.
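The charging logic described in the two preceding summaries can be sketched compactly: at each time step, the power delivered to an aircraft battery is capped both by the charger's rated power and by a C-rate limit that depends on the current state of charge. The curve shape and all parameter values below are illustrative assumptions, not those of the MODELflyg model.

```python
def max_charge_power_kw(soc, capacity_kwh, charger_max_kw):
    """Power cap at a given state of charge (SoC, 0..1).

    The C-rate vs. SoC curve is an illustrative placeholder: fast charging
    is allowed at low SoC and tapers off as the battery fills.
    """
    c_rate = 2.0 if soc < 0.5 else 2.0 - 3.0 * (soc - 0.5)  # tapers to 0.5C at full
    return min(c_rate * capacity_kwh, charger_max_kw)

def charge(soc, capacity_kwh, charger_max_kw, minutes, step_min=1):
    """Simulate charging for a number of minutes; returns the final SoC."""
    for _ in range(0, minutes, step_min):
        power_kw = max_charge_power_kw(soc, capacity_kwh, charger_max_kw)
        soc = min(1.0, soc + power_kw * (step_min / 60) / capacity_kwh)
    return soc

# Example: 800 kWh battery, 1 MW charger, 40-minute turnaround from 30% SoC.
print(f"SoC after turnaround: {charge(0.30, 800, 1000, 40):.2f}")
```

Capping power this way is what gives a smart-charging scheduler room to shift energy delivery within the available turnaround time and flatten the airport-level power peak.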
Route-based simulated traffic data for high-resolution analysis of heavy goods transport on the Swedish road network. In this report, a national database has been created for freight transport with heavy road vehicles. The primary purpose of the work is to serve as input for further analysis of what appropriate charging infrastructure planning and placement should look like given knowledge of the transport work. There has thus been no ambition in this report to give recommendations about, for example, the expansion of charging infrastructure, but rather to collect and process information/data, develop methods, and finally generate a data set that is useful and well representative of the traffic on the national road network. At the time of this publication, a dataset is available based on data from the Swedish Transport Administration's Samgods model, with its simulations of transport connections based on transport demand between producer and consumer zones. In addition, all transport connections have been translated into routes (how trucks drive from A to B) on the road network, to enable analysis of electrification of/at specific road segments. Finally, the dataset has also been calibrated in various ways to better match statistics and actual measurements, as some major differences/deviations compared with some of them were identified. What the data set now contains can be summarized as the number of truck movements and tons of goods that annually pass each road segment of the Swedish road network (and some foreign roads). Furthermore, these totals can easily be divided into subsets and linked to specific routes, types of trucks (weight classes), origins, etcetera. Some shortcomings/limitations were noticed during the production of this data set, such as the fact that the Samgods model seems to miss a lot of transport in metropolitan areas, that the routing carried out for all flows is not completely perfect (which partly has to do with requests to OpenStreetMap), that the methods for generating new routes based on population density within municipalities are unlikely to be fully representative of where the transport is going, and that the data itself is based on a simulation model that tries to optimize which type of transport should be used to meet which demand. A couple of additional things are worth clarifying: (1) The data only gives the number of transports or shipped goods between start and end nodes. Thus, there is no way to determine what the movement pattern of an individual vehicle looks like between routes, nor when in time each transport is performed. (2) The data only includes freight transport, and thus "misses", for example, all passenger car traffic, which should also be seen as potential users of the charging infrastructure and thus be included in future calculations. It would therefore be interesting to include these in some way in a next step.
Active mobility, such as biking, faces a common challenge in Swedish municipalities due to the lack of adequate lighting during the dark winter months. Insufficient lighting infrastructure hinders individuals from choosing bicycles, despite the presence of well-maintained bike paths and a willingness to cycle. To address this issue, a project has been undertaken in the Swedish municipality of Skara to test an alternative lighting solution using drones. A series of tests has been conducted based on drone prototypes developed for the selected bike paths. Participants were invited to cycle in darkness illuminated by drone lighting and to share their mobility preferences and perceptions. This paper summarizes the users' perception of drone lighting as an alternative to fixed lighting on bike paths, with a special focus on the impact on travel habits and the perceived sense of security and comfort. Most participants were regular cyclists who cited bad weather, time, and darkness as significant factors that deterred them from cycling more frequently and reduced their sense of security. With drone lighting, the participants appreciated the illumination's moonlight-like quality and its ability to enhance their sense of security by illuminating the surroundings. On the technology side, they gave feedback on reducing the drone's sound and addressing lighting stability issues. In summary, the test results showcase the potential of drone lighting as a viable alternative to traditional fixed lighting infrastructure, offering improved traffic safety, sense of security, and comfort. The results show the feasibility and effectiveness of this innovative approach, supporting the transformation towards active and sustainable mobility, particularly in regions facing lighting challenges.
Electric Road System (ERS) is a technology concept with the potential to dramatically reduce fossil fuel dependency in the transport system. ERS is defined by electric power transfer from the road to the vehicle while the vehicle is in motion, and can be achieved through different power transfer technologies, such as rail, overhead line, and wireless solutions. The basic technologies for power transfer from the road to vehicles in motion have been developed through various international research projects. In recent years, ERS has moved from conceptual idea to real-world application in countries such as Sweden (2016 and 2018), the United States of America (California, 2017), and Germany (2019). In addition, projects are being planned in Italy and China.
National and international freight transport in Europe is usually determined by national and EU strategies and regulations. The success of ERS implementation, especially when it comes to a transnational roll-out, depends on using these regulatory frameworks to identify areas where adaptation is needed.
The work in the CollERS project has included a consideration of ERS in national and EU transport strategies. The present report covers the identification of areas where standards are missing or have to be adapted, as well as a stakeholder dialogue (Germany, Sweden, Denmark, and the EU), e.g., by means of expert interviews at the national and EU level (industry, science, politics, and road administrations).
Recent advances in Deep Learning have led to significant performance increases on several NLP tasks; however, the models are becoming more and more computationally demanding. This paper therefore tackles the domain of computationally efficient algorithms for NLP tasks. In particular, it investigates distributed representations of n-gram statistics of texts. The representations are formed using hyperdimensional-computing-enabled embedding. These representations then serve as features, which are used as input to standard classifiers. We investigate the applicability of the embedding on one large and three small standard datasets for classification tasks using nine classifiers. The embedding achieved on-par F1 scores while decreasing the time and memory requirements severalfold compared to conventional n-gram statistics; e.g., for one of the classifiers on a small dataset, the memory reduction was 6.18 times, while the train and test speed-ups were 4.62 and 3.84 times, respectively. For many classifiers on the large dataset, the memory reduction was about 100 times, and the train and test speed-ups were over 100 times. Importantly, the usage of distributed representations formed via hyperdimensional computing allows breaking the strict dependency between the dimensionality of the representation and the n-gram size, thus opening room for tradeoffs.
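The essence of the hyperdimensional embedding can be conveyed in a short sketch: each symbol gets a random bipolar high-dimensional vector, an n-gram is bound by permuting and multiplying its symbols' vectors, and a text is the superposition of its n-gram vectors. The dimensionality and binding scheme below are illustrative assumptions; the paper's exact formation may differ.

```python
import numpy as np

D = 1000                      # dimensionality of hypervectors (illustrative)
rng = np.random.default_rng(0)
item = {c: rng.choice([-1, 1], size=D) for c in "abcdefghijklmnopqrstuvwxyz "}

def ngram_vector(ngram):
    # Bind symbols by cyclically shifting each by its position, then
    # combining with elementwise multiplication.
    v = np.ones(D, dtype=int)
    for pos, ch in enumerate(ngram):
        v = v * np.roll(item[ch], pos)
    return v

def text_embedding(text, n=3):
    # A text is the superposition (sum) of its n-gram hypervectors; the
    # fixed dimension D is independent of the n-gram size n.
    vecs = [ngram_vector(text[i:i + n]) for i in range(len(text) - n + 1)]
    return np.sum(vecs, axis=0)

e1 = text_embedding("the quick brown fox")
e2 = text_embedding("the quick brown cat")
cos = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
print(f"cosine similarity: {cos:.2f}")
```

Because D stays fixed no matter how large n grows, the memory footprint is constant, which is the tradeoff-enabling property noted above.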
The project "Stationssamhällen Småland: Verktygslåda för landsbygdsmobilitet" (Station Communities Småland: A Toolbox for Rural Mobility) aimed to address challenges related to reduced car travel, so as to ensure that the Agenda 2030 goals can be achieved while accessibility is maintained. The focus was on smaller towns with train stations. The overall objective was for the project's results to make it easier for municipalities to identify and implement sustainable mobility solutions that meet the needs of both residents and local businesses in such communities. The project involved five municipalities in Småland. Through a case-study-based approach, using methods such as site visits, interviews, and workshops with representatives of the local community and businesses, needs were mapped, a roadmap for new mobility services was produced, and a toolbox was developed that municipalities can use to plan and implement these services themselves. The project resulted in a toolbox, presented as a wiki website, which offers a process for baseline analysis, needs mapping, knowledge building, idea generation, and implementation of mobility solutions. The toolbox is intended to make it easier for other municipalities to independently improve mobility based on local needs. Insights from the project include the importance of broad support within the municipality, business participation as a catalyst for change, and the need to start from the needs of specific target groups. The project also highlights the importance of soft measures, cost-effective solutions, and collaboration across municipal boundaries to improve commuting. By focusing on the marketing of existing public transport, the optimization of public transport, and various forms of coordinated mobility, including cycling and ridesharing, concrete solutions are presented for increased accessibility and sustainable mobility in rural areas.
This text presents results from three knowledge-gathering activities on the implementation of digital tools in health and social care: a literature review, a two-part workshop series, and a questionnaire survey. The primary target group has been operational developers and project managers in regions and municipalities. The number of respondents in the questionnaire study was too low to draw any statistical conclusions, but the results, together with the workshop series, can nevertheless be used to identify areas where challenges appear to exist. Both the literature review and the questionnaire survey point to a need to develop structured evaluation models for implementation. The workshops also discussed the lack of ability to "collect evidence during the course of projects", and there was a desire for such a way of working to be developed. The survey and the workshops also point to several recurring problems during the different phases of implementation projects. In the initial phase, better analyses and anchoring work are requested. User-focused analyses are especially often missed, such as user journeys, sustainability analyses, and stakeholder and needs analyses. At the other end of the process, when systems and ways of working are to be decommissioned, there are also challenges and suggestions for improvement: for example, decisions on phasing out old solutions are often missing, and one suggestion is to ensure, already when signing a contract with a supplier, that the supplier will assist with migration at phase-out. Another recurring issue is uncertainty about, and sometimes the absence of, roles, responsibilities, and communication. This concerns not really knowing why things should be done, or the organization and the people who are to carry out the change not being sufficiently involved. It may also be that support and maintenance are not sufficiently well developed, or that it is unclear how to collaborate with suppliers. Change management is also identified as an important tool for facilitating a good implementation.
In this work, we report on a twin-core fiber sensor system that provides improved spectral efficiency, allows for multiplexing, and exhibits a low level of crosstalk. Pieces of this strongly coupled multicore fiber are used as sensors in a laser cavity incorporating a pulsed semiconductor optical amplifier (SOA). Each sensor has a unique cavity length and can be addressed individually by electrically matching the periodic gating of the SOA to the sensor's cavity roundtrip time. The interrogator acts as a laser and provides a narrow spectrum with a high signal-to-noise ratio. Furthermore, it allows the response of individual sensors to be distinguished even when their spectra overlap. Potentially, the number of interrogated sensors can be increased significantly, which is an appealing feature for multipoint sensing. © 2022, The Author(s).
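The addressing scheme lends itself to a short worked example. The following is a minimal sketch, assuming a linear cavity and a silica group index of about 1.468 (neither is stated in the abstract), of how the SOA gating frequency would be matched to a given sensor's cavity roundtrip time; the cavity lengths are purely illustrative.

```python
# Minimal sketch: time-division addressing of fiber sensors in a laser
# cavity by matching the SOA gating period to each cavity's roundtrip time.
# Assumptions (not from the abstract): linear cavity geometry, group index
# n_g = 1.468, and illustrative cavity lengths.

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_GROUP = 1.468            # assumed group index of silica fiber

def roundtrip_time(cavity_length_m: float) -> float:
    """Roundtrip time of a linear fiber cavity: light travels 2 * L."""
    return 2.0 * cavity_length_m * N_GROUP / C_VACUUM

def gating_frequency(cavity_length_m: float) -> float:
    """SOA gating frequency (fundamental) that selects this cavity."""
    return 1.0 / roundtrip_time(cavity_length_m)

for L in (100.0, 103.0, 106.0):   # illustrative, distinct cavity lengths (m)
    print(f"L = {L:6.1f} m -> f_gate = {gating_frequency(L) / 1e3:8.3f} kHz")
```

Because each cavity length maps to a distinct gating frequency, sensors can be selected electrically even when their optical spectra overlap, which is the property the abstract highlights.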
In this article, we report on a carbon-coated optical fiber that can be used simultaneously as a transmission medium and as a sensor. It consists of a standard single mode fiber (SMF) sleeved in two layers of coating, which provide protection and isolation from external elements. The inner layer is made of carbon, whereas the outer is made of polymer. When the fiber is subjected to mechanical stress, the electrical resistance of the carbon layer changes accordingly. The resulting voltage variations can be measured with high accuracy and without interfering with the light propagating through the SMF. In this work, the feasibility of this operating principle is demonstrated in a low coherence Michelson interferometer in which electrical and optical signals were measured simultaneously and compared to each other. Results indicate that the electrical measurements are as precise as the optical ones and exhibit linear behavior, reaching a sensitivity of 1.582 mV/με and detecting vibrations down to 100 mHz. © 2024 The Authors
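Since the reported response is linear with a sensitivity of 1.582 mV/με, converting the measured voltage signal to strain is a single division. The sketch below assumes only that linear relation; the readout circuit around the carbon layer is not specified in the abstract.

```python
# Minimal sketch: converting the carbon-coating voltage signal to strain
# using the sensitivity reported in the abstract (1.582 mV per microstrain).
# The readout electronics are an assumption; only the sensitivity value
# comes from the text.

SENSITIVITY_MV_PER_UE = 1.582  # mV / microstrain, from the abstract

def voltage_to_microstrain(delta_v_mv: float) -> float:
    """Map a measured voltage change (mV) to strain (microstrain),
    assuming the linear response reported in the abstract."""
    return delta_v_mv / SENSITIVITY_MV_PER_UE

print(voltage_to_microstrain(7.91))  # ~5.0 microstrain for a 7.91 mV change
```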
We benchmark the performance of segment-level metrics submitted to WMT 2023 using the ACES Challenge Set (Amrhein et al., 2022). The challenge set consists of 36K examples representing challenges from 68 phenomena and covering 146 language pairs. The phenomena range from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. For each metric, we provide a detailed profile of performance over a range of error categories as well as an overall ACES-Score for quick comparison. We also measure the incremental performance of the metrics submitted to both WMT 2023 and 2022. We find that 1) there is no clear winner among the metrics submitted to WMT 2023, and 2) performance change between the 2023 and 2022 versions of the metrics is highly variable. Our recommendations are similar to those from WMT 2022. Metric developers should focus on: building ensembles of metrics from different design families, developing metrics that pay more attention to the source and rely less on surface-level overlap, and carefully determining the influence of multilingual embeddings on MT evaluation.
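As a rough illustration of the per-phenomenon profiling described above, the hypothetical sketch below counts, for each phenomenon, how often a metric scores a good translation above an incorrect one. The official ACES-Score aggregates categories with a specific weighting defined in Amrhein et al. (2022), which is not reproduced here.

```python
# Hypothetical sketch of per-phenomenon profiling in the style described:
# for each challenge example, check whether a metric scores the good
# translation above the incorrect one, then aggregate by phenomenon.
# The official ACES-Score uses a specific category weighting
# (Amrhein et al., 2022) that is not reproduced here.

from collections import defaultdict

def profile(examples):
    """examples: iterable of (phenomenon, score_good, score_incorrect)."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for phenomenon, s_good, s_bad in examples:
        totals[phenomenon] += 1
        if s_good > s_bad:
            wins[phenomenon] += 1
    return {p: wins[p] / totals[p] for p in totals}

demo = [("hallucination", 0.91, 0.40),   # made-up metric scores
        ("hallucination", 0.55, 0.62),
        ("discourse", 0.80, 0.78)]
print(profile(demo))  # {'hallucination': 0.5, 'discourse': 1.0}
```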
Reductions in the size of instrumentation have made it possible to bring an X-ray fluorescence (XRF) analyzer into the field and into direct contact with various samples, enabling rapid identification of maritime sites polluted by heavy metal contaminants. The choice of source-sample-detector geometry plays an important role in minimizing Compton scattering noise and achieving a better signal-to-noise ratio (SNR) in XRF measurements, especially for the analysis of wet sediments. This paper presents the influence of geometrical factors on a prototype designed for in situ XRF analysis of mercury (Hg) in wet sediments using a 57Co excitation source and an X-ray spectrometer. The XRF penetrometer prototype was constructed and tested on maritime wet sediments. The influence of various geometrical arrangements on detection efficiency and SNR has been investigated using a combination of Monte Carlo simulations and laboratory experiments. The prototype was calibrated for Hg analysis using prepared wet sediment samples. The presented results show that it is possible to detect Hg via its K-shell emission, thus enabling XRF analysis of underwater sediments. Consequently, the XRF prototype has the potential to serve as an environmental screening tool for the analysis of polluted sediments with relatively high concentrations (e.g., >2880 ppm for Hg), which would benefit in situ monitoring of maritime pollution caused by heavy metals. © 2022 The Authors
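To make the geometry comparison concrete, the following sketch ranks candidate source-sample-detector arrangements by a common counting-statistics SNR (net peak counts over the standard deviation of the background). Both this figure of merit and the tally values are illustrative assumptions; the paper's exact definitions and simulation results are not given in the abstract.

```python
# Minimal sketch: comparing candidate source-sample-detector geometries by
# a counting-statistics SNR for the Hg K-line region. The SNR definition
# and the simulated count values below are assumptions for illustration;
# they are not taken from the paper.

import math

def snr(peak_counts: float, background_counts: float) -> float:
    """Net-peak SNR under Poisson statistics: net signal over the
    standard deviation of the underlying background."""
    net = peak_counts - background_counts
    return net / math.sqrt(background_counts)

# Illustrative (made-up) Monte Carlo tallies for three geometries:
geometries = {"0 deg offset": (5200, 3100),
              "45 deg offset": (4800, 1900),
              "90 deg offset": (4100, 1500)}
for name, (peak, bkg) in geometries.items():
    print(f"{name:14s} SNR = {snr(peak, bkg):5.1f}")
```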
By combining the electrochromic (EC) properties of Prussian blue (PB) and poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS), complementary EC displays manufactured by slot-die coating and screen printing on flexible plastic substrates are reported. Various display designs have been realized, resulting in displays operating in either transmissive or reflective mode. For the transmission-mode displays, the color contrast is enhanced by the complementary switching of the two EC electrodes, PB and PEDOT:PSS: both electrodes concurrently exhibit either a colorless or a blue appearance. For the displays operating in reflection mode, a white opaque electrolyte is used in conjunction with the EC properties of PB, resulting in a display device that switches between a fully white state and a blue-colored state. The development of the different device architectures, operating in either reflection or transmission mode, demonstrates a scalable manufacturing approach for all-printed EC displays that may be used in a large variety of Internet of Things applications. © 2022 The Authors.
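The complementary switching logic can be summarized as a simple two-state model: the transmissive device is dark only when both electrodes are simultaneously in their colored redox states, and clear only when both are bleached. The sketch below is hypothetical; the drive-state names are illustrative and not taken from the paper.

```python
# Hypothetical sketch of the complementary-switching logic: PB and
# PEDOT:PSS color and bleach together, so coloration of the two
# electrodes adds up rather than cancels. State names are illustrative.

def display_state(drive: str) -> str:
    """Map an (assumed) drive state to the optical state of a
    complementary PB / PEDOT:PSS transmissive cell."""
    if drive == "coloring":
        # PB in its oxidized Prussian-blue form AND PEDOT:PSS reduced
        # (deep blue): both electrodes colored -> enhanced contrast.
        return "blue (both electrodes colored)"
    elif drive == "bleaching":
        # PB reduced to Prussian white AND PEDOT:PSS oxidized
        # (transparent): both electrodes bleached.
        return "colorless (both electrodes bleached)"
    raise ValueError("expected 'coloring' or 'bleaching'")

print(display_state("coloring"))
print(display_state("bleaching"))
```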
Screen printed piezoelectric polyvinylidene fluoride-trifluoroethylene (PVDF-TrFE)-based sensors laminated between glass panes in the temperature range 80-110 °C are presented. No degradation of the piezoelectric signals is observed for the sensors laminated at 110 °C, despite approaching the Curie temperature of the piezoelectric material. The piezoelectric sensors, here monitoring force impact in smart glass applications, are characterized using a calibrated impact hammer system and standardized impact situations. Stand-alone piezoelectric sensors and piezoelectric sensors integrated on poly(methyl methacrylate) are also evaluated. The piezoelectric constants obtained from the measurements of the nonintegrated sensors are in good agreement with the literature. The piezoelectric sensor response is measured using either physical electrical contacts between the sensors and the readout electronics, or wirelessly via both noncontact capacitive coupling and a Bluetooth low-energy radio link. The developed sensor concept is finally demonstrated in smart window prototypes, in which integrated piezoelectric sensors are used to detect break-in attempts. Additionally, each prototype includes an electrochromic film to control the light transmittance of the window, a screen printed electrochromic display for status indications, and wireless communication with an external server, demonstrating a holistic approach to hybrid printed electronic systems targeting smart multifunctional glass applications.
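For the piezoelectric constants mentioned above, a calibrated impact hammer permits a simple quasi-static estimate: the charge constant is the generated charge divided by the applied peak force, d33 ≈ Q/F. The sketch below assumes this relation; the charge-integration step is omitted and the numbers are illustrative, not measured values from the paper.

```python
# Minimal sketch: estimating a piezoelectric charge constant from a
# calibrated impact-hammer test, using the quasi-static relation
# d33 ~= Q / F (generated charge over applied peak force). The values
# below are illustrative assumptions, not data from the paper.

def piezo_coefficient_pc_per_n(charge_pc: float, peak_force_n: float) -> float:
    """Charge constant in pC/N from measured charge (pC) and hammer force (N)."""
    return charge_pc / peak_force_n

# Illustrative hammer hit: 24 pC generated for a 1.2 N peak force.
print(piezo_coefficient_pc_per_n(24.0, 1.2))  # -> 20.0 pC/N, the order of
# magnitude commonly reported for PVDF-TrFE films in the literature
```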