Hardware realization of safety functions in safety-related machinery control systems can, according to EN ISO 13849-1, follow one of five distinct designated architectures. This report gives examples and guidance for choosing a designated architecture that fulfills the required risk reduction of the safety function.
Older wooden escape route doors are often a weak point in the fire safety of apartment buildings. This report presents possible solutions for the mounting of glass, protective boards and sealing strips when upgrading older wooden escape route doors with cultural heritage value.
These doors normally have a thickness of 45-50 mm, with glass in the upper part and a thinner wooden panel in the lower part. For antiquarian reasons, the interventions on the doors should be as minimally intrusive and as reversible as possible.
Different solutions for the mounting of fire-rated glass, gypsum boards and sealing strips are shown in the report. The main conclusions are:
• The glass must have fire resistance class EI 30 and be securely fastened with steel frames or steel clips
• Heavily used doors require the mounting of 12.5 mm Robust gypsum boards
• Both expanding (intumescent) strips and silicone gaskets must be mounted to stop cold and hot smoke
• The gap between the door frame and the wall must be sealed using mineral wool and expanding fire sealant
The solutions given in this report are preliminary and will later be used as a basis for small-scale and full-scale testing. The results from the fire testing will be given in a subsequent report describing the final upgrading solutions in detail.
Despite much research and many applications, glass and its use in buildings remain challenging for engineers due to the material's inherent brittleness and characteristic features such as sensitivity to stress concentrations, reduction in strength over time and with temperature, and breakage due to stresses that may build up because of thermal gradients. This paper presents the results of an original test series carried out on monolithic glass panes with dimensions of 500 × 500 mm² and different thicknesses under exposure to radiant heating. The study also includes a one-dimensional (1D) heat transfer model and a numerical, three-dimensional (3D) thermo-mechanical model, which are used to investigate in greater detail the phenomena observed during the experiments. As shown, the behaviour of glass under radiant heating is rather complex and confirms the high vulnerability of this material in building applications. The usability and potential of thermo-mechanical numerical models are discussed in light of the experimental feedback.
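As a complement to the abstract above, the following is a minimal sketch of the kind of 1D heat transfer model mentioned there: explicit finite-difference conduction through a glass pane with a radiant flux on one face. All property values, the pane thickness and the incident flux are assumptions chosen for illustration, not the values used in the paper.

```python
import numpy as np

# Illustrative 1D explicit finite-difference model of transient heat
# conduction through a glass pane heated by radiation on one face.
# All numbers below are assumed, typical-order values.
L = 0.006                   # pane thickness [m] (assumed 6 mm)
k = 1.0                     # thermal conductivity of glass [W/(m K)]
rho, cp = 2500.0, 840.0     # density [kg/m^3], specific heat [J/(kg K)]
alpha = k / (rho * cp)      # thermal diffusivity [m^2/s]
eps, sigma = 0.9, 5.67e-8   # emissivity, Stefan-Boltzmann constant
q_rad = 20e3                # incident radiant flux [W/m^2] (assumed)
h = 10.0                    # convective coefficient on both faces [W/(m^2 K)]
T_inf = 293.15              # ambient temperature [K]

n = 31
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha    # stable time step for the explicit scheme
T = np.full(n, T_inf)

def step(T):
    Tn = T.copy()
    # interior nodes: plain conduction
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    # exposed face: absorbed flux minus re-radiation and convection
    q_net = eps*q_rad - eps*sigma*(T[0]**4 - T_inf**4) - h*(T[0] - T_inf)
    Tn[0] = T[0] + 2*dt/(rho*cp*dx) * (k*(T[1] - T[0])/dx + q_net)
    # unexposed face: convection only
    Tn[-1] = T[-1] + 2*dt/(rho*cp*dx) * (k*(T[-2] - T[-1])/dx - h*(T[-1] - T_inf))
    return Tn

for _ in range(int(300 / dt)):  # 5 minutes of exposure
    T = step(T)
print(f"exposed face: {T[0]-273.15:.0f} C, unexposed face: {T[-1]-273.15:.0f} C")
```

A thermo-mechanical analysis of the kind the paper describes would additionally feed the resulting through-thickness temperature gradient into a stress model, which is beyond this sketch.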
The present paper describes a method for non-destructive testing of glass strength. Square 10 × 10 cm² samples of annealed float glass were given a controlled defect in the centre of the atmospheric side using Vickers microindentation-induced cracking with forces of 2 N, 5 N and 10 N, and were compared with an un-indented reference. The samples were tested non-destructively using a nonlinear acoustic wave method, yielding defect values. The average defect value was found to correlate linearly with the indentation force in a log–log relationship. The samples were subsequently tested in a ring-on-ring setup that produces an equibiaxial stress state. The indentation-induced cracking gave practically realistic strength values in the range of 45 to 110 MPa. The individual failure stresses as a function of normalized defect value show linear trends, with approximately half of the data within the 95% confidence limits. In summary, this study provides an initial proof of concept for non-destructive testing of the strength of glass.
RF power amplifier demonstrators, each containing one GaN-on-SiC HEMT (CHZ015AQEG from UMS) in an SMD quad-flat no-leads (QFN) package, were subjected to thermal cycles (TC) and power cycles (PC) followed by electrical, thermal and structural evaluation. Two types of solder, Sn63Pb36Ag2 and lead-free SnAgCu (SAC305), and two types of TIM material (NanoTIM and Tgon™ 805) for attaching the PCB to a liquid cold plate were tested for thermomechanical reliability. Changes in the electrical performance of the devices, namely a reduction of the current saturation value, a threshold-voltage shift, an increase of the leakage current and degradation of the HF performance, were observed as a result of accumulated current stress during PC. No significant changes in the investigated solder or TIM materials were observed.
For practical decisions on common recurring maintenance actions, the information from routine inspections forms a decision basis for the bridge manager. It is often difficult to assess whether this information is sufficient for deciding on a repair action, or whether more information is needed. For many bridges the information supporting decisions may be limited, although these bridges cause large yearly maintenance costs for society. The purpose of the study presented in this paper is to show how two models for decision making based on Bayesian decision theory, a point-in-time decision model and a sequential updating decision model, can be used to improve common maintenance decisions. The models use information from routine inspections and incorporate time-dependent aspects such as material degradation and the time value of money. The focus is on presenting the methodology through a case study of a concrete bridge in Sweden where the edge beams may have to be replaced. Three assessment approaches are considered: (i) no assessment, (ii) desktop evaluation and (iii) measurements. The main finding is that sequential updating provides a higher benefit than a point-in-time decision, and thus a higher Value of Information. This value becomes even higher when measurements are selected for the assessment. The results also show that the edge beams should be replaced. The general approach presented is applicable to many decision scenarios related to the maintenance of deteriorating structures.
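To make the Bayesian pre-posterior reasoning above concrete, here is a minimal Value of Information sketch for a single point-in-time repair decision. All states, probabilities and costs are hypothetical placeholders, not values from the paper's case study.

```python
import numpy as np

# Minimal pre-posterior (Value of Information) sketch for a point-in-time
# repair decision. States, probabilities and costs are hypothetical.
states = ["sound", "deteriorated"]
prior = np.array([0.7, 0.3])

# cost[a, s]: action a in {0: replace, 1: do nothing}, state s (e.g. kSEK)
cost = np.array([[500.0, 500.0],     # replace: same cost either way
                 [0.0, 2000.0]])     # do nothing: expensive if deteriorated

# likelihood of inspection outcome given state: P(outcome | state)
like = np.array([[0.9, 0.1],   # "no indication" | sound, deteriorated
                 [0.1, 0.9]])  # "indication"    | sound, deteriorated

# expected cost of the best action with prior information only
ec_prior = min(cost @ prior)

# pre-posterior: expected cost when acting on the inspection outcome
ec_post = 0.0
for z in range(2):
    p_z = like[z] @ prior               # marginal probability of outcome z
    posterior = like[z] * prior / p_z   # Bayes update
    ec_post += p_z * min(cost @ posterior)

voi = ec_prior - ec_post
print(f"prior-only expected cost: {ec_prior:.0f}")
print(f"with inspection:          {ec_post:.0f}")
print(f"value of information:     {voi:.0f}")
```

A sequential updating model of the kind the paper compares against would repeat this update at each inspection, carrying the posterior forward and discounting future costs with the time value of money.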
Purpose: Upcoming as well as mature industries face pressure to successfully manage operational excellence while, at the same time, driving and managing innovation. The ability of quality management concepts and practices to tackle this challenge has been questioned. It has even been suggested that there is a need to provide and promote an updated, and even re-branded, version of Total Quality Management, merging quality management (QM) and innovation management (IM). Can such a shift actually be spotted? The purpose of this paper is to explore whether there are any signs suggesting that QM and IM are indeed about to merge. Design/methodology/approach: The study is based on literature reviews, document studies and interviews. Findings: The paper highlights three signs indicating that QM and IM are indeed approaching each other, and that the movement is driven from both fields, e.g. in the work with new ISO standards and the Toyota Kata framework. Originality/value: The indicated development has fundamental and extensive practical implications. It will, for example, have to be followed by a similar merging of the two fields in the educational system and in the competences of future managers.
Purpose: If not now, when? The UN Sustainable Development Goals (SDGs) are getting increasing attention, and there is an acknowledgement that the challenges ahead, as well as the solutions needed, are often complex. In contrast, the historical strengths of Quality Management (QM) lie in situations where cause-effect relations can be analyzed and understood, where technical expertise can provide the answer, where the application of "best practice" is helpful, and where order is a virtue. When dealing with complexity, leaders who are tempted to impose this kind of command-and-control style will often fail. Success rather comes from setting the stage, stepping back a bit, allowing patterns to emerge, curiously tracking what takes place, spreading what is being learned, and scaling up success. Such leadership and practice have been referred to as the fourth, called-for "Emergence Paradigm" of QM. The purpose of this paper is to contribute knowledge concerning how the Emergence Paradigm of QM comes into play when getting organizations and the world to take action on Agenda 2030. Methodology/Approach: The paper is based on dialogic action research and presents the case and emergent process of 60 Swedish authorities getting to collective action on Agenda 2030. Findings: The paper highlights how QM may contribute to realizing Agenda 2030 by dynamically combining the strengths of the past paradigms with new practices and mind-sets related to complexity and emergence. Value of the paper: The paper provides new insights that may help in taking the bold and transformative steps urgently needed to shift the world onto a sustainable and resilient path.
"If not now, when?" The UN's sustainable development goals (SDGs) are gaining increasing attention, and there is wide acknowledgement that the challenges ahead, as well as the solutions needed, are often complex. In contrast, the historical strength of Quality Management (QM) lies in situations where cause-effect relations can be analysed and understood, where technical expertise can provide the answer, where the application of "best practice" is helpful, and where order is a virtue. When dealing with complexity, leaders who tend to impose this kind of command-and-control style will often fail. Success comes instead from setting the stage, stepping back a bit, allowing patterns to emerge, curiously tracking what takes place, spreading what was learned, and scaling up success. Such leadership and practice has been referred to as the fourth, called-for "Emergence Paradigm" of QM. The purpose of this paper is to contribute knowledge about how the Emergence Paradigm of QM comes into play when getting organisations and the world to take action on the 2030 Agenda. This paper is conceptual but includes experiences from dialogic action research in an emergent process of 60 Swedish authorities getting into collective action on the 2030 Agenda. As a result, the paper highlights how QM may contribute to realising the 2030 Agenda by dynamically combining the strengths of the past QM paradigms with new practices and mindsets related to complexity and emergence. It also provides new insights that may help when applying QM to take the bold and transformative steps urgently needed to shift the world onto a sustainable and resilient path.
Two Swedish researchers and practitioners use a well-known fairy tale to bring perspective to the start and development of an AI community in mid-Sweden. They use the fairy tale of the Three Billy Goats Gruff as a basis and metaphor for sharing their story.
Purpose: Appreciative Inquiry (AI) is an inquiry into the "best of" what already exists in a system. Applying AI at the start of a design process yields a process that is very different from traditional design approaches. Simply put, you obtain a design process whose point of departure is identified strengths: "the best of what is". This is in sharp contrast to traditional design processes, which typically start from identified problems. A design process based on the best of what is, in other words, achieves "Appreciative Design". The overall purpose of this paper is to explore and contribute to a process of putting Appreciative Design into practice. More specifically, the paper aims to introduce a process for Appreciative Design to be used in the development of higher education, together with insights from applying it in practice. Methodology/approach: The methodology chosen can be described as action research. The researchers have, in their role as educational leaders, developed and applied a process for Appreciative Design within the context of the entrepreneurial educational program "Skarp Åre - Business and product development" at Mid Sweden University. Findings: The paper introduces a process for Appreciative Design to be used in the development of higher education, together with insights from applying it in practice. The process introduced is referred to as Appreciative Course Evaluation and Design (ACED). Furthermore, applying the ACED process to the "Skarp Åre - Business and product development" educational program reveals a number of benefits in comparison with conventional practice. The benefits found include higher commitment by the course participants, lower risk in the design process, and increased student involvement in the evaluation and design process. Value of the paper: The paper contributes in general to increasing the understanding of how the strengths and principles of Appreciative Inquiry can be incorporated into design processes. The case study performed also contributes new insights into how and why to apply the introduced ACED process to the evaluation and design of higher education. Our hope is that the insights presented will inspire future research and application of Appreciative Design not only to the evaluation and design of higher education, but also to the evaluation and design of products, services, organizations and society.
Purpose: The purpose of this paper is to contribute knowledge concerning how to drive, generate and energize change and development in social systems. A potential key to meeting this challenge is the strength-based change management approach called appreciative inquiry (AI). A central component of AI is the "AI interview", which has evolved into a distinct activity that enables the past and the future to be used as a generative source for ongoing learning about strengths, opportunities, aspirations and results. The AI interview has in previous studies shown an often surprisingly high ability to generate development and change in social systems. However, the understanding of the generative "mystery" of the AI interview, focusing on the value experienced by both the people conducting the interview and those being interviewed, still needs further exploration. Furthermore, the evident generativity of the AI interview has not yet been integrated to any large extent into quality management. This paper aims to change that. Design/methodology/approach: The researchers studied the experience of conducting an AI interview based on feedback from 97 AI students at Mid Sweden University. Findings: Among the results, eight categories of value are identified. Originality/value: The paper contributes new knowledge concerning the values experienced during participation in an AI interview. The paper also highlights ideas on how the generativity of the AI interview could be increasingly integrated into quality management.
The paper presents the basis and experimental results of a non-destructive method aimed at determining the presence of large surface cracks in glass samples through measurements with NAW® (Nonlinear Acoustic Wave) technology. The method is based on an ultrasonic wave transmitted through the material, from which the nonlinear content of the signal can be analysed. A sample containing defects produces nonlinearities in the form of distortions, such as higher-order harmonics, that are detected. Nonlinearities in the signal are primarily formed at crack tips, and the amount of nonlinearity is proportional to the amount of damage, or defects, in the investigated sample. The result of the measurement and evaluation, which takes only a few seconds, is a damage value that is easy to understand and to use for immediate application. A number of preliminary test results and comparisons with destructive testing for various test setups, as well as a recent test strategy including defects fabricated with a nanoindenter, are discussed.
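As an illustration of the harmonic-distortion principle described above, the following sketch computes a generic scalar damage value as the energy of the second and third harmonics relative to the fundamental in a transmitted signal. It is a textbook nonlinear-acoustics calculation on synthetic data, not the proprietary NAW® algorithm, and all signal parameters are assumed.

```python
import numpy as np

# Generic harmonic-ratio "damage value" from a transmitted ultrasonic
# signal. Sampling rate, excitation frequency and harmonic amplitudes
# are assumed; the signal is synthetic.
fs = 10e6        # sampling rate [Hz]
f0 = 200e3       # excitation frequency [Hz]
t = np.arange(0, 2e-3, 1/fs)

# synthetic received signal: fundamental plus weak harmonics from a crack
signal = (np.sin(2*np.pi*f0*t)
          + 0.02*np.sin(2*np.pi*2*f0*t)
          + 0.01*np.sin(2*np.pi*3*f0*t)
          + 0.005*np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1/fs)

def amp(f):
    """Peak spectral amplitude in a narrow band around frequency f."""
    band = (freqs > 0.95*f) & (freqs < 1.05*f)
    return spectrum[band].max()

# higher-order harmonic energy relative to the fundamental
damage_value = (amp(2*f0)**2 + amp(3*f0)**2) / amp(f0)**2
print(f"damage value: {damage_value:.4f}")
```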
Uncertainty intervals for measurement results are typically reported as symmetric intervals around the measured value. However, at large relative standard uncertainties (above approximately 15 %-20 %), the asymmetry of the uncertainty intervals must be considered. Here, an expression is presented for calculating uncertainty intervals that handles asymmetry when the relative standard uncertainty is independent of the measurand level. The expression is based on a power transformation (x^B) of the measurement results in order to achieve results that have a symmetric and approximately normal distribution. Uncertainty intervals are then calculated in the transformed space and back-transformed to the original space. The transformation includes a parameter, B, that needs to be optimized; this optimization can be based on real results, on modelling of results, or on judgement. Two important reference points are B equal to 1, corresponding to an approximately normal distribution of the original measurement results, and B approaching 0, corresponding to an approximately log-normal distribution. Comparisons are made with uncertainty intervals calculated using other expressions that assume a normal or log-normal distribution of the measurement results. The approach is demonstrated with several examples from chemical analysis.
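A minimal sketch of the interval construction described above, assuming a coverage factor k = 2 and illustrative numbers: transform with y = x^B, form a symmetric interval in the transformed space, and back-transform.

```python
import numpy as np

def asymmetric_interval(x, u_rel, B, k=2.0):
    """Coverage interval for result x with relative standard uncertainty
    u_rel, using the power transformation y = x**B (0 < B <= 1)."""
    y = x**B
    u_y = B * y * u_rel          # first-order propagation: u(y) = B*x**B*u_rel
    lo = (y - k*u_y)**(1/B)      # symmetric interval in transformed space,
    hi = (y + k*u_y)**(1/B)      # back-transformed to the original space
    return lo, hi

def lognormal_interval(x, u_rel, k=2.0):
    """B -> 0 limit of the transformation: multiplicative (log-normal)
    interval, using the first-order approximation u(ln x) ~ u_rel."""
    f = np.exp(k * u_rel)
    return x / f, x * f

x, u_rel = 100.0, 0.25   # measured value and 25 % relative uncertainty
print(asymmetric_interval(x, u_rel, B=1.0))   # symmetric, normal case
print(asymmetric_interval(x, u_rel, B=0.5))   # intermediate asymmetry
print(lognormal_interval(x, u_rel))           # B -> 0 limit
```

At B = 1 the interval is symmetric (50, 150); as B decreases the interval becomes increasingly asymmetric around the measured value, approaching the multiplicative log-normal interval.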
The method Variation Mode and Effect Analysis (VMEA) is successfully implemented for the AGMA-based gear design of the rack-and-pinion mechanism. The rack and pinion is a feature of the Ocean Harvesting Technologies (OHT) gravity accumulator device, whose purpose is to make the electrical power output to the grid more uniform. This is a novel technology for which previous design experience is absent. The VMEA method is therefore useful for incorporating all known uncertainties in order to estimate the uncertainty and reliability of the technology. This allows adequate safety factors to be set so that the desired reliability can be achieved.
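To illustrate the kind of uncertainty bookkeeping VMEA performs, here is a minimal probabilistic sketch in which sensitivity-weighted variance contributions are summed and converted into a statistical safety factor. The uncertainty sources, numbers and reliability target are hypothetical, not those of the OHT study.

```python
import numpy as np

# Minimal probabilistic VMEA sketch on a log-load/log-strength scale.
# (name, sensitivity c_i, standard uncertainty u_i); all values hypothetical.
sources = [
    ("gear load scatter",      1.0, 0.10),
    ("material strength",      1.0, 0.08),
    ("AGMA model uncertainty", 1.0, 0.15),
    ("manufacturing geometry", 0.5, 0.06),
]

tau2 = sum((c * u)**2 for _, c, u in sources)  # total variance
tau = np.sqrt(tau2)

z = 1.64  # one-sided ~95 % reliability quantile (assumed target)
safety_factor = np.exp(z * tau)

for name, c, u in sources:
    share = (c * u)**2 / tau2
    print(f"{name:25s} contributes {share:5.1%} of the variance")
print(f"total log-scale uncertainty: {tau:.3f}")
print(f"required safety factor: {safety_factor:.2f}")
```

The per-source variance shares are what makes such an analysis actionable: they point out which uncertainty dominates and hence where refined data or design changes pay off most.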
The uncertainty and reliability analysis is performed for different OHT designs and methods, and the resulting reliability is calculated. This calculation can be used as a basis for further analysis as more design details are determined and modifications are made, thus allowing a more optimized and reliable design.
Fire is a major threat in the petroleum industry. However, little has been published about the fire-related incidents that have occurred in the Norwegian petroleum sector. To gain more knowledge, data from 985 incidents in the period 1997-2014 have been analysed. Examples of factors studied are the type of facility involved, the area or system involved, the consequences and the severity level. The analysis of the fire incidents reveals that even though many incidents are reported, the large majority of these have not posed risks of severe fire accidents. It has also provided valuable information regarding possible dangerous situations, commonly involved areas, types of equipment and types of activity. Twenty-nine percent of the incidents were false alarms, which must be regarded as a high number in an industry where any production stop can be extremely costly.
Canonical deep learning-based Remaining Useful Life (RUL) prediction relies on supervised learning methods, which in turn require large data sets of run-to-failure data to ensure model performance. In a large class of cases, run-to-failure data are difficult to collect in practice, as it may be expensive and unsafe to operate assets until failure. As such, there is a need to leverage data that are not run-to-failure but may still contain some measurable, and thus learnable, degradation signal. In this paper, we propose utilizing self-supervised learning as a pretraining step to learn representations of the data that enable efficient training on the downstream task of RUL prediction. The self-supervised learning task chosen is time series sequence ordering, a task that involves constructing tuples, each consisting of $n$ sequences sampled from the time series and reordered with some probability $p$. Subsequently, a classifier is trained on the resulting binary classification task: distinguishing between correctly ordered and shuffled tuples. The classifier's weights are then transferred to the RUL model and fine-tuned using run-to-failure data. We show that the proposed self-supervised learning scheme can retain performance when training on a fraction of the full data set. In addition, we show indications that self-supervised learning as a pretraining step can enhance the performance of the model even when training on the full run-to-failure data set. To conduct our experiments, we use a data set of simulated run-to-failure turbofan jet engines.
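A minimal sketch of the sequence-ordering pretext task described above: tuples of $n$ consecutive sub-sequences are sampled from a time series and shuffled with probability $p$. The data, sizes and feature count below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ordering_dataset(series, n=3, seq_len=30, p=0.5, num_tuples=1000):
    """series: array of shape (T, features). Returns X of shape
    (num_tuples, n, seq_len, features) and binary labels y
    (1 = correctly ordered, 0 = shuffled)."""
    T = series.shape[0]
    X, y = [], []
    for _ in range(num_tuples):
        start = rng.integers(0, T - n * seq_len)
        segs = [series[start + i*seq_len : start + (i+1)*seq_len]
                for i in range(n)]
        if rng.random() < p:
            order = rng.permutation(n)
            while (order == np.arange(n)).all():  # force an actual shuffle
                order = rng.permutation(n)
            segs = [segs[i] for i in order]
            y.append(0)
        else:
            y.append(1)
        X.append(np.stack(segs))
    return np.stack(X), np.array(y)

# placeholder sensor data standing in for a run-to-failure trajectory
series = rng.standard_normal((5000, 14))
X, y = make_ordering_dataset(series)
print(X.shape, y.mean())   # (1000, 3, 30, 14), ~0.5 correctly ordered
```

A binary classifier, for example a recurrent encoder with a classification head, would then be trained on (X, y), and its encoder weights transferred to the RUL network for fine-tuning on run-to-failure data.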
Remaining useful life prediction models are a central aspect of developing modern and capable prognostics and health management systems. Recently, such models have increasingly become data-driven and based on various machine learning techniques, in particular deep neural networks. Such models are notoriously "data hungry", i.e., a substantial amount of diverse training data is needed to get adequate performance. However, in several domains in which one would like to deploy data-driven remaining useful life models, there is a lack of data, or data are distributed among several actors. Often these actors, for various reasons, cannot share data among themselves. In this paper, a method for collaborative training of remaining useful life models based on federated learning is presented. In this setting, actors do not need to share locally held secret data, only model updates. Model updates are aggregated by a central server and subsequently sent back to each of the clients, until convergence. There are numerous strategies for aggregating clients' model updates, and in this paper two strategies are explored: 1) federated averaging and 2) federated learning with personalization layers. Federated averaging is the common baseline federated learning strategy, where the clients' models are averaged by the central server to update the global model. Federated averaging has been shown to have a limited ability to deal with non-identically and independently distributed data. To mitigate this problem, federated learning with personalization layers, a strategy similar to federated averaging but where each client is allowed to append custom layers to its local model, is explored. The two federated learning strategies are evaluated on two datasets: 1) run-to-failure trajectories from power cycling of silicon-carbide metal-oxide-semiconductor field-effect transistors, and 2) C-MAPSS, a well-known simulated dataset of turbofan jet engines. Two neural network architectures commonly used in remaining useful life prediction, long short-term memory with multi-layer perceptron feature extractors, and convolutional gated recurrent units, are used for the evaluation. It is shown that similar or better performance is achieved when using federated learning compared to when the model is trained only on local data.
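A minimal sketch of the federated averaging round described above: clients train locally on private data, and the server forms a sample-count-weighted average of their parameters. The parameter shapes and the local-update step are placeholders standing in for the recurrent networks in the paper.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: list over clients, each a list of parameter arrays.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    global_weights = []
    for layer_idx in range(len(client_weights[0])):
        # weight each client's layer by its share of the total data
        layer = sum(w[layer_idx] * (s / total)
                    for w, s in zip(client_weights, client_sizes))
        global_weights.append(layer)
    return global_weights

def local_update(global_weights):
    """Placeholder for one round of local training on a client's
    private data; here it just perturbs the received weights."""
    return [w - 0.01 * np.random.randn(*w.shape) for w in global_weights]

# three clients with differently sized private datasets
global_weights = [np.zeros((4, 4)), np.zeros(4)]
sizes = [1200, 300, 500]
for round_ in range(5):
    updates = [local_update(global_weights) for _ in sizes]
    global_weights = federated_average(updates, sizes)
print([w.shape for w in global_weights])
```

Federated learning with personalization layers would exclude each client's private final layers from this average, aggregating only the shared layers, which is what lets it cope better with non-identically distributed client data.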