Publications (10 of 76)
Helali Moghadam, M., Borg, M., Saadatmand, M., Mousavirad, S., Bohlin, M. & Lisper, B. (2024). Machine learning testing in an ADAS case study using simulation-integrated bio-inspired search-based testing. Journal of Software: Evolution and Process (5), Article ID e2591.
Machine learning testing in an ADAS case study using simulation-integrated bio-inspired search-based testing
2024 (English). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, no. 5, article id e2591. Article in journal (Refereed). Published.
Abstract [en]

This paper presents an extended version of Deeper, a search-based simulation-integrated test solution that generates failure-revealing test scenarios for testing a deep neural network-based lane-keeping system. In the newly proposed version, we utilize a new set of bio-inspired search algorithms, genetic algorithm (GA), (μ + λ) and (μ, λ) evolution strategies (ES), and particle swarm optimization (PSO), that leverage a quality population seed and domain-specific crossover and mutation operations tailored for the representation model used for modeling the test scenarios. In order to demonstrate the capabilities of the new test generators within Deeper, we carry out an empirical evaluation and comparison with regard to the results of five participating tools in the cyber-physical systems testing competition at SBST 2021. Our evaluation shows the newly proposed test generators in Deeper not only represent a considerable improvement on the previous version but also prove to be effective and efficient in provoking a considerable number of diverse failure-revealing test scenarios for testing an ML-driven lane-keeping system. They can trigger several failures while promoting test scenario diversity, under a limited test time budget, high target failure severity, and strict speed limit constraints.
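As an illustration of the kind of search loop such generators run, the sketch below shows a minimal genetic-algorithm variant over road scenarios. The road representation, the operators, and the simulate_deviation() stub are assumptions made for this example; they are not the Deeper implementation.

```python
# Illustrative sketch only: a minimal genetic-algorithm loop over road test
# scenarios. The road representation, operators, and the simulate_deviation()
# stub are assumptions for illustration, not the Deeper code.
import random
from typing import List, Tuple

RoadPoints = List[Tuple[float, float]]  # control points defining a road

def simulate_deviation(road: RoadPoints) -> float:
    """Placeholder for a simulator run returning the maximum deviation of the
    lane-keeping agent from the lane centre (higher = closer to a failure)."""
    # Hypothetical proxy: sharper geometry -> larger assumed deviation.
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(road, road[1:])) * random.uniform(0.8, 1.2)

def mutate(road: RoadPoints, sigma: float = 5.0) -> RoadPoints:
    """Domain-specific mutation: perturb one control point."""
    i = random.randrange(len(road))
    x, y = road[i]
    child = list(road)
    child[i] = (x + random.gauss(0, sigma), y + random.gauss(0, sigma))
    return child

def crossover(a: RoadPoints, b: RoadPoints) -> RoadPoints:
    """One-point crossover on the control-point sequences."""
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def random_road(n_points: int = 6) -> RoadPoints:
    return [(i * 30.0 + random.uniform(-10, 10), random.uniform(-20, 20))
            for i in range(n_points)]

def genetic_search(pop_size: int = 20, generations: int = 30) -> RoadPoints:
    population = [random_road() for _ in range(pop_size)]  # could be a seeded population
    for _ in range(generations):
        scored = sorted(population, key=simulate_deviation, reverse=True)
        parents = scored[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=simulate_deviation)

if __name__ == "__main__":
    print("Most failure-prone candidate road:", genetic_search())
```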

Place, publisher, year, edition, pages
John Wiley and Sons Ltd, 2024
Keywords
advanced driver assistance systems, deep learning, evolutionary computation, lane-keeping system, machine learning testing, search-based testing, Automobile drivers, Biomimetics, Budget control, Deep neural networks, Embedded systems, Genetic algorithms, Learning systems, Particle swarm optimization (PSO), Software testing, Case-studies, Lane keeping, Machine-learning, Software Evolution, Software process, Test scenario
National Category
Software Engineering
Identifiers
urn:nbn:se:ri:diva-65687 (URN), 10.1002/smr.2591 (DOI), 2-s2.0-85163167144 (Scopus ID)
Note

 Correspondence Address: M.H. Moghadam; Smart Industrial Automation, RISE Research Institutes of Sweden, Västerås, Stora Gatan 36, 722 12, Sweden;  

This work has been funded by Vinnova through the ITEA3 European IVVES (https://itea3.org/project/ivves.html), H2020-ECSEL European AIDOaRT (https://www.aidoart.eu/), and InSecTT (https://www.insectt.eu/) projects. Furthermore, the project received partial financial support from the SMILE III project financed by Vinnova, FFI (Fordonsstrategisk forskning och innovation), under grant number 2019-05871.

Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2024-06-07. Bibliographically approved.
Helali Moghadam, M., Saadatmand, M., Borg, M., Bohlin, M. & Lisper, B. (2022). An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning. Software quality journal, 127-159
An autonomous performance testing framework using self-adaptive fuzzy reinforcement learning
2022 (English). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, p. 127-159. Article in journal (Refereed). Published.
Abstract [en]

Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to automated generation of performance test cases mainly rely on source code analysis, system model analysis, or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process could instead be learned by the testing system, then test automation without advanced performance models would be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models.
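A minimal Q-learning sketch, under an assumed state/action encoding and a stubbed system under test, conveys the core loop such a framework builds on; SaFReL itself adds fuzzy state representation, self-adaptive learning parameters, and a transfer-learning phase not shown here.

```python
# Minimal Q-learning sketch in the spirit of the framework described above.
# The state/action encoding, the measure_response_time() stub, and all
# thresholds are illustrative assumptions, not the SaFReL implementation.
import random
from collections import defaultdict

ACTIONS = ["reduce_cpu", "reduce_memory", "no_op"]
TARGET_RT = 2.0  # assumed response-time requirement (seconds)

def measure_response_time(cpu: float, mem: float) -> float:
    """Placeholder for executing the system under test under the given
    resource limits and measuring its response time."""
    return 0.5 / max(cpu, 0.05) + 0.3 / max(mem, 0.05)

def discretize(cpu: float, mem: float) -> tuple:
    return (round(cpu, 1), round(mem, 1))  # a fuzzifier would replace this step

def q_learning(episodes: int = 200, alpha: float = 0.3, gamma: float = 0.9,
               epsilon: float = 0.2) -> dict:
    q = defaultdict(float)
    for _ in range(episodes):
        cpu, mem = 1.0, 1.0  # start with fully available resources
        for _ in range(20):
            state = discretize(cpu, mem)
            action = (random.choice(ACTIONS) if random.random() < epsilon
                      else max(ACTIONS, key=lambda a: q[(state, a)]))
            if action == "reduce_cpu":
                cpu = max(cpu - 0.1, 0.05)
            elif action == "reduce_memory":
                mem = max(mem - 0.1, 0.05)
            rt = measure_response_time(cpu, mem)
            # Reward approaching, and especially reaching, the breaking point.
            reward = 10.0 if rt >= TARGET_RT else rt / TARGET_RT
            nxt = discretize(cpu, mem)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            if rt >= TARGET_RT:
                break  # breaking point reached for this episode
    return q
```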

Place, publisher, year, edition, pages
Springer, 2022
Keywords
Autonomous testing, Performance testing, Reinforcement learning, Stress testing, Test case generation, Automation, Computer programming languages, Testing, Transfer learning, Automated generation, Optimal performance, Performance Model, Performance testing framework, Performance tests, Simulated performance, Software systems, Software testing
National Category
Computer Systems
Identifiers
urn:nbn:se:ri:diva-52628 (URN), 10.1007/s11219-020-09532-z (DOI), 2-s2.0-85102446552 (Scopus ID)
Note

Funding text 1: This work has been supported by and has received partial funding from the TESTOMAT, XIVT, IVVES, and MegaM@Rt2 European projects.

Available from: 2021-03-25. Created: 2021-03-25. Last updated: 2023-10-04. Bibliographically approved.
Andersson, T., Bohlin, M., Olsson, T. & Ahlskog, M. (2022). Comparison of Machine Learning's and Humans' Ability to Consistently Classify Anomalies in Cylinder Locks. In: Kim, Duck Young; von Cieminski, Gregor; Romero, David (Eds.), APMS 2022: Advances in Production Management Systems. Smart Manufacturing and Logistics Systems: Turning Ideas into Action (part of the IFIP Advances in Information and Communication Technology book series, IFIP AICT, volume 663). Paper presented at APMS 2022: Advances in Production Management Systems. Smart Manufacturing and Logistics Systems: Turning Ideas into Action (pp. 27-34). Springer Nature Switzerland.
Comparison of Machine Learning's and Humans' Ability to Consistently Classify Anomalies in Cylinder Locks
2022 (English). In: APMS 2022: Advances in Production Management Systems. Smart Manufacturing and Logistics Systems: Turning Ideas into Action (part of the IFIP Advances in Information and Communication Technology book series, IFIP AICT, volume 663) / [ed] Kim, Duck Young; von Cieminski, Gregor; Romero, David, Springer Nature Switzerland, 2022, p. 27-34. Conference paper, Published paper (Refereed).
Abstract [en]

Historically, the quality of cylinder locks has been tested manually by human operators after full assembly. The frequency and the characteristics of the testing procedure for these locks wear the operators' wrists and lead to varying results of the quality control. Consistency in the quality control is an important factor for the expected lifetime of the locks, which is why the industry seeks an automated solution. This study evaluates how consistently the operators can classify a collection of locks, using their tactile sense, compared to a more objective approach using torque measurements and Machine Learning (ML). These locks were deliberately chosen because they are prone to receive inconsistent classifications, which means that there is no ground truth for how to classify them. The ML algorithms were therefore evaluated with two different labeling approaches: the first based on the results from the operators, using their tactile sense to classify locks as 'working' or 'faulty', and the second letting an unsupervised learner create two clusters of the data, which were then labeled by an expert through visual inspection of the torque diagrams. The results show that an ML solution, trained with the second approach, can classify mechanical anomalies based on torque data more consistently than operators using their tactile sense. These findings are a crucial milestone for the further development of a fully automated test procedure that has the potential to increase the reliability of the quality control and remove an injury-prone task from the operators.
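The sketch below illustrates the second labeling approach in miniature: cluster simple torque-curve features into two groups, then train a classifier on the cluster labels. KMeans, the random-forest classifier, the features, and the synthetic data are stand-ins chosen for the example; the paper does not prescribe them.

```python
# Illustrative sketch of the cluster-then-label approach described above.
# KMeans, the features, and the synthetic data are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Each row: assumed summary features of one lock's torque curve
# (e.g. mean torque, peak torque, variance).
normal = rng.normal([1.0, 1.5, 0.1], 0.05, size=(80, 3))
anomalous = rng.normal([1.3, 2.2, 0.4], 0.10, size=(20, 3))
X = np.vstack([normal, anomalous])

# Step 1: unsupervised grouping into two clusters.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: an expert would inspect representative torque diagrams per cluster
# and name them 'working' / 'faulty'; here we simply keep the cluster ids and
# train a classifier that reproduces them consistently.
clf = RandomForestClassifier(random_state=0).fit(X, clusters)
print("Consistency on training data:", clf.score(X, clusters))
```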

Place, publisher, year, edition, pages
Springer Nature Switzerland, 2022
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-63141 (URN), 10.1007/978-3-031-16407-1_4 (DOI), 2-s2.0-85140472723 (Scopus ID)
Conference
APMS 2022: Advances in Production Management Systems. Smart Manufacturing and Logistics Systems: Turning Ideas into Action
Available from: 2023-01-25. Created: 2023-01-25. Last updated: 2023-01-25. Bibliographically approved.
Helali Moghadam, M., Hamidi, G., Borg, M., Saadatmand, M., Bohlin, M., Lisper, B. & Potena, P. (2021). Performance Testing Using a Smart Reinforcement Learning-Driven Test Agent. In: 2021 IEEE Congress on Evolutionary Computation (CEC). Paper presented at 2021 IEEE Congress on Evolutionary Computation (CEC) (pp. 2385-2394).
Performance Testing Using a Smart Reinforcement Learning-Driven Test Agent
2021 (English). In: 2021 IEEE Congress on Evolutionary Computation (CEC), 2021, p. 2385-2394. Conference paper, Published paper (Refereed).
Abstract [en]

Performance testing with the aim of generating an efficient and effective workload to identify performance issues is challenging. Many automated approaches rely mainly on analyzing system models or source code, or on extracting the usage pattern of the system during execution. However, such information and artifacts are not always available. Moreover, not all transactions within a generated workload impact the performance of the system in the same way; a finely tuned workload could therefore accomplish the test objective more efficiently. Model-free reinforcement learning is widely used for finding the optimal behavior to accomplish an objective in many decision-making problems without relying on a model of the system. This paper proposes that if the optimal policy (way) for generating a test workload to meet a test objective can be learned by a test agent, then efficient test automation is possible without relying on system models or source code. We present a self-adaptive reinforcement learning-driven load testing agent, RELOAD, that learns the optimal policy for test workload generation and generates an effective workload efficiently to meet the test objective. Once the agent learns the optimal policy, it can reuse the learned policy in subsequent testing activities. Our experiments show that the proposed intelligent load test agent can accomplish the test objective with lower test cost than common load testing procedures, resulting in higher test efficiency.
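The sketch below conveys the idea of an RL-driven workload tuner under heavily simplified assumptions: the transaction types, the run_load() stub, and the objective are invented for the example and do not reflect RELOAD's actual design.

```python
# Sketch of an RL-driven workload tuner: the agent adjusts the rate of each
# transaction type until an assumed test objective (a target average response
# time) is met. The environment stub and encodings are illustrative.
import random
from collections import defaultdict

TRANSACTIONS = ["browse", "search", "checkout"]  # hypothetical transaction types
TARGET_RT = 1.5  # assumed objective: drive average response time to this value

def run_load(rates: dict) -> float:
    """Placeholder for applying the workload and measuring avg response time."""
    load = rates["browse"] * 0.002 + rates["search"] * 0.004 + rates["checkout"] * 0.01
    return 0.2 + load

def learn_workload(episodes: int = 300) -> dict:
    q = defaultdict(float)
    actions = [(t, d) for t in TRANSACTIONS for d in (+10, -10)]
    for _ in range(episodes):
        rates = {t: 20 for t in TRANSACTIONS}
        for _ in range(30):
            state = tuple(sorted(rates.items()))
            act = (random.choice(actions) if random.random() < 0.2
                   else max(actions, key=lambda a: q[(state, a)]))
            t, delta = act
            rates[t] = max(rates[t] + delta, 0)
            rt = run_load(rates)
            reward = -abs(rt - TARGET_RT)          # closer to the objective is better
            nxt = tuple(sorted(rates.items()))
            best_next = max(q[(nxt, a)] for a in actions)
            q[(state, act)] += 0.3 * (reward + 0.9 * best_next - q[(state, act)])
            if abs(rt - TARGET_RT) < 0.05:
                break  # objective met for this episode
    # The learned Q-table can be reused in later test sessions.
    return dict(q)
```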

Keywords
Analytical models, Automation, Transfer learning, Decision making, Reinforcement learning, Knowledge representation, Evolutionary computation, performance testing, load testing, workload generation, autonomous testing
National Category
Computer Systems
Identifiers
urn:nbn:se:ri:diva-55976 (URN), 10.1109/CEC45853.2021.9504763 (DOI)
Conference
2021 IEEE Congress on Evolutionary Computation (CEC)
Available from: 2021-08-27. Created: 2021-08-27. Last updated: 2023-10-04. Bibliographically approved.
Helali Moghadam, M., Saadatmand, M., Borg, M., Bohlin, M. & Lisper, B. (2020). Poster: Performance Testing Driven by Reinforcement Learning. In: 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST). Paper presented at 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST) (pp. 402-405).
Poster: Performance Testing Driven by Reinforcement Learning
2020 (English). In: 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST), 2020, p. 402-405. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

Performance testing remains a challenge, particularly for complex systems. Different application-, platform-, and workload-based factors can influence the performance of software under test. Common approaches for generating platform- and workload-based test conditions are often based on system model or source code analysis, real usage modeling, and use-case-based design techniques. Nonetheless, creating a detailed performance model is often difficult, and those artifacts might not always be available during testing. On the other hand, test automation solutions such as automated test case generation can reduce effort and cost, with the potential to improve coverage of the intended test criteria. Furthermore, if the optimal way (policy) to generate test cases can be learnt by the testing system, then the learnt policy can be reused in further testing situations such as testing variants, evolved versions of the software, and different testing scenarios. This capability can lead to additional savings in cost and computation time in the testing process. In this research, we present an autonomous performance testing framework which uses model-free reinforcement learning augmented by fuzzy logic and self-adaptive strategies. It is able to learn the optimal policy to generate platform- and workload-based test conditions that meet the intended testing objective, without access to the system model or source code. The use of fuzzy logic and a self-adaptive strategy helps tackle uncertainty and improves the accuracy and adaptivity of the proposed learning. Our evaluation experiments show that the proposed autonomous performance testing framework is able to generate the test conditions efficiently and in a way that adapts to varying testing situations.
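The fragment below illustrates only the fuzzy-state idea: a continuous observation receives graded memberships in overlapping sets instead of a hard threshold, which softens the effect of measurement uncertainty on the learner's state. The membership shapes and labels are assumptions for the example.

```python
# Sketch of fuzzy state representation: grade a continuous observation
# (here, CPU utilisation in [0, 1]) into overlapping sets. Shapes and labels
# are illustrative assumptions.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_state(cpu_util: float) -> dict:
    return {
        "low":    triangular(cpu_util, -0.01, 0.0, 0.5),
        "medium": triangular(cpu_util, 0.25, 0.5, 0.75),
        "high":   triangular(cpu_util, 0.5, 1.0, 1.01),
    }

print(fuzzy_state(0.62))  # partly 'medium', partly 'high'
```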

Keywords
learning (artificial intelligence), program testing, source code (software), complex systems, workload-based factors, workload-based test conditions, system model, usage modeling, use-case based design techniques, test automation solutions, automated test case generation, intended test criteria coverage, testing system, testing situations, testing variants, testing process, autonomous performance testing framework, model-free reinforcement learning, intended testing objective, source code, Unified modeling language, Stress, Time factors, Sensitivity, Error analysis, Adaptation models, performance testing, stress testing, load testing, machine learning, reinforcement learning
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-51989 (URN), 10.1109/ICST46399.2020.00048 (DOI)
Conference
2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST)
Available from: 2021-01-21. Created: 2021-01-21. Last updated: 2023-10-04. Bibliographically approved.
Tahvili, S., Hatvani, L., Felderer, M., Afzal, W. & Bohlin, M. (2019). Automated functional dependency detection between test cases using Doc2Vec and Clustering. In: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019. Paper presented at 1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019, 4 April 2019 through 9 April 2019 (pp. 19-26). Institute of Electrical and Electronics Engineers Inc.
Automated functional dependency detection between test cases using Doc2Vec and Clustering
2019 (English). In: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 19-26. Conference paper, Published paper (Refereed).
Abstract [en]

Knowing about dependencies and similarities between test cases is beneficial for prioritizing them for cost-effective test execution. This holds especially true for the time-consuming, manual execution of integration test cases written in natural language. Test case dependencies are typically derived from requirements and design artifacts. However, such artifacts are not always available, and the derivation process can be very time-consuming. In this paper, we propose, apply, and evaluate a novel approach that derives test cases' similarities and functional dependencies directly from the test specification documents written in natural language, without requiring any other data source. Our approach uses an implementation of the Doc2Vec algorithm to detect text-semantic similarities between test cases and then groups them using two clustering algorithms, HDBSCAN and FCM. The correlation between test case text-semantic similarities and their functional dependencies is evaluated in the context of an on-board train control system from Bombardier Transportation AB in Sweden. For this system, the dependencies between the test cases were previously derived and are compared to the results of our approach. The results show that, of the two evaluated clustering algorithms, HDBSCAN performs better than FCM or a dummy classifier. The classification methods' results are of reasonable quality and especially useful from an industrial point of view. Finally, applying random undersampling to correct the imbalanced data distribution results in an F1 score of up to 75% with the HDBSCAN clustering algorithm.
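A minimal sketch of the pipeline, using the gensim Doc2Vec and hdbscan libraries: embed the natural-language test specifications, then cluster the vectors. The example texts and hyperparameters are invented, and the FCM branch evaluated in the paper is omitted for brevity.

```python
# Minimal sketch: Doc2Vec embeddings of test specifications + HDBSCAN clustering.
# Example texts and hyperparameters are illustrative assumptions.
import numpy as np
import hdbscan  # pip install hdbscan
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

test_specs = [  # hypothetical integration test case descriptions
    "Apply brake command and verify brake pressure reaches target level",
    "Release brake command and verify brake pressure returns to zero",
    "Open passenger doors and verify door status signal is set",
    "Close passenger doors and verify door status signal is cleared",
]

docs = [TaggedDocument(words=t.lower().split(), tags=[i]) for i, t in enumerate(test_specs)]
model = Doc2Vec(docs, vector_size=20, min_count=1, epochs=100, seed=1)
vectors = np.array([model.dv[i] for i in range(len(test_specs))])

# With so few documents the clusters may be noisy (-1 = noise label).
labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(vectors)
for spec, label in zip(test_specs, labels):
    print(label, spec)  # test cases in the same cluster are dependency candidates
```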

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Clustering, Doc2Vec, FCM, HDBSCAN, Paragraph Vectors, Software Testing, Test Case Dependency, Artificial intelligence, Cost effectiveness, Semantics, Testing, Bombardier Transportation, Classification methods, Functional dependency, Random under samplings, Test case, Train control systems, Clustering algorithms
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-39269 (URN), 10.1109/AITest.2019.00-13 (DOI), 2-s2.0-85067096441 (Scopus ID), 9781728104928 (ISBN)
Conference
1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019, 4 April 2019 through 9 April 2019
Note

Funding text: ECSEL & VINNOVA (through projects MegaM@RT2 & TESTOMAT) and the Swedish Knowledge Foundation (through the projects TOCSYC (20130085) and TestMine (20160139)) have supported this work.

Available from: 2019-07-03. Created: 2019-07-03. Last updated: 2020-01-29. Bibliographically approved.
Helali Moghadam, M., Saadatmand, M., Borg, M., Bohlin, M. & Lisper, B. (2019). Machine Learning to Guide Performance Testing: An Autonomous Test Framework. In: ICST Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems ITEQS'19, 2019. Paper presented at ICST Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems ITEQS'19, 22 Apr 2019, Xi'an, China.
Machine Learning to Guide Performance Testing: An Autonomous Test Framework
2019 (English). In: ICST Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems (ITEQS'19), 2019. Conference paper, Published paper (Refereed).
Abstract [en]

Satisfying performance requirements is of great importance for performance-critical software systems. Performance analysis, which provides an estimate of the performance indices and ascertains whether the requirements are met, is essential for achieving this target. Model-based analysis, as a common approach, might provide useful information, but inferring a precise performance model is challenging, especially for complex systems. Performance testing is considered a dynamic approach to performance analysis. In this work-in-progress paper, we propose a self-adaptive learning-based test framework which learns how to apply stress testing, as one aspect of performance testing, on various software systems to find the performance breaking point. It learns the optimal policy for generating stress test cases for different types of software systems, then replays the learned policy to generate the test cases with less required effort. Our study indicates that the proposed learning-based framework could be applied to different types of software systems and guides towards autonomous performance testing.
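For readers unfamiliar with the notion of a performance breaking point, the sketch below shows a naive fixed-ramp version of the search; the framework described above instead lets a reinforcement-learning agent choose the stress actions. The measure() stub and the threshold are assumptions.

```python
# Minimal sketch of the stress-testing objective: increase the load step by
# step until the response-time requirement is violated (the breaking point).
# The measure() stub and threshold are assumptions; the paper's framework
# selects stress actions with reinforcement learning rather than a fixed ramp.
def measure(load: int) -> float:
    """Placeholder: run the system under 'load' users and return avg response time."""
    return 0.2 + 0.01 * load ** 1.5

def find_breaking_point(requirement: float = 2.0, step: int = 5, max_load: int = 500) -> int:
    load = step
    while load <= max_load:
        if measure(load) > requirement:
            return load  # smallest tested load that violates the requirement
        load += step
    return -1  # requirement never violated within the explored range

print("Breaking point at load:", find_breaking_point())
```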

Keywords
performance requirements, performance testing, test case generation, reinforcement learning, autonomous testing, Engineering and Technology, Computer Systems
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-39327 (URN), 10.1109/ICSTW.2019.00046 (DOI), 2-s2.0-85068406208 (Scopus ID)
Conference
ICST Workshop on Testing Extra-Functional Properties and Quality Characteristics of Software Systems ITEQS'19, 22 Apr 2019, Xi’an, China
Available from: 2019-07-04. Created: 2019-07-04. Last updated: 2023-10-04. Bibliographically approved.
Tahvili, S., Pimentel, R., Afzal, W., Ahlberg, M., Fornander, E. & Bohlin, M. (2019). SOrTES: A Supportive Tool for Stochastic Scheduling of Manual Integration Test Cases. IEEE Access, 7, 12928-12946, Article ID 8616828.
SOrTES: A Supportive Tool for Stochastic Scheduling of Manual Integration Test Cases
2019 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 12928-12946, article id 8616828. Article in journal (Refereed). Published.
Abstract [en]

The main goal of software testing is to detect as many hidden bugs as possible in the final software product before release. Generally, a software product is tested by executing a set of test cases, which can be performed manually or automatically. The number of test cases required to test a software product depends on several parameters such as the product type, size, and complexity. Executing all test cases in no particular order can lead to wasted time and resources. Test optimization can provide a partial solution for saving time and resources, which can lead to the final software product being released earlier. In this regard, test case selection, prioritization, and scheduling can be considered possible solutions for test optimization. Most companies do not provide direct support for ranking test cases on their own servers. In this paper, we introduce, apply, and evaluate sOrTES as our decision support system for manual integration test scheduling. sOrTES is a Python-based supportive tool which schedules manual integration test cases that are written in natural language text. The feasibility of sOrTES is studied by an empirical evaluation performed on a railway use case at Bombardier Transportation, Sweden. The empirical evaluation indicates that around 40% of testing failures can be avoided by using the execution schedules proposed by sOrTES, which leads to an increase in requirements coverage of up to 9.6%.
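The sketch below shows one simple, dependency-aware scheduling rule of the kind such a tool could apply: execute only test cases whose dependencies are already scheduled, and among those prefer the one covering the most uncovered requirements. The example data and the greedy rule are illustrative assumptions, not sOrTES's algorithm.

```python
# Illustrative greedy, dependency-aware test scheduling. Example data and the
# greedy rule are assumptions; sOrTES works on natural-language specifications
# and stochastic inputs.
def schedule(tests: dict) -> list:
    """tests: name -> {"deps": set of test names, "reqs": set of requirement ids}"""
    scheduled, covered, order = set(), set(), []
    while len(order) < len(tests):
        ready = [t for t in tests if t not in scheduled and tests[t]["deps"] <= scheduled]
        if not ready:
            raise ValueError("dependency cycle or unsatisfiable dependency")
        best = max(ready, key=lambda t: len(tests[t]["reqs"] - covered))
        scheduled.add(best)
        covered |= tests[best]["reqs"]
        order.append(best)
    return order

example = {  # hypothetical test cases, dependencies, and requirement coverage
    "TC1": {"deps": set(),          "reqs": {"R1", "R2"}},
    "TC2": {"deps": {"TC1"},        "reqs": {"R3"}},
    "TC3": {"deps": set(),          "reqs": {"R2", "R4", "R5"}},
    "TC4": {"deps": {"TC2", "TC3"}, "reqs": {"R6"}},
}
print(schedule(example))  # ['TC3', 'TC1', 'TC2', 'TC4']
```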

Keywords
decision support systems, dependency, integration testing, manual testing, scheduler algorithm, Software testing, stochastic test scheduling, test optimization, Artificial intelligence, Integration, Program debugging, Scheduling, Stochastic systems, Testing, Bombardier Transportation, Empirical evaluations, Natural language text, Stochastic scheduling, Test scheduling
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-37921 (URN), 10.1109/ACCESS.2019.2893209 (DOI), 2-s2.0-85061302207 (Scopus ID)
Note

Funding text: This work was supported in part by ECSEL & VINNOVA through projects XIVT, TESTOMAT, and MegaM@RT2, in part by the Swedish Knowledge Foundation through projects TOCSYC and TESTMINE, and in part by the ERCIM "Alain Bensoussan" Fellowship Programme.

Available from: 2019-03-05. Created: 2019-03-05. Last updated: 2020-01-29. Bibliographically approved.
Ghaviha, N., Bohlin, M., Holmberg, C. & Dahlquist, E. (2019). Speed profile optimization of catenary-free electric trains with lithium-ion batteries. Journal of Modern Transportation, 27(3), 153-168
Speed profile optimization of catenary-free electric trains with lithium-ion batteries
2019 (English). In: Journal of Modern Transportation, ISSN 2095-087X, E-ISSN 2196-0577, Vol. 27, no. 3, p. 153-168. Article in journal (Refereed). Published.
Abstract [en]

Catenary-free operated electric trains, as one of the recent technologies in railway transportation, have opened a new field of research: speed profile optimization and energy-optimal operation of catenary-free operated electric trains. A well-formulated solution to this problem should consider the characteristics of the energy storage device using validated models and methods. This paper discusses how lithium-ion battery behavior can be taken into account in the speed profile optimization of catenary-free operated electric trains. We combine the single mass point train model with an electrical battery model and apply a dynamic programming approach to minimize the charge taken from the battery during catenary-free operation. The models and the method are validated and evaluated against experimental data gathered from test runs of an actual battery-driven train in Essex, UK. The results show a significant potential for energy saving. Moreover, we show that the optimum speed profiles generated using our approach draw less charge from the battery than those of previous approaches.
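A toy dynamic-programming formulation gives the flavour of the optimization: pick a driving regime per distance segment to minimise the total charge drawn, ending at standstill. The charge model, the coarse speed grid, and the missing trip-time constraint are simplifications; the paper uses a validated single-mass-point train model coupled with an electrical battery model.

```python
# Toy backward dynamic program: choose a driving regime per distance segment
# so that total charge drawn from the battery is minimised, ending at speed 0.
# The charge model, speed grid, and missing trip-time constraint are
# simplifying assumptions, not the paper's formulation.
import math

SPEEDS = list(range(0, 21, 5))         # km/h bins: 0, 5, 10, 15, 20
ACTIONS = {"accelerate": +5, "cruise": 0, "coast": -5}
N_STEPS = 10                           # distance segments

def charge_cost(speed: int, action: str) -> float:
    """Assumed charge drawn over one segment (arbitrary units)."""
    if action == "accelerate":
        return 1.0 + 0.05 * speed
    if action == "cruise":
        return 0.2 + 0.02 * speed
    return 0.0                         # coasting draws (approximately) no charge

def optimise() -> float:
    # value[v]: minimal charge needed to finish the remaining segments from
    # speed v, requiring speed 0 at the end of the trip.
    value = {v: (0.0 if v == 0 else math.inf) for v in SPEEDS}
    for _ in range(N_STEPS):
        new_value = {}
        for v in SPEEDS:
            best = math.inf
            for action, dv in ACTIONS.items():
                nv = v + dv
                if nv in value and v + nv > 0:   # disallow standing still on a segment
                    best = min(best, charge_cost(v, action) + value[nv])
            new_value[v] = best
        value = new_value
    return value[0]                     # trip starts from standstill

print("Minimal charge for the toy trip:", round(optimise(), 2))
```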

Place, publisher, year, edition, pages
Springer Berlin Heidelberg, 2019
Keywords
Catenary-free operation, Electric train, Energy efficiency, Speed profile optimization
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-39917 (URN), 10.1007/s40534-018-0181-y (DOI), 2-s2.0-85071607452 (Scopus ID)
Note

Funding text: This research was funded by VINNOVA (Sweden's Innovation Agency), Grant Numbers 2014-04319 and 2012-01277. The authors would like to thank Martin Joborn from Linköping University for his help, guidance, and discussion on the train modeling.

Available from: 2019-09-27. Created: 2019-09-27. Last updated: 2020-01-29. Bibliographically approved.
Helali Moghadam, M., Saadatmand, M., Borg, M., Bohlin, M. & Lisper, B. (2018). Adaptive Runtime Response Time Control in PLC-based Real-Time Systems using Reinforcement Learning. Paper presented at 13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (pp. 217-223).
Adaptive Runtime Response Time Control in PLC-based Real-Time Systems using Reinforcement Learning
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Timing requirements such as constraints on response time are key characteristics of real-time systems, and violations of these requirements might cause a total failure, particularly in hard real-time systems. Runtime monitoring of the system properties is of great importance for checking the system status and mitigating such failures. Thus, a runtime control that preserves the system properties could improve the robustness of the system with respect to timing violations. Common control approaches may require a precise analytical model of the system, which is difficult to provide at design time. Reinforcement learning is a promising technique for providing adaptive model-free control when the environment is stochastic and the control problem can be formulated as a Markov Decision Process. In this paper, we propose an adaptive runtime control approach using reinforcement learning for real-time programs based on Programmable Logic Controllers (PLCs), to meet the response time requirements. We demonstrate through multiple experiments that our approach can control the response time efficiently to satisfy the timing requirements.
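The sketch below shows a learning runtime controller in this spirit: observe the response time each cycle, and learn which adjustment of a tunable knob keeps it under the requirement. The plant stub, the 'admitted load' knob, and the reward shaping are assumptions for the example, not the paper's setup.

```python
# Sketch of a learning runtime controller: Q-learning over a tunable knob
# (an assumed 'admitted load' level) to keep response time under a requirement.
# The plant stub, knob, and reward shaping are illustrative assumptions.
import random
from collections import defaultdict

REQUIREMENT_MS = 50.0
ACTIONS = (-1, 0, +1)                     # decrease / keep / increase admitted load

def observe_response_time(load_level: int) -> float:
    """Placeholder for one monitored PLC scan cycle under the given load."""
    return 10.0 + 6.0 * load_level + random.uniform(-3.0, 3.0)

def control_loop(cycles: int = 500) -> dict:
    q = defaultdict(float)
    load_level = 5
    for _ in range(cycles):
        rt = observe_response_time(load_level)
        state = (min(int(rt // 10), 10), load_level)     # coarse response-time bin
        action = (random.choice(ACTIONS) if random.random() < 0.1
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        load_level = min(max(load_level + action, 0), 10)
        new_rt = observe_response_time(load_level)
        # Reward: stay under the requirement while admitting as much load as possible.
        reward = load_level if new_rt <= REQUIREMENT_MS else -10.0
        next_state = (min(int(new_rt // 10), 10), load_level)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += 0.2 * (reward + 0.9 * best_next - q[(state, action)])
    return q
```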

Keywords
adaptive response time control, PLC-based real-time programs, reinforcement learning, runtime monitoring
National Category
Software Engineering
Identifiers
urn:nbn:se:ri:diva-34197 (URN), 10.1145/3194133.3194153 (DOI), 2-s2.0-85051555083 (Scopus ID), 9781450357159 (ISBN)
Conference
13th International Symposium on Software Engineering for Adaptive and Self-Managing Systems
Available from: 2018-07-13. Created: 2018-07-13. Last updated: 2023-10-04. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-1597-6738