Publications (10 of 13)
Martinsson, J. & Sandsten, M. (2024). DMEL: THE DIFFERENTIABLE LOG-MEL SPECTROGRAM AS A TRAINABLE LAYER IN NEURAL NETWORKS. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Paper presented at the 49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, South Korea, 14-19 April 2024 (pp. 5005-5009). Institute of Electrical and Electronics Engineers Inc.
DMEL: THE DIFFERENTIABLE LOG-MEL SPECTROGRAM AS A TRAINABLE LAYER IN NEURAL NETWORKS
2024 (English). In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2024, p. 5005-5009. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present the differentiable log-Mel spectrogram (DMEL) for audio classification. DMEL uses a Gaussian window, with a window length that can be jointly optimized with the neural network. DMEL is used as the input layer in different neural networks and evaluated on standard audio datasets. We show that DMEL achieves a higher average test accuracy than a baseline with a fixed window length when the initial choice of window length is sub-optimal. In addition, we analyse the computational cost of DMEL and compare it to a standard hyperparameter search over different window lengths, showing favorable results for DMEL. Finally, an empirical evaluation on a carefully designed dataset is performed to investigate whether the differentiable spectrogram actually learns the optimal window length; the design of the dataset relies on the theory of spectrogram resolution. We also empirically evaluate the convergence rate to the optimal window length.
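The property the abstract relies on can be illustrated in a few lines of code: a Gaussian window is a smooth function of its width parameter, so a spectrogram built from it has well-defined gradients with respect to the window length and can be trained by backpropagation. The sketch below is illustrative only and not the authors' implementation; the parameterization (a plain width parameter sigma) and all numeric values are assumptions for the example.

```python
import math

def gaussian_window(sigma, n_points):
    # Gaussian window; every sample depends smoothly on the width sigma,
    # which is what makes a spectrogram built from it differentiable
    # with respect to the (effective) window length.
    centre = (n_points - 1) / 2.0
    return [math.exp(-0.5 * ((n - centre) / sigma) ** 2) for n in range(n_points)]

def d_window_d_sigma(sigma, n_points):
    # Analytic derivative of each window sample with respect to sigma:
    # d/dsigma exp(-0.5 ((n-c)/sigma)^2) = w(n) * (n-c)^2 / sigma^3.
    centre = (n_points - 1) / 2.0
    return [math.exp(-0.5 * ((n - centre) / sigma) ** 2) * ((n - centre) ** 2) / sigma ** 3
            for n in range(n_points)]

# Finite-difference check that the analytic gradient matches, i.e. the
# window length is trainable by gradient descent like any other weight.
sigma, eps, N = 8.0, 1e-5, 16
w_plus = gaussian_window(sigma + eps, N)
w_minus = gaussian_window(sigma - eps, N)
numeric = [(p - m) / (2 * eps) for p, m in zip(w_plus, w_minus)]
analytic = d_window_d_sigma(sigma, N)
max_err = max(abs(a - b) for a, b in zip(numeric, analytic))
```

In an actual network the window would be generated inside the STFT layer of an automatic-differentiation framework, so the gradient above would be computed by the framework rather than by hand.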

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
National Category
Mathematics
Identifiers
urn:nbn:se:ri:diva-74873 (URN); 10.1109/ICASSP48485.2024.10446816 (DOI); 2-s2.0-85195408870 (Scopus ID)
Conference
49th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024, Seoul, South Korea, 14-19 April 2024
Note

Thanks to the Swedish Foundation for Strategic Research for funding.

Available from: 2024-09-04. Created: 2024-09-04. Last updated: 2024-09-06. Bibliographically approved
Martinsson, J., Runefors, M., Frantzich, H., Glebe, D., McNamee, M. & Mogren, O. (2022). A Novel Method for Smart Fire Detection Using Acoustic Measurements and Machine Learning: Proof of Concept. Fire technology, 58, 3385
A Novel Method for Smart Fire Detection Using Acoustic Measurements and Machine Learning: Proof of Concept
2022 (English). In: Fire technology, ISSN 0015-2684, E-ISSN 1572-8099, Vol. 58, p. 3385. Article in journal (Refereed), Published
Abstract [en]

Fires are a major hazard resulting in high monetary costs, personal suffering, and irreplaceable losses. The consequences of a fire can be mitigated by early detection systems, which increase the potential for successful intervention. The number of false alarms in current systems can be very high for some applications, but could be reduced by increasing the reliability of the detection system using complementary signals from multiple sensors. The current study investigates the novel use of machine learning for fire event detection based on acoustic sensor measurements. Many materials exposed to heat give rise to acoustic emissions during the heating, pyrolysis and burning phases. Further, sound is generated by the heat flow associated with the flame itself. The acoustic data collected in this study is used to define an acoustic sound event detection task, and the proposed machine learning method is trained to detect the presence of a fire event based on the emitted acoustic signal. The method is able to detect the presence of fire events from the examined material types with an overall F-score of 98.4%. The method has been developed using laboratory scale tests as a proof of concept and needs further development using realistic scenarios in the future. © 2022, The Author(s).

Place, publisher, year, edition, pages
Springer, 2022
Keywords
Acoustic emissions, Artificial intelligence, Deep neural networks, Fire detection, Machine learning, Sound, Acoustic emission testing, Acoustic variables measurement, Fire detectors, Fires, Learning systems, Acoustic measurements, Acoustic-emissions, Early detection system, Fire event, Machine-learning, Major hazards, Monetary costs, Novel methods, Proof of concept
National Category
Physical Sciences
Identifiers
urn:nbn:se:ri:diva-60272 (URN); 10.1007/s10694-022-01307-1 (DOI); 2-s2.0-85137843831 (Scopus ID)
Note

The work presented in this article was funded by FORMAS, the Swedish Research Council for Sustainable Development (contract number 2019-00954).

Available from: 2022-10-10. Created: 2022-10-10. Last updated: 2023-07-03. Bibliographically approved
Glebe, D., Johansson, T., Martinsson, J. & Genell, A. (2022). Bullerdatainsamling och autonom artidentifiering för att underlätta miljöövervakning: En syntes. Stockholm: Naturvårdsverket
Bullerdatainsamling och autonom artidentifiering för att underlätta miljöövervakning: En syntes
2022 (Swedish). Report (Other academic)
Abstract [sv]

The trend for Sweden's environmental objectives is in several cases negative, and one of the areas showing a negative trend is the objective "A Rich Diversity of Plant and Animal Life". Sweden's follow-up of the Habitats and Birds Directives shows that biodiversity is under pressure. It is currently difficult and costly to monitor whether the Swedish environmental objectives are being met. Sound and image data are already collected on a large scale, but there is large untapped potential to simplify environmental monitoring through new low-cost data collection units and, above all, through new automatic AI-based analysis systems. Manual sampling and data collection are time-consuming and costly, which makes autonomous sound and image data collection attractive, especially in inaccessible locations such as under water. In several cases the general public could be engaged to help detect invasive alien species using a mobile phone application that uses artificial intelligence for species identification. This report reviews the state of the art in the use of sound and image data for noise monitoring and noise mapping, for species identification of animals and plants, and for monitoring of invasive alien species. The report reviews and analyses current and emerging technologies and methods and assesses their maturity, availability and reliability. The report aims to describe opportunities and challenges related to: systems for collecting noise data to model noise impact in the terrestrial and aquatic domains and enable noise mapping; systems for autonomous sound- and image-based species identification, population estimation and biodiversity monitoring; and systems for detection of invasive alien species, for example with a mobile phone app.
Ecoacoustic noise data collection. Biological applications of acoustics are called bioacoustics or ecoacoustics. The most explored areas of passive acoustic monitoring (PAM) are animal calls in the ultrasonic range (examples are bats and whales), since traditional analysis methods can be used in those cases. PAM can, for example, be realised with sound boxes, individually or in arrays, or with collars or implanted devices on individual animals. Acoustically active animals adapt to one another and to other sounds, which is why soundscapes are considered to carry information about the health of the biosystem. It is important to address the entire measurement chain in PAM technology. The report's overview of hardware focuses on sound, since camera technology is well established in Swedish wildlife management. Most hardware components in the measurement chain need to be chosen from a practical point of view to work well in a measurement system, for example battery life, memory management, connectivity, and low cost if many collection units are needed. The report also contains an overview of autonomous systems and integrated units that cover the entire measurement chain for sound data collection. The possibility of combining sound and image data for analysis is rarely exploited today, and there may be large gains to be made in this field. Successful AI-based analysis methods have not yet made their way into commercial software. There is strong pressure in the research community to make analysis results, analysis tools and collected data available, both to reuse data and results for resource reasons and to offer greater data coverage. Standardised metadata formats are also in demand, in line with international research practice. The successes of citizen science can partly be attributed to new tools implemented in mobile apps, but there is great potential for tailoring tools and methods to specific activities and actual needs.
AI in bioacoustics, state of the art. The AI-based methods most used in bioacoustics are deep learning methods, above all various forms of neural networks suited to resource-intensive listening and image inspection. The largest area is bird classification. The most common type of neural network in bioacoustics is the convolutional neural network (CNN), which is important in image and sound analysis, but new variants are constantly being developed. Spectrograms (an image representation of sound) are often used as input to deep learning models, but many variants have been shown to work. The mel spectrogram has worked best in bioacoustic contexts, because its frequency scaling depicts calls in the same way regardless of pitch, which suits CNN-type networks. Note that the same raw data can underlie different types of training input, provided the quality and resolution are good enough. Demands for high-resolution data are expected to increase, which is important if collected raw data are to remain usable in the future, as are demands that metadata be linked to confirmed observations. In machine listening, the number of publications in the field increased fifteenfold between 1998 and 2018. Estimating bird population density with machine listening has been shown to give results as good as manual point counts with respect to key parameters such as the number of bird species recorded. Methods that do not require manual labelling of training data are a promising way forward for noise reduction and source separation. When annotated training data are scarce, embedding functions can be used. From the field of urban noise there are methods for streaming data in real time over distributed networks. Active learning methods, that is, methods where experts actively take part in the learning process, quickly give powerful results. An interesting variant is to train animals to make choices that become annotations of the input data; this yields a model that represents the animals' own perception, which should however be used with caution.
In summary, a large-scale effort to collect noise data outside Sweden's built-up areas or along Sweden's coasts is not seen as realistic at present, but there are great opportunities to integrate autonomous species identification into the monitoring activities already carried out in Swedish nature, or as a proof of concept.

Place, publisher, year, edition, pages
Stockholm: Naturvårdsverket, 2022. p. 89
Series
Naturvårdsverket rapport 7086
Keywords
Environmental Sciences
National Category
Environmental Sciences
Identifiers
urn:nbn:se:ri:diva-62462 (URN); 978-91-620-7086-1 (ISBN)
Available from: 2023-01-24. Created: 2023-01-24. Last updated: 2023-05-22. Bibliographically approved
Martinsson, J., Willbo, M., Pirinen, A., Mogren, O. & Sandsten, M. (2022). Few-shot bioacoustic event detection using a prototypical network ensemble with adaptive embedding functions. Paper presented at Detection and Classification of Acoustic Scenes and Events 2022, DCASE 2022.
Few-shot bioacoustic event detection using a prototypical network ensemble with adaptive embedding functions
2022 (English). Conference paper, Published paper (Refereed)
Abstract [en]

In this report we present our method for the DCASE 2022 challenge on few-shot bioacoustic event detection. We use an ensemble of prototypical neural networks with adaptive embedding functions and show that both ensemble and adaptive embedding functions can be used to improve results from an average F-score of 41.3% to an average F-score of 60.0% on the validation dataset.
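For readers unfamiliar with prototypical networks, the classification step they are built on is short enough to sketch: each class is represented by the mean of its support-set embeddings, and a query is assigned to the nearest prototype. The embeddings and labels below are invented toy values; the adaptive embedding functions and the ensembling from the report are not shown.

```python
def prototype(embeddings):
    # Class prototype: the mean of the support-set embedding vectors.
    dim = len(embeddings[0])
    return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, prototypes):
    # Assign the query to the label whose prototype is closest
    # in squared Euclidean distance.
    return min(prototypes, key=lambda label: sq_dist(query, prototypes[label]))

# Toy 2-D embeddings for a few-shot "event" vs "background" task.
support = {
    "event":      [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]],
    "background": [[-1.0, -1.0], [-0.9, -1.2], [-1.1, -0.8]],
}
protos = {label: prototype(vecs) for label, vecs in support.items()}
label = classify([0.9, 1.0], protos)  # lands near the "event" prototype
```

An ensemble, as in the report, would run several such embedding functions and combine their decisions.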

Keywords
Machine listening, bioacoustics, few-shot learning, ensemble
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:ri:diva-62530 (URN)
Conference
Detection and Classification of Acoustic Scenes and Events 2022, DCASE 2022
Available from: 2023-01-13. Created: 2023-01-13. Last updated: 2024-07-28. Bibliographically approved
Martinsson, J., Willbo, M., Pirinen, A., Mogren, O. & Sandsten, M. (2022). FEW-SHOT BIOACOUSTIC EVENT DETECTION USING AN EVENT-LENGTH ADAPTED ENSEMBLE OF PROTOTYPICAL NETWORKS. Paper presented at Detection and Classification of Acoustic Scenes and Events 2022, 3–4 November 2022, Nancy, France.
FEW-SHOT BIOACOUSTIC EVENT DETECTION USING AN EVENT-LENGTH ADAPTED ENSEMBLE OF PROTOTYPICAL NETWORKS
2022 (English). Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we study two major challenges in few-shot bioacoustic event detection: variable event lengths and false positives. We use prototypical networks where the embedding function is trained using a multi-label sound event detection model, instead of using episodic training as the proxy task on the provided training dataset. This is motivated by polyphonic sound events being present in the base training data. We propose a method to choose the embedding function based on the average event length of the few-shot examples and show that this makes the method more robust towards variable event lengths. Further, we show that an ensemble of prototypical neural networks trained on different training and validation splits of time-frequency images with different loudness normalizations reduces false positives. In addition, we present an analysis of the effect that the studied loudness normalization techniques have on the performance of the prototypical network ensemble. Overall, per-channel energy normalization (PCEN) outperforms the standard log transform for this task. The method uses no data augmentation and no external data. The proposed approach achieves an F-score of 48.0% when evaluated on the hidden test set of the Detection and Classification of Acoustic Scenes and Events (DCASE) task 5.
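Since the abstract's main finding concerns PCEN versus the log transform, a one-channel sketch of per-channel energy normalization may help: a first-order IIR smoother of the energy acts as an automatic gain control, and a root compression replaces the log. This is a simplified single-channel version; the parameter values are common defaults from the PCEN literature, not necessarily those used in the submission.

```python
def pcen(energy, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    # Per-channel energy normalization for one frequency channel.
    # m is a smoothed energy estimate (automatic gain control);
    # the (. + delta)^r - delta^r step is the root compression.
    out, m = [], energy[0]
    for e in energy:
        m = (1 - s) * m + s * e          # first-order IIR smoother
        out.append((e / (eps + m) ** alpha + delta) ** r - delta ** r)
    return out

# A loud channel and the same channel 100x quieter end up on a similar
# scale, which is what makes PCEN robust to loudness differences.
loud = pcen([100.0] * 50)
quiet = pcen([1.0] * 50)
```

In a full pipeline this would be applied per mel channel of the time-frequency image before it is fed to the network.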

Keywords
Machine listening, bioacoustics, few-shot learning, ensemble
National Category
Computer Sciences
Identifiers
urn:nbn:se:ri:diva-62540 (URN)
Conference
Detection and Classification of Acoustic Scenes and Events 2022, 3–4 November 2022, Nancy, France
Available from: 2023-01-16. Created: 2023-01-16. Last updated: 2024-07-28. Bibliographically approved
Martinsson, J., Zec, E., Gillblad, D. & Mogren, O. (2021). Adversarial representation learning for synthetic replacement of private attributes. In: Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021. Paper presented at the 2021 IEEE International Conference on Big Data, Big Data 2021, 15-18 December 2021 (pp. 1291-1299). Institute of Electrical and Electronics Engineers Inc.
Adversarial representation learning for synthetic replacement of private attributes
2021 (English). In: Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021, Institute of Electrical and Electronics Engineers Inc., 2021, p. 1291-1299. Conference paper, Published paper (Refereed)
Abstract [en]

Data privacy is an increasingly important aspect of many real-world analytics tasks. Data sources that contain sensitive information may have immense potential which could be unlocked using the right privacy-enhancing transformations, but current methods often fail to produce convincing output. Furthermore, finding the right balance between privacy and utility is often a tricky trade-off. In this work, we propose a novel two-step approach for data privatization: it first removes the sensitive information, and then replaces it with an independent random sample. Our method builds on adversarial representation learning, which ensures strong privacy by training the model to fool an increasingly strong adversary. While previous methods only aim at obfuscating the sensitive information, we find that adding new random information in its place strengthens the provided privacy and provides better utility at any given level of privacy. The result is an approach that can provide stronger privatization on image data while preserving both the domain and the utility of the inputs, entirely independent of the downstream task.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2021
Keywords
Deep Learning, Generative Adversarial Privacy, Machine Learning, Privacy, Computer vision, Data privacy, Economic and social effects, Privatization, 'current, Data-source, Machine-learning, Real-world, Sensitive informations, Synthetic replacement, Trade off
National Category
Computer Sciences
Identifiers
urn:nbn:se:ri:diva-58910 (URN); 10.1109/BigData52589.2021.9671802 (DOI); 2-s2.0-85125306014 (Scopus ID); 9781665439022 (ISBN)
Conference
2021 IEEE International Conference on Big Data, Big Data 2021, 15-18 December 2021
Available from: 2022-03-30. Created: 2022-03-30. Last updated: 2024-05-21. Bibliographically approved
Ericsson, D., Östberg, A., Zec, E. L., Martinsson, J. & Mogren, O. (2020). Adversarial representation learning for private speech generation. Paper presented at the 37th International Conference on Machine Learning, Vienna, Austria.
Adversarial representation learning for private speech generation
2020 (English). Conference paper, Published paper (Refereed)
Abstract [en]

As more data is collected in various settings across organizations, companies, and countries, there has been an increase in the demand for user privacy. Developing privacy preserving methods for data analytics is thus an important area of research. In this work we present a model based on generative adversarial networks (GANs) that learns to obfuscate specific sensitive attributes in speech data. We train a model that learns to hide sensitive information in the data, while preserving the meaning in the utterance. The model is trained in two steps: first to filter sensitive information in the spectrogram domain, and then to generate new and private information independent of the filtered one. The model is based on a U-Net CNN that takes mel-spectrograms as input. A MelGAN is used to invert the spectrograms back to raw audio waveforms. We show that it is possible to hide sensitive information such as gender by generating new data, trained adversarially to maintain utility and realism.

National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-51873 (URN)
Conference
37th International Conference on Machine Learning, Vienna, Austria
Available from: 2021-01-18. Created: 2021-01-18. Last updated: 2024-05-21. Bibliographically approved
Martinsson, J., Schliep, A., Eliasson, B. & Mogren, O. (2020). Blood Glucose Prediction with Variance Estimation Using Recurrent Neural Networks. Journal of Healthcare Informatics Research, 4(1)
Blood Glucose Prediction with Variance Estimation Using Recurrent Neural Networks
2020 (English). In: Journal of Healthcare Informatics Research, ISSN 2509-4971, E-ISSN 2509-498X, Vol. 4, no. 1. Article in journal (Refereed), Published
Abstract [en]

Many factors affect blood glucose levels in type 1 diabetics, several of which vary largely both in magnitude and delay of the effect. Modern rapid-acting insulins generally have a peak time after 60–90 min, while carbohydrate intake can affect blood glucose levels more rapidly for high glycemic index foods, or slower for other carbohydrate sources. It is important to have good estimates of the development of glucose levels in the near future both for diabetic patients managing their insulin distribution manually, as well as for closed-loop systems making decisions about the distribution. Modern continuous glucose monitoring systems provide excellent sources of data to train machine learning models to predict future glucose levels. In this paper, we present an approach for predicting blood glucose levels for diabetics up to 1 h into the future. The approach is based on recurrent neural networks trained in an end-to-end fashion, requiring nothing but the glucose level history for the patient. Our approach obtains results that are comparable to the state of the art on the Ohio T1DM dataset for blood glucose level prediction. In addition to predicting the future glucose value, our model provides an estimate of its certainty, helping users to interpret the predicted levels. This is realized by training the recurrent neural network to parameterize a univariate Gaussian distribution over the output. The approach needs no feature engineering or data preprocessing and is computationally inexpensive. We evaluate our method using the standard root-mean-squared error (RMSE) metric, along with a blood glucose-specific metric called the surveillance error grid (SEG). We further study the properties of the distribution that is learned by the model, using experiments that determine the nature of the certainty estimate that the model is able to capture.
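The certainty estimate described above comes from training the network to parameterize a univariate Gaussian, i.e. minimizing the Gaussian negative log-likelihood. The loss itself is a one-liner; the glucose values below are invented toy numbers, not data from the paper.

```python
import math

def gaussian_nll(mu, sigma, y):
    # Negative log-likelihood of observation y under N(mu, sigma^2).
    # Minimising this makes the network output both a prediction (mu)
    # and a certainty estimate (sigma).
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

# The loss rewards honest uncertainty: for a prediction that is off by 10,
# claiming high certainty (small sigma) costs far more than admitting doubt.
overconfident = gaussian_nll(mu=120.0, sigma=1.0, y=130.0)
honest = gaussian_nll(mu=120.0, sigma=10.0, y=130.0)
```

Conversely, when the prediction is exact, a smaller sigma gives a lower loss, so the model is pushed toward sigma values that match its actual error.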

Place, publisher, year, edition, pages
Springer, 2020
Keywords
Blood glucose prediction, Recurrent neural networks, Type 1 diabetes
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-45632 (URN); 10.1007/s41666-019-00059-y (DOI); 2-s2.0-85087825107 (Scopus ID)
Available from: 2020-08-13. Created: 2020-08-13. Last updated: 2023-06-02. Bibliographically approved
Soltanipour, N., Rahrovani, S., Martinsson, J. & Westlund, R. (2020). Chassis Hardware Fault Diagnostics with Hidden Markov Model Based Clustering. In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). Paper presented at the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC).
Chassis Hardware Fault Diagnostics with Hidden Markov Model Based Clustering
2020 (English). In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Predictive maintenance is a key component of cost reduction in the automotive industry and is of great importance. It can improve both comfort and safety by means of early detection, isolation and prediction of prospective failures. This is why the automotive industry and fleet managers are turning to predictive analytics to maintain a lead position in the industry. A patent application has recently been submitted, proposing a two-stage solution, including a real-time solution (onboard diagnostic system) and an offline solution (in the back-end), for health monitoring and assessment of different chassis components. Hardware faults are detected based on changes of the fundamental eigenfrequencies of the vehicle, where time series of interest from the in-car sensory system are collected and reported for advanced data analytics in the back-end. The main focus of this paper is on the latter solution, using an unsupervised machine learning approach. A clustering approach based on a mixture of hidden Markov models is adopted to conduct automatic diagnosis and isolation of faults. Detection and isolation of tyre and wheel bearing faults have been considered for this study, but the same framework can be used to handle other component faults, such as suspension system faults. In order to validate the performance of the proposed approach, tests were performed at the Hallared test track in Gothenburg, and data were collected for two faulty states (faulty wheel bearing and low tyre pressure) and a no-fault state.
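The clustering step rests on being able to score a time series under each component HMM. The scaled forward algorithm below shows that scoring on a toy discrete example; the two HMMs standing in for a "healthy" and a "faulty" vibration signature are invented parameters, not models identified from the test-track data.

```python
import math

def forward_loglik(obs, start, trans, emit):
    # Log-likelihood of a discrete observation sequence under an HMM,
    # computed with the scaled forward algorithm to avoid underflow.
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
                 for s in range(n)]
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

# Two toy 2-state HMMs: in a mixture-of-HMMs clustering, a sequence is
# assigned to the component model that explains it best.
healthy = dict(start=[0.9, 0.1], trans=[[0.9, 0.1], [0.1, 0.9]],
               emit=[[0.9, 0.1], [0.5, 0.5]])
faulty = dict(start=[0.1, 0.9], trans=[[0.9, 0.1], [0.1, 0.9]],
              emit=[[0.5, 0.5], [0.1, 0.9]])
seq = [0, 0, 1, 0, 0, 0]  # mostly low-energy observations
pick = max(("healthy", "faulty"),
           key=lambda m: forward_loglik(seq, **(healthy if m == "healthy" else faulty)))
```

Fitting the mixture itself would add an EM loop around this scoring step; only the assignment rule is sketched here.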

Keywords
Hidden Markov models, Wheels, Automobiles, Hardware, Time series analysis, Real-time systems, Patents
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-51878 (URN); 10.1109/ITSC45102.2020.9294468 (DOI)
Conference
2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)
Available from: 2021-01-18. Created: 2021-01-18. Last updated: 2023-05-22. Bibliographically approved
Korneliusson, M., Martinsson, J. & Mogren, O. (2019). Generative Modelling of Semantic Segmentation Data in the Fashion Domain. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). Paper presented at the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) (pp. 3169-3172).
Generative Modelling of Semantic Segmentation Data in the Fashion Domain
2019 (English). In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), 2019, p. 3169-3172. Conference paper, Published paper (Refereed)
Abstract [en]

In this work, we propose a method to generatively model the joint distribution of images and corresponding semantic segmentation masks using generative adversarial networks. We extend the Style-GAN architecture by iteratively growing the network during training, to add new output channels that model the semantic segmentation masks. We train the proposed method on a large dataset of fashion images and our experimental evaluation shows that the model produces samples that are coherent and plausible with semantic segmentation masks that closely match the semantics in the image.

Keywords
image segmentation, generative modelling, semantic segmentation data, fashion domain, corresponding semantic segmentation masks, generative adversarial networks, Style-GAN architecture, fashion images, semantics, Generators, Training, Computer vision, deep learning, artificial neural networks, semantic segmentations, clothing parsing
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-51880 (URN); 10.1109/ICCVW.2019.00391 (DOI)
Conference
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Available from: 2021-01-18. Created: 2021-01-18. Last updated: 2023-06-02. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-5032-4367
