DeepFRAP: Fast fluorescence recovery after photobleaching data analysis using deep neural networks
RISE Research Institutes of Sweden.
RISE Research Institutes of Sweden, Bioeconomy and Health, Agriculture and Food. ORCID iD: 0000-0002-3356-0894
RISE Research Institutes of Sweden, Bioeconomy and Health, Agriculture and Food; Chalmers University of Technology, Sweden. ORCID iD: 0000-0001-9979-5488
RISE Research Institutes of Sweden, Bioeconomy and Health, Agriculture and Food; Chalmers University of Technology, Sweden. ORCID iD: 0000-0002-5956-9934
2021 (English). In: Journal of Microscopy, ISSN 0022-2720, E-ISSN 1365-2818, Vol. 282, no. 2, p. 146-161. Article in journal (Refereed). Published.
Abstract [en]

Conventional analysis of fluorescence recovery after photobleaching (FRAP) data for diffusion coefficient estimation typically involves fitting an analytical or numerical FRAP model to the recovery curve data using non-linear least squares. Depending on the model, this can be time consuming, especially for batch analysis of large numbers of data sets and if multiple initial guesses for the parameter vector are used to ensure convergence. In this work, we develop a completely new approach, DeepFRAP, utilizing machine learning for parameter estimation in FRAP. From a numerical FRAP model developed in previous work, we generate a very large set of simulated recovery curve data with realistic noise levels. The data are used for training different deep neural network regression models for prediction of several parameters, most importantly the diffusion coefficient. The neural networks are extremely fast and can estimate the parameters orders of magnitude faster than least squares. The performance of the neural network estimation framework is compared to conventional least squares estimation on simulated data, and found to be strikingly similar. Also, a simple experimental validation is performed, demonstrating excellent agreement between the two methods. We make the data and code used publicly available to facilitate further development of machine learning-based estimation in FRAP.

Lay description: Fluorescence recovery after photobleaching (FRAP) is one of the most frequently used methods for microscopy-based diffusion measurements, broadly used in materials science, pharmaceutics, food science and cell biology. In a FRAP experiment, a laser is used to photobleach fluorescent particles in a region. By analysing the recovery of the fluorescence intensity due to the diffusion of still-fluorescent particles, the diffusion coefficient and other parameters can be estimated. Typically, a confocal laser scanning microscope (CLSM) is used to image the time evolution of the recovery, and a model is fit using least squares to obtain parameter estimates. In this work, we introduce a new, fast and accurate method for analysis of FRAP data. The new method is based on using artificial neural networks to predict parameter values, such as the diffusion coefficient, effectively circumventing classical least squares fitting. This leads to a dramatic speed-up, especially noticeable when analysing large numbers of FRAP data sets, while still producing results in excellent agreement with least squares. Further, the neural network estimates can be used as very good initial guesses for least squares estimation, making the least squares optimization converge much faster than it otherwise would. This allows, for example, diffusion coefficients to be obtained as quickly as possible, with minimal time spent on data analysis. In this fashion, the proposed method facilitates efficient use of the experimentalist's time, which is the main motivation for our approach. The concept is demonstrated on pure diffusion; however, it can easily be extended to the case of diffusion with binding. The concept is likely to be useful in all application areas of FRAP, including diffusion in cells, gels and solutions. © 2020 The Authors.
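The conventional baseline the abstract describes, fitting a recovery model to intensity data by non-linear least squares, can be sketched as follows. This is not the paper's numerical model or code; as a stand-in, it uses the classical closed-form Soumpasis recovery model for a circular bleach spot, f(t) = exp(-2τ/t)·(I₀(2τ/t) + I₁(2τ/t)) with τ = w²/(4D), and simulated noisy data. The spot radius `w`, noise level and parameter values are illustrative assumptions.

```python
# Sketch (not the authors' code): least-squares FRAP fitting with a
# closed-form recovery model, as a stand-in for the paper's numerical model.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import i0e, i1e  # exponentially scaled Bessel functions


def soumpasis(t, tau):
    """Soumpasis recovery curve; tau = w^2 / (4*D)."""
    x = 2.0 * tau / t
    # exp(-x) * (I0(x) + I1(x)), computed stably via scaled Bessels
    return i0e(x) + i1e(x)


w = 5.0                                  # bleach-spot radius (assumed known)
D_true = 1.0                             # "true" diffusion coefficient
rng = np.random.default_rng(0)
t = np.linspace(0.1, 30.0, 200)          # post-bleach time points
data = soumpasis(t, w**2 / (4.0 * D_true)) + rng.normal(0.0, 0.01, t.size)

# Non-linear least squares: fit tau, then convert back to D.
(tau_hat,), _ = curve_fit(soumpasis, t, data, p0=[1.0])
D_hat = w**2 / (4.0 * tau_hat)
print(f"estimated D = {D_hat:.3f}")
```

DeepFRAP replaces this per-curve optimization with a neural network that maps a recovery curve directly to parameter estimates in a single forward pass, which is what yields the orders-of-magnitude speed-up for batch analysis; alternatively, as noted above, the network output can seed `p0` so the least-squares fit converges in far fewer iterations.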

Place, publisher, year, edition, pages
Blackwell Publishing Ltd, 2021. Vol. 282, no. 2, p. 146-161
Keywords [en]
confocal laser scanning microscopy, deep learning, deep neural network, diffusion, fluorescence recovery after photobleaching, machine learning, regression
National Category
Natural Sciences
Identifiers
URN: urn:nbn:se:ri:diva-52079
DOI: 10.1111/jmi.12989
Scopus ID: 2-s2.0-85099352468
OAI: oai:DiVA.org:ri-52079
DiVA id: diva2:1522697
Note

Funding details: 2019-01295; Funding details: Vetenskapsrådet, VR, 2016-03809; Funding details: Stiftelsen för Strategisk Forskning, SSF; Funding text 1: The financial support of the Swedish Research Council for Sustainable Development (grant number 2019-01295), the Swedish Research Council (grant number 2016-03809), and the Swedish Foundation for Strategic Research (the project 'Material structures seen through microscopes and statistics') is acknowledged. The computations were in part performed on resources at Chalmers Centre for Computational Science and Engineering (C3SE) provided by the Swedish National Infrastructure for Computing (SNIC). A GPU used for part of this research was donated by the NVIDIA Corporation.

Available from: 2021-01-26. Created: 2021-01-26. Last updated: 2023-10-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Krona, Annika; Loren, Niklas; Röding, Magnus
