Fuse My Cells: From Single View to Fused Multiview Lightsheet Imaging Challenge 2025

Part of ISBI 2025

🔎 About
France-BioImaging's Fuse My Cells challenge aims to advance deep learning methods for 3D image-to-image fusion in biology and microscopy.
In multi-view microscopy, creating a fused 3D image stack requires capturing several views of the sample (typically 2 to 4) from different angles, aligning them in the same spatial reference frame, and fusing them into a single 3D image. The fused image compensates for point spread function anisotropy and for the signal degradation of acquisitions deep inside the sample, at the expense of increased photon exposure, which damages the sample.
The main objective of the Fuse My Cells challenge is to predict a fused 3D image from only one 3D view, offering a practical answer to the limitations of current microscopy techniques: improving image quality, extending the duration of live imaging, saving photon budget, and facilitating image analysis.
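To make the classical pipeline concrete, here is a minimal Python sketch of weighted multiview fusion, assuming the views have already been registered into a common reference frame; the local-contrast weighting, array shapes, and function names are illustrative assumptions, not the challenge's reference pipeline.

```python
# Minimal sketch of classical multiview fusion (not the challenge baseline).
# Assumes views are already registered; weights each voxel by local contrast,
# a common proxy for which view carries the most signal at a given depth.
import numpy as np
from scipy import ndimage


def local_contrast(view, sigma=4.0):
    """Local variance of the view, used as a per-voxel quality weight."""
    mean = ndimage.gaussian_filter(view, sigma)
    sq_mean = ndimage.gaussian_filter(view ** 2, sigma)
    return np.clip(sq_mean - mean ** 2, 0.0, None)


def fuse_views(views, eps=1e-6):
    """Contrast-weighted average of registered 3D views (Z, Y, X arrays)."""
    weights = [local_contrast(v) for v in views]
    total = sum(weights) + eps
    return sum(w * v for w, v in zip(weights, views)) / total


# Example with two synthetic, already-registered views.
rng = np.random.default_rng(0)
view_a = rng.random((32, 64, 64)).astype(np.float32)
view_b = rng.random((32, 64, 64)).astype(np.float32)
fused = fuse_views([view_a, view_b])
print(fused.shape)  # (32, 64, 64)
```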
🎯 Task

💡 Context
🔬 Optical (photonic) microscopy is commonly used to image live organisms. However, its invasive nature can cause light-induced damage, such as photobleaching, where fluorophores lose their ability to emit light, and phototoxicity, which harms living cells. These problems are accentuated when imaging biological systems over long periods, as prolonged exposure can compromise both data accuracy and quality.
🔬 Confocal microscopy is widely used for high-resolution 3D imaging. Laser scanning confocal, spinning disk, and Airyscan microscopes allow efficient z-sectioning (the ability to suppress fluorescence signal from outside the focal plane) and near-diffraction-limited resolution. Yet laser scanning excitation exposes the sample to a high photon flux. This makes long-term imaging of living cells a challenge for confocal microscopy, as the cumulative light exposure can negatively affect cell health and reduce image quality in prolonged experiments.
🔬 Lightsheet microscopy, such as Single Plane Illumination Microscopy (SPIM), is an alternative 3D microscopy technique that can overcome some of confocal microscopy's issues. In these microscopes, the excitation optical path is perpendicular to the detection axis and aligned with the detection focal plane, and a full camera chip replaces the point detector. This way, only the plane of interest is illuminated at each acquisition, which reduces the total light exposure of the sample and saves photon budget. Furthermore, some SPIM setups allow imaging of samples usually too big for classical inverted microscope stands. In particular, the sample is usually hung in front of the detection objective, allowing it to rotate freely around the vertical axis.
However, despite these advantages, lightsheet microscopy still faces challenges, especially when imaging deep within a sample (beyond 100 µm). As in confocal microscopy, lightsheet techniques suffer from an anisotropic point spread function (PSF), the microscope's response to a point source: resolution is higher laterally (in the camera focal plane) than axially (along the detection optical axis). As the imaging depth increases, the PSF degrades and the signal-to-background ratio decreases, leading to a loss in image quality. Larger samples, such as embryos, can thus pose challenges for uniform illumination and resolution across the entire volume, while optical aberrations, like astigmatism, may affect the clarity of the images, especially when attempting to capture fine 3D details.
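A toy numerical illustration of this anisotropy, assuming a simple Gaussian PSF model (the sigma values below are arbitrary):

```python
# Toy illustration of PSF anisotropy with a Gaussian PSF model: blur is
# several times stronger along the detection axis (z) than laterally (y, x),
# so a point source renders as a "bead" elongated along z, which is exactly
# the axial resolution loss described above.
import numpy as np
from scipy import ndimage

volume = np.zeros((64, 64, 64), dtype=np.float32)
volume[32, 32, 32] = 1.0  # a point source

# sigma given as (z, y, x): axial blur 4x the lateral blur.
blurred = ndimage.gaussian_filter(volume, sigma=(6.0, 1.5, 1.5))

# Compare the half-maximum widths along z versus along x through the source.
axial = blurred[:, 32, 32]
lateral = blurred[32, 32, :]
print((axial > axial.max() / 2).sum(), (lateral > lateral.max() / 2).sum())
```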
🔬 A solution to these limitations is multiview lightsheet microscopy combined with multiview fusion imaging. This approach improves 3D resolution by combining multiple views of the sample acquired from different angles, resulting in higher-quality images with less distortion and shadowing, and facilitating subsequent analyses such as segmentation, tracking, and quantification of biological structures.
However, multiview lightsheet microscopy introduces additional complexities: sample preparation is more involved, and the photon budget is divided across all angles, all channels, and the whole time course, which again favors photobleaching and phototoxicity. Imaging living cells thus becomes a challenge: the longer the acquisition runs, the more photons the sample has absorbed, and the less photon budget remains for the following time points, each further split between views and channels.
✨ Our solution: Fuse My Cells Challenge
The Fuse My Cells France-BioImaging challenge proposes a solution: to construct a comprehensive database that empowers competitors to develop methods for predicting fused multiview 3D images from only a single view.
These methods will not only retain all the advantages of multi-view light sheet microscopy, but also minimize the need for additional views and save photon budget. This will open up new possibilities in imaging experiments, such as adding new fluorescent channels, extending the duration of live imaging, or exploring more complex studies with a reduced chance of photobleaching or phototoxicity.
After a first challenge (Light My Cells, presented at ISBI 2024 in Athens), the ultimate goal of this second France-BioImaging challenge, Fuse My Cells, is to produce robust, open-source methods capable of handling the large variability of biological imaging data. We built the extensive database thanks to the structuring role of the France-BioImaging national infrastructure, which federates 23 imaging sites across France.
📰 State of the Art
Recent advancements in fluorescence microscopy have expanded our ability to observe biological processes in 3D. Yet challenges remain: the images captured by modern cameras are still degraded by noise, which deteriorates visual image quality. To enhance the quality of these bioimages, two classical families of methods cover the restoration of 3D images: denoising and deconvolution.
Denoising refers to the process of removing noise from an image in order to enhance its quality, eliminating unwanted artifacts and improving clarity while preserving details.
Traditional methods use well-defined mathematical algorithms to remove noise, such as the Block-Matching and 3D filtering (BM3D) method introduced by Dabov et al. (2007), which groups similar patches, denoises them in a transformed space (using wavelet and Wiener filtering), and combines them through weighted averaging to produce a clean image, with some parameters varying with the noise level. Meiniel et al. (2018) found BM3D to remain the most widely used after reviewing a selection of 11 other state-of-the-art methods, before introducing a new sparsity-based algorithm.
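For readers who want to experiment, here is a hedged sketch using scikit-image's non-local means, a readily available patch-based method in the same family as BM3D (BM3D itself requires a dedicated implementation); the parameter values are illustrative.

```python
# Patch-based denoising sketch with non-local means (a stand-in for BM3D):
# similar patches are averaged to suppress noise while preserving structure.
import numpy as np
from skimage import data
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.util import random_noise

clean = data.camera() / 255.0            # a standard test image in [0, 1]
noisy = random_noise(clean, var=0.01)    # add synthetic Gaussian noise

sigma_est = estimate_sigma(noisy)        # rough estimate of the noise level
denoised = denoise_nl_means(
    noisy,
    h=1.15 * sigma_est,                  # filtering strength vs. noise level
    patch_size=5,                        # size of the patches being compared
    patch_distance=6,                    # search radius for similar patches
    fast_mode=True,
)
# The denoised image should be closer to the clean one than the noisy input.
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())
```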
With the increasing efficiency of deep learning methods in microscopy and biology, Weigert et al. (2018) introduced content-aware image restoration (CARE), a deep learning-based method that enhances resolution and image quality. CARE performs denoising when trained on noisy input images paired with high signal-to-noise-ratio targets, and can then extend the range of observable phenomena with near-isotropic resolution and higher frame rates using fewer photons, a clear advantage for long-term imaging applications.
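The actual CARE implementation is distributed in the CSBDeep package; below is only a hedged, minimal PyTorch sketch of the same supervised idea, with a toy network and synthetic paired patches standing in for a real 3D U-Net and real training data.

```python
# CARE-style supervised restoration, heavily simplified: train a small 3D
# network on paired (low-SNR input, high-SNR target) patches.
import torch
import torch.nn as nn

model = nn.Sequential(                 # toy stand-in for a 3D U-Net
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                  # an MAE-style restoration loss

# Synthetic paired patches, shaped (batch, channel, z, y, x).
target = torch.rand(4, 1, 16, 32, 32)             # high-SNR "ground truth"
noisy = target + 0.3 * torch.randn_like(target)   # simulated low-SNR input

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), target)
    loss.backward()
    optimizer.step()
print(loss.item())  # should decrease as the network learns the mapping
```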
To denoise images without requiring noiseless reference images, Krull et al. (2019) introduced Noise2Void (N2V), a self-supervised deep learning method. It works by "masking" a random subset of pixels in the noisy image and training the network to predict their values. N2V remains effective when obtaining clean or paired datasets is challenging.
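A hedged sketch of the blind-spot masking at the heart of N2V (shapes, the masking ratio, and the neighbor-replacement rule are simplified; the real implementation also excludes the zero offset when picking a neighbor):

```python
# Noise2Void-style masking: hide a random subset of pixels and train the
# network to predict them, computing the loss only at the masked positions
# so the network cannot simply copy its input.
import torch

def n2v_mask(noisy, ratio=0.01):
    """Return (masked input, boolean mask) for one 2D image (H, W)."""
    h, w = noisy.shape
    n = max(1, int(ratio * h * w))
    ys = torch.randint(0, h, (n,))
    xs = torch.randint(0, w, (n,))
    # Replace each masked pixel with a random nearby value (simplified:
    # the true N2V scheme excludes the pixel's own offset).
    ny = (ys + torch.randint(-2, 3, (n,))).clamp(0, h - 1)
    nx = (xs + torch.randint(-2, 3, (n,))).clamp(0, w - 1)
    masked = noisy.clone()
    masked[ys, xs] = noisy[ny, nx]
    mask = torch.zeros_like(noisy, dtype=torch.bool)
    mask[ys, xs] = True
    return masked, mask

noisy = torch.rand(64, 64)
masked, mask = n2v_mask(noisy)
prediction = masked           # placeholder for model(masked)
loss = ((prediction - noisy) ** 2)[mask].mean()  # loss on blind spots only
print(loss.item())
```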
Like CARE, N2V can be applied to 2D and 3D images whenever an image needs to be improved. For 3D images, however, both methods only enhance visualization along a single acquisition direction, without gains along the optical (z) axis where the system's PSF is widest.
Thanks to multiview image fusion, the Fuse My Cells challenge aims to go one step further by enabling the restoration of 3D images across all 3D perspectives, including orthogonal rotations of the views, to improve the imaging of more complex biological structures.
Deconvolution, unlike denoising, refers to the process of enhancing image clarity and resolution by reducing the blur caused by the extended point spread function, particularly in microscope images. By mathematically reversing this "blurring effect" (modeled as a convolution of the fluorophore distribution with the PSF), deconvolution sharpens the final image, making it more detailed. This method benefits most fluorescence images, including those taken with advanced techniques like confocal and lightsheet microscopy.
While Temerinac-Ott et al. (2011) presented a spatially-variant deconvolution model for SPIM images using the Richardson-Lucy algorithm (Richardson, 1972; Lucy, 1974), an iterative procedure for recovering an underlying image that has been blurred by a known PSF, Preibisch et al. (2013) introduced a Bayesian approach to multi-view deconvolution for light sheet microscopy, which significantly improves processing speed by taking advantage of graphics hardware, making it ideal for large datasets.
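As a concrete reference point, scikit-image ships a single-view Richardson-Lucy implementation; the toy forward model below (a Gaussian PSF and synthetic point sources) is an illustrative assumption:

```python
# Richardson-Lucy deconvolution with a known PSF, on a simulated image.
import numpy as np
from scipy import ndimage
from skimage.restoration import richardson_lucy

# Forward model: point sources convolved with a normalized Gaussian PSF.
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 45] = 1.0
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
psf = ndimage.gaussian_filter(psf, sigma=2.0)
psf /= psf.sum()
observed = ndimage.convolve(truth, psf, mode="constant")

# The iterative multiplicative update re-sharpens the blurred observation.
restored = richardson_lucy(observed, psf, num_iter=30)
print(restored.max() > observed.max())  # True: peaks are restored
```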
Both Toader et al. (2022) and Prigent et al. (2023) explored advanced algorithms for improving image reconstruction and denoising in fluorescence microscopy through primal-dual optimization techniques. While Toader et al. presented a variational model addressing spatially varying blur and mixed noise in light-sheet microscopy, Prigent et al. introduced SPITFIR(e), a flexible method designed to restore 3D images and videos by adapting to various sources of signal degradation and microscopy techniques.
Recently, various deep learning methods have appeared, mostly based on generative adversarial networks (GANs), to enable cross-modality super-resolution in fluorescence microscopy (Wang et al., 2019), to learn unsupervised deconvolution using the known physics of the imaging system (Wijesinghe et al., 2022), or to combine Airy-beam light sheet microscopy with deep learning deconvolution (Stockhausen et al., 2023).
However, these methods have limitations when used with multiview lightsheet microscopes, or when applying or training deconvolution of 3D images across all 3D perspectives. While the above are temporal and spatial restoration methods, one imaging technique has been developed specifically to enhance resolution and clarity: IsoView light-sheet microscopy, introduced by Chhetri et al. (2015), which enables whole-animal imaging with near-isotropic spatial resolution through multiview deconvolution across four orthogonal views. By learning self-similarities across the different views of a sample, the Fuse My Cells challenge will explore new methods to increase the spatial resolution of single-view images along the optical axis, as well as image quality deep inside samples, complementing the above toolset.
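In that spirit, here is a hedged, heavily simplified sketch of multi-view Richardson-Lucy (sequential updates over the views, without the Bayesian derivation or GPU acceleration of Preibisch et al., 2013; the PSFs and shapes are illustrative):

```python
# Multi-view Richardson-Lucy: one shared estimate is refined in turn by
# every registered view and its PSF, so each view compensates for the
# blur direction of the others.
import numpy as np
from scipy import ndimage

def gaussian_psf(shape, sigma):
    """Normalized Gaussian PSF of the given shape and (anisotropic) sigma."""
    psf = np.zeros(shape)
    psf[shape[0] // 2, shape[1] // 2] = 1.0
    psf = ndimage.gaussian_filter(psf, sigma)
    return psf / psf.sum()

def multiview_rl(views, psfs, n_iter=25, eps=1e-8):
    """Fuse registered views by sequential Richardson-Lucy updates."""
    estimate = np.full_like(views[0], views[0].mean())
    for _ in range(n_iter):
        for view, psf in zip(views, psfs):
            blurred = ndimage.convolve(estimate, psf, mode="constant")
            ratio = view / (blurred + eps)
            correction = ndimage.convolve(ratio, psf[::-1, ::-1], mode="constant")
            estimate = estimate * correction
    return estimate

# Two orthogonal views of the same point source, each blurred along one axis.
truth = np.zeros((64, 64))
truth[32, 32] = 1.0
psf_a = gaussian_psf((17, 17), sigma=(4.0, 1.0))  # elongated along axis 0
psf_b = gaussian_psf((17, 17), sigma=(1.0, 4.0))  # elongated along axis 1
views = [ndimage.convolve(truth, p, mode="constant") for p in (psf_a, psf_b)]
fused = multiview_rl(views, [psf_a, psf_b])
print(fused.max() > max(v.max() for v in views))  # fused peak is sharper
```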
Some databases are already available, such as the multi-view light-sheet imaging and tracking dataset that, together with the MaMuT software, revealed the cell lineage of a direct-developing arthropod limb (Wolff et al., 2018).
Yet, to the best of our knowledge, the open-science database, algorithms, and methods envisioned for the Fuse My Cells challenge have no equivalent; they aspire to overcome the current limitations of the state of the art and seek to empower biologists with an accessible, adaptable, and high-quality deep learning image fusion process.
🚀 Prospects
If successful, the Fuse My Cells challenge can lead to robust deep learning methods for image fusion. It will then be possible to use techniques such as domain adaptation or transfer learning to bring these methods to other microscope setups or modalities without multiview capabilities. The resulting gains in image quality and 3D resolution would improve segmentation and tracking, reduce artifacts, and expand the potential of biological imaging across different platforms and experimental setups.