Segmentation of large volumes in lightsheet microscopy
Extracting and quantifying biologically relevant information from lightsheet microscopy images is challenging and requires the segmentation of varied cellular structures. This is a routine part of our brain-imaging processing pipeline: we have to segment vessels, neurons, microglial processes and more. Deep learning has emerged as the clear winner in these computer vision tasks, but 3D segmentation remains challenging. We would like to optimize our segmentation pipeline and improve upon existing methods to build a scalable workflow for labeling and segmenting large images. The project involves handling big datasets in Python with established tools, with room for individual freedom.
Similar work: CellSeg3D: self-supervised 3D cell segmentation for light-sheet microscopy
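To give a flavor of the kind of scalable workflow we have in mind, the sketch below runs a simple smoothing-and-thresholding segmentation chunk by chunk with dask, so the full volume never has to fit in memory. The filter, threshold and synthetic data are illustrative placeholders, not our actual pipeline; real data would be opened lazily from zarr/HDF5.

```python
import numpy as np
import dask.array as da
from scipy import ndimage as ndi

def segment_chunk(block, sigma=2.0, thresh=0.5):
    """Smooth a chunk and threshold it; the overlap below avoids seam artifacts."""
    smoothed = ndi.gaussian_filter(block.astype(np.float32), sigma)
    return (smoothed > thresh).astype(np.uint8)

# Synthetic stand-in for a large lightsheet volume.
rng = np.random.default_rng(0)
volume = np.zeros((64, 128, 128), dtype=np.float32)
volume[20:40, 40:80, 40:80] = 1.0           # one bright "cell"
volume += rng.normal(0, 0.1, volume.shape)  # imaging noise

lazy = da.from_array(volume, chunks=(32, 64, 64))
# depth=8 matches the Gaussian kernel radius (truncate=4 * sigma=2),
# so per-chunk results agree with filtering the whole volume.
mask = lazy.map_overlap(segment_chunk, depth=8, dtype=np.uint8)
labels, n = ndi.label(mask.compute())       # connected components
print(n)  # -> 1 object recovered
```

The same pattern scales to arbitrarily large volumes by swapping the synthetic array for a lazily opened on-disk store and the toy filter for a learned model.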
Neuron tracing for intracortical connectomics
We are about to acquire very cool datasets for cortical tracing of neurites in human tissue. We want to segment and label individual neurons across millimeters of tissue, while following their axons across layers and regions. This is a subproject of the segmentation tasks described above, but with peculiar difficulties inherent to following an axon over a large distance.
Multi-view deconvolution for lightsheet imaging
Our lightsheet microscope produces beautiful 3D volumes of human brain tissue at cellular resolution, allowing the study of cell interactions across entire cortical regions. We would like to push our imaging capabilities even further and image cell processes such as neurites and microglial morphologies, but these advances will require computational improvements to our pipeline. In particular, the resolution of traditional microscopes is not isotropic but stretched along the imaging depth, which hinders downstream processing tasks. The low axial resolution can be improved by imaging the same sample from a rotated view. The resulting images then need to be merged by joint deconvolution, which can become unfeasible for large volumes. We would like to apply the state of the art in dual-view imaging and further optimize existing algorithms.
The project will involve Python image processing and lazy-computation libraries such as napari and dask. A review of (multiview) deconvolution will be the starting point. We could then aim either for a C++ optimization of existing algorithms or for the optimization of a deep learning pipeline.
To know more: Rapid image deconvolution and multiview fusion for optical microscopy | Nature Biotechnology
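To illustrate what joint deconvolution means in practice, here is a minimal sequential Richardson-Lucy fusion of two views on a toy 2D image, assuming anisotropic Gaussian PSFs (Gaussians are symmetric, so the flipped-PSF correlation in the RL correction step is the same filter). Real multiview pipelines use measured PSFs, view registration, and far larger volumes; this is only a sketch of the update rule.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiview_rl(views, sigmas, n_iter=25, eps=1e-7):
    """Sequential Richardson-Lucy update cycling over all views.
    Each view is modelled as the estimate blurred by its own
    anisotropic Gaussian PSF (sigmas[v])."""
    est = np.full_like(views[0], views[0].mean())
    for _ in range(n_iter):
        for img, s in zip(views, sigmas):
            blurred = gaussian_filter(est, s)
            ratio = img / (blurred + eps)        # data / model
            est = est * gaussian_filter(ratio, s)  # multiplicative update
    return est

# Toy ground truth: a point-like source.
truth = np.zeros((31, 31)); truth[15, 15] = 100.0
# Two "rotated" views: strong axial blur along different axes.
view_a = gaussian_filter(truth, (4.0, 1.0))   # blurred along axis 0
view_b = gaussian_filter(truth, (1.0, 4.0))   # blurred along axis 1
fused = multiview_rl([view_a, view_b], [(4.0, 1.0), (1.0, 4.0)])
```

Because each view is sharp along the axis where the other is blurred, the fused estimate recovers a tighter spot than either view alone; the cited Nature Biotechnology paper shows how to accelerate exactly this kind of iteration.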
Denoising and contrast equalization for deep imaging
Deep-tissue lightsheet imaging suffers from aberrations induced by the imperfect transparency of cleared brains. This results in a gradient of image quality and signal-to-noise ratio (SNR) towards the deeper layers. We would like to explore and apply denoising algorithms to improve image quality and downstream segmentation tasks.
This project involves Python image processing and deep learning libraries. A quick review of state-of-the-art denoising methods for lightsheet imaging will be followed by their implementation in our existing pipeline.
Example work: Distortion Correction and Denoising of Light Sheet Fluorescence Images
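As a very simple baseline for equalizing contrast along depth, the sketch below rescales each z-slice by its own intensity percentiles, compensating the signal fall-off in deep slices. The exponential decay model and the parameters are illustrative only; real cleared-tissue data would additionally call for proper denoising.

```python
import numpy as np

def equalize_depth(volume, low=1.0, high=99.0):
    """Per-slice percentile normalisation along the depth (z) axis.
    Maps each slice's [low, high] percentile range to [0, 1]."""
    out = np.empty_like(volume, dtype=np.float32)
    for z, sl in enumerate(volume):
        lo, hi = np.percentile(sl, [low, high])
        out[z] = np.clip((sl - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

# Synthetic stack with exponential signal decay along depth.
rng = np.random.default_rng(1)
depth_decay = np.exp(-np.arange(32) / 10.0)[:, None, None]
stack = (depth_decay * rng.uniform(0, 1, (32, 64, 64))).astype(np.float32)
eq = equalize_depth(stack)
```

After equalization the deepest and shallowest slices have comparable brightness, which is the property that matters for downstream segmentation; more sophisticated methods (e.g. learned denoisers) would replace the per-slice rescaling.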
Smart microscopy control software
Smart microscopy would greatly improve our acquisition routines by autonomously choosing targets to zoom in on. It would also speed up imaging and reduce the storage requirements of big lightsheet volumes. Implementing such a system on our ct-dSPIM requires low-level communication with the stage, camera and laser controllers. This project aims to implement and contribute to navigate, a new open-source Python library for smart microscopy control.
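The control flow of such a smart-acquisition loop can be sketched as follows. The `Stage` and `Camera` classes here are hypothetical stubs, not the navigate API; in the real project the hardware would be driven through navigate, and target selection would be far smarter than a brightness threshold.

```python
import numpy as np

# Hypothetical hardware wrappers, for illustration only.
class Stage:
    def move_to(self, pos):
        self.pos = pos          # re-position for the zoom-in

class Camera:
    def snap(self, rng):
        return rng.random((64, 64))  # stand-in for an acquired frame

def find_targets(overview, q=0.95, max_targets=3):
    """Pick the brightest pixels of a low-res overview as zoom targets."""
    ys, xs = np.where(overview > np.quantile(overview, q))
    return list(zip(ys.tolist(), xs.tolist()))[:max_targets]

rng = np.random.default_rng(2)
stage, cam = Stage(), Camera()
overview = cam.snap(rng)              # fast low-resolution scan
for pos in find_targets(overview):    # autonomous target choice
    stage.move_to(pos)
    detail = cam.snap(rng)            # high-resolution acquisition
```

Only the chosen regions are imaged at high resolution, which is where the speed and storage savings come from.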