When Deep Learning Meets Cell Image Synthesis


Note: This publication is affiliated with the Faculty of Informatics, not the Faculty of Medicine. The official publication record can be found on muni.cz.
Authors

KOZUBEK Michal

Year of publication: 2020
Type: Article in Periodical (without peer review)
MU Faculty or unit: Faculty of Informatics

Description

Deep learning methods developed by the computer vision community are being successfully adapted, with some delay, for biomedical image analysis and synthesis applications. In cell image synthesis, too, deep learning has brought significant improvements in the quality of generated results. The typical task is to generate isolated cell images based on training examples with cropped, centered, and aligned individual cells. While early attempts to use generative adversarial networks (GANs) without any object detection or segmentation had limited capabilities, the recent article by Scalbert et al. [1] has shown that significant improvement can be obtained by splitting the task into (1) learning and generating object (cell and/or nucleus) shapes based on image segmentation, and (2) learning and generating the texture separately for each segment type, including the background, using so-called style transfer.
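The two-stage decomposition described above can be sketched schematically as follows. This is a minimal illustration, not the method of Scalbert et al.: random placeholders stand in for the trained GAN shape generator (stage 1) and the style-transfer texture network (stage 2), and all function names, sizes, and texture statistics are illustrative assumptions.

```python
import numpy as np

def generate_shape_mask(rng, size=64):
    """Stage 1 placeholder: in the two-stage pipeline this would be a GAN
    trained on segmentation masks; here we simply draw a random ellipse
    as the cell shape so the pipeline structure is visible."""
    yy, xx = np.mgrid[0:size, 0:size]
    cy, cx = rng.uniform(size * 0.4, size * 0.6, 2)   # random center
    ry, rx = rng.uniform(size * 0.2, size * 0.35, 2)  # random radii
    inside = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
    return inside.astype(np.uint8)  # 0 = background, 1 = cell

def texture_segments(mask, rng):
    """Stage 2 placeholder: texture is synthesized separately per segment
    class (style transfer in the article); here each class just receives
    noise with a class-specific mean intensity."""
    img = np.empty(mask.shape, dtype=np.float32)
    for label, mean in [(0, 0.1), (1, 0.7)]:  # assumed class intensities
        sel = mask == label
        img[sel] = rng.normal(mean, 0.05, sel.sum())
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
mask = generate_shape_mask(rng)     # stage 1: shape
image = texture_segments(mask, rng) # stage 2: per-segment texture
```

The key point the sketch makes is structural: the shape model and the texture model are independent, so each can be trained and improved separately, and the same mask can be re-textured in different styles.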