AI: The intelligent approach to live cell imaging

Dr. Mike Woerdemann
Olympus Soft Imaging Solutions GmbH, Johann-Krane-Weg 39, 48149 Münster, Germany. 
 
Biography
Mike Woerdemann received his PhD for work on optical tweezers based on structured light fields. He worked as an application specialist for Olympus high-content screening systems for many years, gaining in-depth practical knowledge of the field. Mike is now the product manager responsible for the high-content screening and deep learning products developed at Olympus Soft Imaging Solutions.
 
Abstract
AI-based self-learning microscopy has the potential to make high-throughput image analysis easier and less time-consuming. We used a deep neural network to identify and segment cell nuclei under challenging imaging conditions: (1) labeled with DAPI and excited with ultra-low light exposure and (2) unlabeled, using brightfield images alone. Results show that, thanks to the self-learning microscopy approach, these tasks could be carried out quickly, with high precision and without requiring extensive human input.
 
Corresponding author
Dr. Mike Woerdemann
Product Manager
Life Science Research
+49 251 79 800 0

Introduction

Artificial intelligence (AI) is making fast inroads into many scientific fields – and the world of microscopy is no exception. As the capabilities of microscopes increase, so do the amount of data generated and the opportunities to use microscopy images to push the boundaries of our understanding.

The power of microscopy to support or disprove scientific hypotheses has long been limited by scale; the time needed to capture, quantify and analyze large numbers of images was often prohibitive. High-content screening stations have since removed barriers to increasing the scale of experiments by several orders of magnitude. More recently, the advent of deep-learning-based AI has been shown to enable the same leap forward in the analysis phase of a study.

In live cell imaging, the challenge for AI-based approaches is not just to detect cells, but also to retrieve as much information as possible from each cell without requiring any human input. By overcoming this challenge, AI can streamline data analysis significantly, making it possible to expand the size of a study without analysis time spiraling out of control.

Two examples of life science microscopy applications where AI-based image analysis holds great promise are imaging in low light conditions and imaging without fluorescent labels. Here, we describe the rationale, workflow and results of our work in these areas, demonstrating how AI benefits research in these applications and delivers robust, reliable data.

Reliable analysis of fluorescently labeled cells in ultra-low light

Rationale

In modern life science microscopy, the use of fluorescent labels is invaluable. However, a key concern for many studies that use fluorescence is the effect that the excitation light has on the cell itself. Even when processes such as photodamage or phototoxicity do not directly affect cell viability, there may still be undesired effects that are not directly observed. These effects are an important reason to minimize light exposure wherever possible – particularly in long-term studies.

The central trade-off is of course signal intensity. A lower fluorescence signal decreases the signal-to-noise ratio (SNR), making it difficult to carry out quantitative analyses on the images. The aim of this study was to investigate whether AI can lower the light exposure required for cell image analysis without compromising reliability.

Method

To test the capabilities of our AI-based approach, we imaged a full 96-well plate with fixed, DAPI-labeled HeLa cells using the Olympus scanR high-content screening (HCS) station. We took images at nine positions per well under optimal conditions and three sub-optimal conditions, with the relative light exposure given as a percentage of the optimal condition:

- 100%: 200 ms exposure time, 100% LED excitation light intensity (optimal reference)
- 2%: 4 ms exposure time, 100% LED excitation light intensity
- 0.2%: 4 ms exposure time, 10% LED excitation light intensity
- 0.05%: 4 ms exposure time, 2.5% LED excitation light intensity
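
Each relative-exposure label above is simply the product of exposure time and LED intensity, normalized to the optimal condition (200 ms at 100% intensity). The following minimal Python sketch reproduces this arithmetic from the values in the list; the data layout is purely illustrative:

    # Relative light exposure = exposure time x LED intensity, normalized to the
    # optimal condition (200 ms at 100% LED intensity).
    conditions = {
        "optimal": (200, 1.000),   # (exposure time in ms, LED intensity fraction)
        "2%":      (4,   1.000),
        "0.2%":    (4,   0.100),
        "0.05%":   (4,   0.025),
    }

    reference = conditions["optimal"][0] * conditions["optimal"][1]
    for name, (t_ms, led) in conditions.items():
        relative = t_ms * led / reference
        print(f"{name:>8}: {relative:.2%} of the optimal light exposure")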

We analyzed the resulting images with the scanR HCS software, which uses a deep neural network based on a convolutional neural network (CNN) architecture. Such networks are highly adaptable to challenging image analysis tasks, making them well suited for quantitative analysis of nuclei under low light conditions.

During the initial training phase, the network is fed with pairs of example images and object masks in which the objects of interest are annotated – so-called ‘ground truth’ data. The network then learns automatically how to predict the desired parameters, for example the positions and contours of cells or other structures.
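
As an illustration of this training scheme (not the scanR network itself, whose architecture is not described here), the sketch below shows a deliberately small convolutional segmentation network trained on image/mask pairs; all layer sizes and tensor shapes are assumptions:

    import torch
    import torch.nn as nn

    # Deliberately small convolutional segmentation network (illustrative only).
    class TinySegNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),  # per-pixel logit: nucleus vs. background
            )

        def forward(self, x):
            return self.net(x)

    model = TinySegNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Stand-ins for the ground-truth pairs described above: input images and
    # binary object masks, shape (batch, channels, height, width).
    images = torch.rand(8, 1, 256, 256)
    masks = (torch.rand(8, 1, 256, 256) > 0.5).float()

    for step in range(10):  # training loop, heavily shortened
        optimizer.zero_grad()
        loss = loss_fn(model(images), masks)
        loss.backward()
        optimizer.step()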

In other machine learning approaches, ground truth annotations such as cell boundaries often need to be provided by humans. As neural networks need a large amount of training data to work reliably, this can be a time-consuming step. The scanR HCS software uses so-called self-learning microscopy, whereby the ground truth data is generated automatically. After training, the network can be applied to new images and predicts the object masks with very high precision. In this study, we used 90 wells for training and 6 for validation. Training was done on a PC with an NVIDIA GeForce GTX 1070 GPU.
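
One plausible way to picture this automatic ground-truth generation (the exact procedure used by the scanR software is not described here) is to derive the object masks from the high-SNR 100% exposure images by classical thresholding, so that they can serve as training targets for the low-exposure images of the same positions:

    import numpy as np
    from skimage import filters, measure, morphology

    def mask_from_reference(dapi_reference: np.ndarray) -> np.ndarray:
        """Derive a nucleus mask from a high-SNR reference image.

        Illustrative sketch only: Otsu thresholding plus small-object removal
        stands in for whatever procedure the software actually uses.
        """
        binary = dapi_reference > filters.threshold_otsu(dapi_reference)
        binary = morphology.remove_small_objects(binary, min_size=50)
        return measure.label(binary)  # each nucleus receives a unique integer label

    # Each (low-exposure image, reference-derived mask) pair then becomes one
    # training example, with no manual annotation required.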

We validated the results of the neural network by analyzing two wells per illumination condition. We counted the nuclei, measured the mean intensity of the DAPI signal, and determined the positions, areas and contours of the nuclei. We also plotted the results in an area vs. intensity scatter plot to visualize two distinct populations: cells in the G1 stage (single DNA content) and the G2 stage (double DNA content) of the cell cycle.
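
Expressed with a generic image-analysis toolkit rather than the scanR software itself, these per-nucleus measurements and the area vs. intensity plot could look roughly as follows (all variable names and the stand-in data are assumptions):

    import matplotlib.pyplot as plt
    import numpy as np
    from skimage import measure

    # Stand-ins: in the real workflow 'labels' is the network's predicted nucleus
    # mask for one validation image and 'dapi' the corresponding intensity image.
    dapi = np.random.rand(512, 512)
    labels = measure.label(dapi > 0.95)

    props = measure.regionprops(labels, intensity_image=dapi)
    areas = [p.area for p in props]                  # nucleus area in pixels
    intensities = [p.mean_intensity for p in props]  # mean DAPI intensity per nucleus
    print(f"nuclei counted: {len(props)}")

    # G1 and G2 nuclei form two clusters because G2 nuclei contain double the DNA
    # and therefore give a stronger DAPI signal.
    plt.scatter(areas, intensities, s=2)
    plt.xlabel("nucleus area (pixels)")
    plt.ylabel("mean DAPI intensity (a.u.)")
    plt.show()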

Results and discussion

Figure 1 shows the effect of reducing the light exposure on image quality. As the light exposure goes down, the SNR also drops, nearly to the point where the signal strength reaches the detection limit of the camera. After training the neural network, we applied it to these images; the resulting contours are shown in Figure 2. Although the network detected every cell in this example, the cell contours are significantly different at the lowest (0.05%) light exposure level. This indicates that the light exposure limit of this approach lies between 0.2% and 0.05% of the optimal exposure.



Figure 1.
 From left to right: DAPI-labeled nuclei of HeLa cells at 100%, 2%, 0.2% and 0.05% exposure (contrast optimized for each exposure level for visualization only)



Figure 2.
 Contours given by the neural network in images with light exposure levels of (a) 100%, (b) 2%, (c) 0.2% and (d) 0.05% (contrast optimized for each exposure level for visualization only).

To further validate the output of the neural network at different light exposure levels, we used the image data to determine the percentage of cells in the G2 stage of the cell cycle. In these cells, the double DNA content creates a stronger DAPI signal. When plotting the area of the nucleus against the mean intensity of the DAPI signal, the results under optimal conditions (Figure 3a) show two populations: cells in G1 (bottom left) and G2 (top right).

Carrying out the same analysis at 2% exposure (Figure 3b) dramatically reduces the dynamic range. However, after rescaling the plot, the same distinct populations are visible in a similar ratio (Table 1). Even when this method was applied to images exposed at 0.2% (Figure 3c), the percentage of cells in G2 remained nearly the same.
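
The gating behind the G2 percentages in Table 1 is not spelled out here; a hedged sketch of one possible gate, using a per-nucleus DNA-content proxy and a threshold placed between the two populations, could look like this:

    import numpy as np

    # 'areas' and 'intensities' are the per-nucleus values from the earlier sketch;
    # area * mean intensity serves as a rough proxy for integrated DNA content.
    dna_content = np.asarray(areas) * np.asarray(intensities)

    # Assumed gate: place a threshold halfway between a rough G1 reference (lower
    # quartile) and a rough G2 reference (90th percentile). This is illustrative,
    # not the gating actually used for Table 1.
    gate = (np.percentile(dna_content, 25) + np.percentile(dna_content, 90)) / 2

    g2_fraction = np.mean(dna_content > gate)
    print(f"cells in G2: {g2_fraction:.1%}")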



Table 1.
 Parameters of the cell population determined by the neural network based on different exposure levels.

However, at the lowest exposure level (0.05%), the dynamic range is reduced to the point where the two distinct populations are no longer clearly visible (Figure 3d). As a result, the neural network overestimates the total cell number by around 4% and the percentage of cells in G2 by around 1%, indicating the limit for high-precision results.

These results clearly show how an AI-based approach can generate reliable, quantifiable data from images taken under challenging conditions. Use of a self-learning microscopy approach also means that training the network is fast and does not require a large amount of human input. As a result, AI-based image analysis is well suited to cope with new experimental conditions and parameters, and with large amounts of data, without a large increase in time spent on analysis.



Figure 3.
 Intensity vs. area plots of images taken at (a) optimal SNR (100% light exposure) and (b-d) reduced SNR (2%, 0.2%, 0.05% light exposure, respectively).

Robust label-free detection and segmentation of cell nuclei in microwell plates

Rationale

Switching from fluorescent dyes to label-free detection can bring important benefits to live cell imaging. Aside from the obvious simplicity in sample preparation, label-free analysis reduces phototoxicity and saves a fluorescence channel for other markers. It can also improve cell viability by avoiding stress from transfection or chemical markers. Although many fluorescent labels are irreplaceable, basic identification, segmentation and quantification of cells do not require fluorescent labeling.

Low signal intensity and the resulting high noise levels are the main barriers to producing high-quality data. In live cell imaging, constraints such as particles in suspension, condensation and shading at the edges of wells also make reliable imaging more difficult. AI can fulfill a key role here, as it can process and analyze large amounts of data without requiring much human input. In this study, we examine the application of the AI-based self-learning microscopy concept to segment cell nuclei in unstained brightfield images.

Method

To test the ability of AI to generate robust data from brightfield images that can rival fluorescence, we studied fixed, submerged HeLa cells in a 96-well plate. We took 40 fluorescence images per well using histone H2B-GFP as a nuclear marker. For brightfield imaging, we imaged the same locations three times with a step size of 6 µm to include focused as well as defocused images. All imaging was done on an Olympus scanR HCS station with a 10x super apochromat objective (NA = 0.6).

For training the neural network, we used fluorescence and brightfield images from five wells. The network uses the fluorescence images as ground truth data for their brightfield counterparts. In machine learning, training data such as object boundaries often need to be annotated by humans, making this a highly time-consuming step in the analysis. The software’s ability to make these annotations automatically through self-learning microscopy makes it possible to generate large amounts of training data, leading to more reliable results. It also allows the neural network to adapt to variations and distortions during training, resulting in a trained model that is robust to these issues.
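
A rough sketch of how such training pairs could be assembled (the scanR implementation details are not given here): the H2B-GFP channel is turned into a nucleus mask, and every brightfield z-slice of the same position is paired with that mask, so that focused and defocused appearances are both represented:

    import numpy as np
    from skimage import filters, measure

    def build_training_pairs(brightfield_stack: np.ndarray, h2b_gfp: np.ndarray):
        """Pair each brightfield z-slice with a mask derived from the GFP channel.

        Illustrative sketch: 'brightfield_stack' has shape (3, H, W) for the three
        focus positions, 'h2b_gfp' has shape (H, W). The GFP-derived mask serves as
        ground truth for all three slices.
        """
        mask = measure.label(h2b_gfp > filters.threshold_otsu(h2b_gfp))
        return [(bf_slice, mask) for bf_slice in brightfield_stack]

    # Pairs collected from the five training wells are pooled and used to train the
    # brightfield-to-nucleus segmentation network, with no manual annotation.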

After training, we applied the model to the remaining brightfield images. The software analyzed these and provided probability images as output, which indicate the likelihood of each pixel being part of a nucleus. From these images we generated cell counts for each well and circularity vs. area scatter plots. We also visualized 100 randomly selected cells for visual validation of the results.
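
The downstream measurements on the probability images are standard image analysis; a minimal sketch follows, in which the 0.5 threshold and the circularity definition 4πA/P² are assumptions made for illustration:

    import numpy as np
    from skimage import measure

    def analyze_probability_image(prob: np.ndarray, threshold: float = 0.5):
        """Count nuclei and measure area and circularity from a probability image.

        'prob' holds per-pixel probabilities of belonging to a nucleus; the 0.5
        threshold and the circularity formula 4*pi*A/P^2 are assumptions.
        """
        labels = measure.label(prob > threshold)
        props = measure.regionprops(labels)
        areas = np.array([p.area for p in props])
        perimeters = np.array([p.perimeter for p in props])
        circularity = 4 * np.pi * areas / np.maximum(perimeters, 1.0) ** 2
        return len(props), areas, circularity

    # The returned counts, areas and circularities feed directly into the per-well
    # cell counts and the circularity vs. area scatter plots described below.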

Results and discussion

After training, the HCS software detected 1.13 million nuclei in the brightfield images compared to 1.10 million in the fluorescence channel. Figure 4 shows an example of a probability image generated by the HCS software, based on a brightfield image. It shows almost perfect segmentation of nuclei, which makes it highly suitable for automated cell counting and other analyses such as measuring area or shape.



Figure 4.
 (a): Probability image of AI prediction of nuclei positions from the brightfield image. (b): Overlay of the probability on the original brightfield image.

In order to investigate the difference between the fluorescence and brightfield datasets, we examined brightfield, fluorescence and probability images of 100 randomly selected nuclei (Figure 5). They show perfect correspondence between the sets, meaning that in this sample no nuclei were missed or counted twice by the software.



Figure 5.
 Fluorescence (a), brightfield (b) and probability (c) images of 100 randomly selected nuclei.

Next, we plotted all nuclei on a circularity vs. area plot (Figure 6). Here, we discovered a potential cause for the difference in overall nuclei count between the two methods. The number of unusually large objects (>800 pixels) detected was higher in the fluorescence images: the plots revealed 22,000 (2%) unusually large objects in the fluorescence data compared to 7,000 (0.6%) in the AI-based brightfield data. This could indicate that the neural network correctly identified clusters of two or more cells in close proximity as separate objects, whereas in the GFP image these were identified as one large object.



Figure 6.
 Scatter plot showing circularity vs. area distribution of the 1.10 million nuclei detected in the GFP channel (top) and the 1.13 million nuclei detected in the brightfield channel by AI (bottom). The yellow rectangles indicate unusually large objects.
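
The large-object fractions quoted above follow directly from the per-object areas produced by either segmentation; a minimal sketch, assuming the >800-pixel cutoff used above:

    import numpy as np

    def large_object_fraction(areas: np.ndarray, cutoff: int = 800) -> float:
        """Fraction of objects larger than 'cutoff' pixels (cutoff as used above)."""
        return float(np.mean(np.asarray(areas) > cutoff))

    # Applied to the per-object areas from the GFP-based and the AI-based brightfield
    # segmentations, this yields the 2% and 0.6% large-object fractions quoted above.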

To confirm this hypothesis, we visualized 100 objects that were identified by fluorescence as unusually large; we also examined the corresponding brightfield and probability images (Figure 7). Many brightfield images of these objects did indeed show clusters rather than single cells. Most of these clusters were correctly resolved into separate nuclei in the probability image, demonstrating the neural network’s better separation of nuclei in close contact.



Figure 7.
 Fluorescence, brightfield and probability images of 100 objects identified as unusually large (>800 pixels).

Conclusion

The role of AI in life science is rapidly expanding and also holds great potential for microscopy. Applications such as high-content imaging generate large datasets, which are often impossible to analyze manually. AI can be used to ensure that large, high-quality imaging datasets get the thorough analysis they deserve.

Here, we demonstrated the use of the AI-based HCS software of the Olympus scanR. In the first study we used a neural network to identify cell nuclei in increasingly noisy fluorescence images as we brought down the light exposure to a fraction of the optimal exposure. The second study revolved around eliminating a fluorescent nuclear label and identifying nuclei from brightfield images alone. These techniques can provide important benefits to high-throughput live cell imaging, such as reducing phototoxicity, simplifying sample preparation and saving a fluorescence channel for a different marker.

Our results demonstrate how the neural networks used here are highly capable of carrying out the complex task of learning to identify and segment objects based on training data provided with minimal human input. As a result, researchers using this type of neural network can be confident that it can generate robust, comprehensive and quantifiable data from large image sets without spending excessive amounts of time on training.
