
P–166 Machine learning for automated cell segmentation in embryos



Is it possible to automate the process of detecting individual blastomeres within a 4-cell embryo? Deep learning models are capable of identifying individual cells in single focal plane images of 4-cell embryos.

As individual blastomeres within a 4-cell embryo maintain totipotency, their intercellular junctions are critical in maintaining and directing communication. These junctions are determined by the zygote’s cleavage patterns and can affect the overall embryo ‘shape’, which can be described as either ‘tetrahedral’ or ‘planar’. Planar embryos carry significantly worse outcomes in both the short and long term, such as lower blastulation, clinical pregnancy and live birth rates. Therefore, more accurate identification of cell borders at the 4-cell stage may contribute to improved classification of cell shape and embryo visualisation.

This was a retrospective cohort analysis of 222 single focal plane images from three clinics. Each image captured an embryo at the 4-cell stage and was taken using the EmbryoScope™ time-lapse incubator at the central focal plane. Images from two of the clinics were split into training (n = 161) and validation (n = 17) sets; images from the third clinic formed a blind testing set (n = 44). Ground truth masks were manually created by two human operators using the VGG Image Annotator software. A Mask R-CNN neural network with a pre-trained ResNet-50 backbone was trained to segment individual blastomeres from the training images. Data augmentation (flips, rotations, Gaussian noise, cropping, brightness changes and optical distortion) was applied during training. The model’s performance was evaluated using the Intersection over Union (IoU) metric, a measure of overlap between model-predicted and human-annotated masks.

The model was evaluated on the blind test set of 44 images. It achieved a mean IoU of 0.92 for individual cells (SD = 0.05), with a precision of 0.95 and a sensitivity of 0.97. The mean IoU for the entire embryo (all four blastomeres combined) was 0.92 (SD = 0.02). Furthermore, the model counted the number of cells in the images with 70% accuracy, deviating by no more than one cell in each erroneous case. These errors break down into detection of fragmentation as a cell (2 cases), detection of two cells as one (1 case), a cell lying directly under another cell (4 cases), and duplicate detection of the same cell (6 cases). This last issue could be resolved by rejecting detections with significant overlap.

Our results demonstrate that the model can be used across different clinics. Inaccuracies in segmentation and cell counting sometimes occurred when a cell’s borders were unclear or obscured (e.g. lying in a different focal plane). The inclusion of multiple focal planes will be key to improving performance. Moreover, as only one focal plane was used, ambiguous cases were annotated with a ‘best guess’.

Wider implications of the findings: A model capable of detecting individual cells would be highly beneficial to the IVF industry. Aside from automating laborious processes for embryologists, it may also prove a useful tool for future research, such as identifying intercellular contact points or rendering three-dimensional embryo visualisations. Trial registration number: N/A
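The abstract does not include code or name a framework. A minimal sketch of the described setup, assuming PyTorch/torchvision, is shown below; the two-class configuration (background and blastomere) and all names are illustrative, not taken from the study:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Mask R-CNN with a pre-trained ResNet-50 FPN backbone, adapted to
# two classes: background and blastomere (assumed class layout).
num_classes = 2
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head for the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head accordingly.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
```

The evaluation metric (mask-level IoU) and the suggested fix for duplicate detections (rejecting detections with significant overlap) could look roughly like the sketch below; the 0.8 overlap threshold is an assumed value for illustration only:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union between two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def reject_duplicates(masks, scores, iou_thresh=0.8):
    """Keep only the highest-scoring detection among masks that overlap
    each other above iou_thresh (non-maximum suppression on masks)."""
    order = np.argsort(scores)[::-1]   # highest confidence first
    keep = []
    for i in order:
        if all(mask_iou(masks[i], masks[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep                        # indices of retained detections
```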

Keywords: learning; model; embryo; focal plane; cell

Journal Title: Human Reproduction
Year Published: 2021
