Standard Plane Extraction from 3D Ultrasound with a DQN Agent

Automatically extract clinically valuable cross-sectional images from a free-hand 3D US volume within a deep reinforcement learning framework.

Authors: Baichuan Jiang, Keshuai Xu, Dr. Ernest Graham, Dr. Russell H. Taylor, Dr. Mathias Unberath, Dr. Emad M. Boctor

Abstract:

Fetal biometry and anatomical survey are usually performed by sonographers manually searching for a set of standard plane (SP) images with a 2D ultrasound probe. This task is nontrivial because it requires highly skilled hand-eye coordination to maneuver the 2D ultrasound probe, and it becomes even more challenging when the fetus is in an unfavorable pose or the mother has a high body mass index (BMI). Therefore, we envisioned a portable 3D ultrasound device that would help capture standard plane images from 3D volumes at home in challenging cases, avoiding repeated hospital visits. In this paper, we describe our first steps toward automatic extraction of SP images from volumetric data.

Key words: Deep reinforcement learning, 3D ultrasound, Standard plane extraction.

Fig. 1. Example result visualization. The retrieved slice shown has 4.13 mm and 2.78 deg pose error with respect to the target, which is close to the average error we obtained on the test data.


Methodology and results

We use a deep Q-network (DQN) agent to model the process of iteratively searching for the target 2D view within a 3D volume. Given a random initial pose of the slicing plane, the network produces a 6-degree-of-freedom (DOF) discrete action that moves the plane one step closer to the target.
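As an illustration, the sketch below shows one way such a discrete 6-DOF action set and plane-pose update could be implemented. The step sizes, action ordering, and function name are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Illustrative assumptions: 12 discrete actions (+/- one step along each of the
# 3 translation axes and around each of the 3 rotation axes); step sizes below
# are placeholders, not the values used in the paper.
TRANS_STEP_MM = 1.0
ROT_STEP_DEG = 1.0

def apply_action(center, rotation, action_id):
    """Apply one of 12 discrete 6-DOF actions to a plane pose.

    center:    (3,) plane center in volume coordinates (mm).
    rotation:  scipy Rotation describing the plane orientation.
    action_id: 0-5 translate +/- along x, y, z; 6-11 rotate +/- about x, y, z.
    """
    axis = (action_id % 6) // 2                 # 0: x, 1: y, 2: z
    sign = 1.0 if action_id % 2 == 0 else -1.0
    if action_id < 6:
        # Translation along one of the volume axes.
        step = np.zeros(3)
        step[axis] = sign * TRANS_STEP_MM
        return center + step, rotation
    # Rotation about one of the plane's own (body-frame) axes.
    delta = np.zeros(3)
    delta[axis] = sign * np.deg2rad(ROT_STEP_DEG)
    return center, rotation * R.from_rotvec(delta)
```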

Fig. 2. Reinforcement learning (DQN) framework overview: (a) DQN agent: an autonomous agent interacting with the environment by taking in observations (resampled slices) and making the action decision for the next step. (b) Slice generator: given a plane state θ_i, the resampler interpolates the volume environment to produce a new slice. (c) Trainer: the history of environment exploration is stored in a replay memory buffer as 4-tuples (state, action, reward, next state). The network is updated by sampling batches of these tuples and back-propagating the loss.
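The slice generator in Fig. 2(b) can be thought of as a trilinear resampler that maps a plane pose to a 2D image. Below is a minimal sketch under assumed conventions (isotropic voxels, illustrative slice size and spacing, hypothetical function name); the paper's actual resampler may differ.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_slice(volume, center, rotation, size=96, spacing=1.0):
    """Trilinearly interpolate a 2D slice out of a 3D volume.

    volume:   3D numpy array indexed as (z, y, x) in voxel units.
    center:   (3,) slice center in (x, y, z) voxel coordinates.
    rotation: scipy Rotation; the first two columns of its matrix give the
              in-plane axes, the third the plane normal.
    Returns a (size, size) image; size and spacing are illustrative defaults.
    """
    axes = rotation.as_matrix()               # columns: [u, v, normal]
    u, v = axes[:, 0], axes[:, 1]             # in-plane directions
    grid = (np.arange(size) - size / 2) * spacing
    gu, gv = np.meshgrid(grid, grid, indexing="xy")
    # (x, y, z) coordinate of every pixel on the slicing plane.
    pts = center + gu[..., None] * u + gv[..., None] * v   # (size, size, 3)
    # map_coordinates expects coordinates ordered like the array axes (z, y, x).
    coords = pts[..., ::-1].transpose(2, 0, 1)
    return map_coordinates(volume, coords, order=1, mode="nearest")
```

In this framing, the resampled slice would be normalized and passed to the Q-network as the agent's observation at each step.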

Our phantom experiment shows that the trained agent correctly extracts the target view image from a newly acquired large 3D ultrasound test volume (16 cm imaging depth) within 150 action steps, achieving an average error of 5.51 mm and 2.26 deg across different starting pose initializations.
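For context, a plane-to-plane pose error of the kind reported above (translation in mm, orientation in degrees) could be computed between an extracted plane and the ground-truth plane as sketched below; the exact error definition used in the paper may differ, so treat this as an assumed formulation.

```python
import numpy as np

def plane_pose_error(center_a, normal_a, center_b, normal_b):
    """Assumed metric: Euclidean distance between plane centers (mm)
    and the angle between the unit plane normals (deg)."""
    trans_err = float(np.linalg.norm(np.asarray(center_a, float) - np.asarray(center_b, float)))
    na = np.asarray(normal_a, float) / np.linalg.norm(normal_a)
    nb = np.asarray(normal_b, float) / np.linalg.norm(normal_b)
    cos_angle = np.clip(np.dot(na, nb), -1.0, 1.0)
    ang_err = float(np.degrees(np.arccos(cos_angle)))
    return trans_err, ang_err
```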

Fig. 3. Overall distribution of error before and after running the agent. The average error before running the algorithm is 23.6 mm and 33.57 deg, while after running the agent, the extracted slices have an average error of 5.51 mm and 2.26 deg with respect to the ground truth.

Publication:

[1] Standard Plane Extraction From 3D Ultrasound With 6-DOF Deep Reinforcement Learning Agent (IUS 2020) (DOI link to be updated)