zea.models.echonet

EchoNet-Dynamic segmentation model for cardiac ultrasound segmentation.

To try this model, simply load one of the available presets:

>>> from zea.models.echonet import EchoNetDynamic

>>> model = EchoNetDynamic.from_preset("echonet-dynamic")

Important

This is a zea implementation of the model. For the original paper and code, see the reference below.

Ouyang, David, et al. "Video-based AI for beat-to-beat assessment of cardiac function." Nature 580.7802 (2020): 252-256.

See also

A tutorial notebook where this model is used: Left ventricle segmentation.

Note

This model is currently only supported with the TensorFlow or JAX backend. With TensorFlow as backend, the model works out of the box. With JAX as backend, the model is first built using TensorFlow and then converted to JAX, so both TensorFlow and JAX must be installed; finding compatible CUDA versions for both can be tricky. One option is to run in our Docker container, which has been tested with both backends.

Classes

EchoNetDynamic(*args, **kwargs)

EchoNet-Dynamic segmentation model for cardiac ultrasound segmentation.

class zea.models.echonet.EchoNetDynamic(*args, **kwargs)[source]

Bases: BaseModel

EchoNet-Dynamic segmentation model for cardiac ultrasound segmentation.

Preprocessing should normalize the input images by subtracting the mean and dividing by the standard deviation.
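As a minimal sketch, such a normalization step could look like the following. The helper name and the illustrative mean/std values are assumptions for this example; the original EchoNet-Dynamic code computes per-channel statistics over its training set.

```python
import numpy as np

def normalize(images, mean, std):
    """Channel-wise (x - mean) / std normalization.

    `mean` and `std` are assumed to be per-channel statistics,
    broadcast over a (batch, height, width, channels) array.
    """
    mean = np.asarray(mean, dtype=np.float32)
    std = np.asarray(std, dtype=np.float32)
    return (images.astype(np.float32) - mean) / std

# Example: a batch of 2 RGB frames with illustrative statistics.
batch = np.random.rand(2, 112, 112, 3).astype(np.float32)
normalized = normalize(batch, mean=[0.5, 0.5, 0.5], std=[0.25, 0.25, 0.25])
```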

build(input_shape)[source]

Builds the network.

call(inputs)[source]

Segment the input image.

custom_load_weights(preset, **kwargs)[source]

Load the weights for the segmentation model.

maybe_convert_to_jax()[source]

Converts the network to JAX if backend is JAX.

JAX conversion traces the SavedModel using an example input of shape (1, INFERENCE_SIZE, INFERENCE_SIZE, 3). At runtime, call() may pass (B, INFERENCE_SIZE, INFERENCE_SIZE, 3) after resize/tile preprocessing.
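The resize/tile preprocessing mentioned above could be sketched as follows. The `INFERENCE_SIZE` value and the grayscale-to-RGB tiling are assumptions based on the shapes quoted above, not the model's actual implementation.

```python
import numpy as np

INFERENCE_SIZE = 112  # assumption; the model defines its own constant

def to_model_input(frame):
    """Turn one grayscale frame into a (1, SIZE, SIZE, 3) batch.

    Nearest-neighbour resize via index sampling keeps this sketch
    dependency-free; the real pipeline may use a proper resize op.
    """
    h, w = frame.shape
    rows = np.arange(INFERENCE_SIZE) * h // INFERENCE_SIZE
    cols = np.arange(INFERENCE_SIZE) * w // INFERENCE_SIZE
    resized = frame[rows][:, cols]
    # Tile the single channel to 3 channels and add a batch axis.
    rgb = np.repeat(resized[..., None], 3, axis=-1)
    return rgb[None, ...].astype(np.float32)

x = to_model_input(np.zeros((200, 300), dtype=np.float32))
```

Batching several such frames along the first axis yields the (B, INFERENCE_SIZE, INFERENCE_SIZE, 3) shape that call() may pass at runtime.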