From 2e5d1de964b82efaa0acf4b3e2b9e101dfbe7be0 Mon Sep 17 00:00:00 2001
From: Can-Zhao
Date: Thu, 21 Nov 2024 19:48:59 +0000
Subject: [PATCH] readme

Signed-off-by: Can-Zhao
---
 generation/maisi/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/generation/maisi/README.md b/generation/maisi/README.md
index 367e31271..3093a667e 100644
--- a/generation/maisi/README.md
+++ b/generation/maisi/README.md
@@ -2,7 +2,7 @@
 This example demonstrates the applications of training and validating NVIDIA MAISI, a 3D Latent Diffusion Model (LDM) capable of generating large CT images accompanied by corresponding segmentation masks. It supports variable volume size and voxel spacing and allows for the precise control of organ/tumor size.
 
 ## MAISI Model Highlight
-- A Foundation Variational Auto-Encoder (VAE) model for latent feature compression that works for both CT and MRI with flexible volume size and voxel size
+- A Foundation Variational Auto-Encoder (VAE) model for latent feature compression that works for both CT and MRI with flexible volume size and voxel size. Tensor parallelism is included to reduce GPU memory usage.
 - A Foundation Diffusion model that can generate large CT volumes up to 512 × 512 × 768 size, with flexible volume size and voxel size
 - A ControlNet to generate image/mask pairs that can improve downstream tasks, with controllable organ/tumor size
 