
format enhance
Signed-off-by: YunLiu <[email protected]>
KumoLiu committed Sep 26, 2024
1 parent c29337a commit 9d243f6
Showing 7 changed files with 96 additions and 72 deletions.
2 changes: 1 addition & 1 deletion active_learning/liver_tumor_al/active_learning.py
@@ -54,7 +54,7 @@
parser = argparse.ArgumentParser(description="Active Learning Setting")

# Directory & Json & Seed
parser.add_argument("--base_dir", default="/home/vishwesh/experiments/al_sanity_test_apr27_2023", type=str)
parser.add_argument("--base_dir", default="./experiments/al_sanity_test_apr27_2023", type=str)
parser.add_argument("--data_root", default="/scratch_2/data_2021/68111", type=str)
parser.add_argument("--json_path", default="/scratch_2/data_2021/68111/dataset_val_test_0_debug.json", type=str)
parser.add_argument("--seed", default=102, type=int)
2 changes: 1 addition & 1 deletion active_learning/tool_tracking_al/active_learning.py
@@ -47,7 +47,7 @@
parser = argparse.ArgumentParser(description="Active Learning Settings")

# Directory & Json & Seed
parser.add_argument("--base_dir", default="/home/vishwesh/experiments/robo_tool_experiments/variance_sanity", type=str)
parser.add_argument("--base_dir", default="./experiments/robo_tool_experiments/variance_sanity", type=str)
parser.add_argument("--data_root", default="/scratch_2/robo_tool_dataset_2023", type=str)
parser.add_argument("--json_path", default="/scratch_2/robo_tool_dataset_2023/data_list.json", type=str)
parser.add_argument("--seed", default=120, type=int)
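Both active-learning scripts get the same fix: a developer-specific absolute path becomes a relative default that users can override on the command line. A minimal sketch of the resulting pattern, assuming an illustrative directory-creation step that is not part of the commit:

```python
import argparse
import os

parser = argparse.ArgumentParser(description="Active Learning Settings")
# A relative default keeps the script portable across machines.
parser.add_argument("--base_dir", default="./experiments/variance_sanity", type=str)
args = parser.parse_args()

# Illustrative only: ensure the experiment directory exists before writing to it.
os.makedirs(args.base_dir, exist_ok=True)
print(f"writing experiment outputs under {args.base_dir}")
```

Users who prefer a shared location can still override the default, e.g. `python active_learning.py --base_dir /data/my_runs`.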
77 changes: 42 additions & 35 deletions generation/2d_vqvae/2d_vqvae_tutorial.ipynb
@@ -25,10 +25,15 @@
"\n",
"The VQVAE can also be used as a generative model if an autoregressor model (e.g., PixelCNN, Decoder Transformer) is trained on the discrete latent representations of the VQVAE bottleneck. This falls outside of the scope of this tutorial.\n",
"\n",
"[1] - Oord et al. \"Neural Discrete Representation Learning\" https://arxiv.org/abs/1711.00937\n",
"\n",
"\n",
"### Setup environment"
"[1] - Oord et al. \"Neural Discrete Representation Learning\" https://arxiv.org/abs/1711.00937"
]
},
{
"cell_type": "markdown",
"id": "d167a850",
"metadata": {},
"source": [
"## Setup environment"
]
},
{
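The quantisation step this tutorial relies on is compact enough to sketch. Below is a minimal PyTorch version of the vector-quantisation bottleneck from [1] with a straight-through gradient; the codebook size and dimension are illustrative, and this is not the tutorial's exact MONAI implementation:

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient,
    following Oord et al. [1]. Sizes here are illustrative only."""

    def __init__(self, num_embeddings: int = 256, embedding_dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_embeddings, embedding_dim)

    def forward(self, z: torch.Tensor):
        # z: (B, C, H, W) continuous encoder output, with C == embedding_dim.
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)          # (B*H*W, C)
        distances = torch.cdist(flat, self.codebook.weight)  # (B*H*W, K)
        indices = distances.argmin(dim=1)                    # discrete latent codes
        quantized = self.codebook(indices).view(b, h, w, c).permute(0, 3, 1, 2)
        # Straight-through estimator: gradients bypass the non-differentiable argmin.
        quantized = z + (quantized - z).detach()
        return quantized, indices.view(b, h, w)
```

The returned `indices` map is the discrete latent representation that an autoregressive prior (a PixelCNN or decoder-only transformer, as in the companion 2d_vqvae_transformer tutorial) would be trained on.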
@@ -50,7 +55,7 @@
"id": "6b8ae5e8",
"metadata": {},
"source": [
"### Setup imports"
"## Setup imports"
]
},
{
@@ -118,32 +123,16 @@
"print_config()"
]
},
-{
-"cell_type": "code",
-"execution_count": 2,
-"id": "f7f7056e",
-"metadata": {},
-"outputs": [],
-"source": [
-"# for reproducibility purposes set a seed\n",
-"set_determinism(42)"
-]
-},
-{
-"cell_type": "markdown",
-"id": "51a9a628",
-"metadata": {},
-"source": [
-"### Setup a data directory and download dataset"
-]
-},
{
"cell_type": "markdown",
"id": "9b9b6e14",
"metadata": {},
"source": [
"Specify a `MONAI_DATA_DIRECTORY` variable, where the data will be downloaded. If not\n",
"specified a temporary directory will be used."
"## Setup data directory\n",
"\n",
"You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. \n",
"This allows you to save results and reuse downloads. \n",
"If not specified a temporary directory will be used."
]
},
{
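The data-directory cell whose body is collapsed in this view typically follows MONAI's usual pattern; a minimal sketch, assuming the standard tutorial cell (exact contents are hidden in this diff):

```python
import os
import tempfile

from monai.utils import set_determinism

# Reuse a persistent directory when MONAI_DATA_DIRECTORY is set, so downloads
# and results survive between runs; otherwise fall back to a throwaway tempdir.
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)

# Seed everything for reproducible results (the relocated cell below does this).
set_determinism(42)
```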
@@ -166,12 +155,30 @@
"print(root_dir)"
]
},
+{
+"cell_type": "markdown",
+"id": "d49ee071",
+"metadata": {},
+"source": [
+"## Set deterministic"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "3b010865",
+"metadata": {},
+"outputs": [],
+"source": [
+"set_determinism(42)"
+]
+},
{
"cell_type": "markdown",
"id": "049661aa",
"metadata": {},
"source": [
"### Download the training set"
"## Download the training set"
]
},
{
@@ -248,7 +255,7 @@
"id": "d437adbd",
"metadata": {},
"source": [
"### Visualise examples from the training set"
"## Visualise examples from the training set"
]
},
{
@@ -282,7 +289,7 @@
"id": "8c6ca19a",
"metadata": {},
"source": [
"### Download the validation set"
"## Download the validation set"
]
},
{
@@ -327,7 +334,7 @@
"id": "1cfa9906",
"metadata": {},
"source": [
"### Define network, optimizer and losses"
"## Define network, optimizer and losses"
]
},
{
@@ -377,7 +384,7 @@
"id": "331aa4fc",
"metadata": {},
"source": [
"### Model training\n",
"## Model training\n",
"Here, we are training our model for 100 epochs (training time: ~60 minutes)."
]
},
@@ -474,7 +481,7 @@
"id": "ab3f5e08",
"metadata": {},
"source": [
"### Learning curves"
"## Learning curves"
]
},
{
@@ -518,7 +525,7 @@
"id": "e7c7b3b4",
"metadata": {},
"source": [
"### Plotting evolution of reconstructed images"
"## Plotting evolution of reconstructed images"
]
},
{
@@ -559,7 +566,7 @@
"id": "517f51ea",
"metadata": {},
"source": [
"### Plotting the reconstructions from final trained model"
"## Plotting the reconstructions from final trained model"
]
},
{
@@ -595,7 +602,7 @@
"id": "222c56d3",
"metadata": {},
"source": [
"### Cleanup data directory\n",
"## Cleanup data directory\n",
"\n",
"Remove directory if a temporary was used."
]
57 changes: 35 additions & 22 deletions generation/2d_vqvae_transformer/2d_vqvae_transformer_tutorial.ipynb
@@ -28,10 +28,15 @@
"\n",
"[1] - Oord et al. \"Neural Discrete Representation Learning\" https://arxiv.org/abs/1711.00937\n",
"\n",
"[2] - Tudosiu et al. \"Morphology-Preserving Autoregressive 3D Generative Modelling of the Brain\" https://arxiv.org/abs/2209.03177\n",
"\n",
"\n",
"### Setup environment"
"[2] - Tudosiu et al. \"Morphology-Preserving Autoregressive 3D Generative Modelling of the Brain\" https://arxiv.org/abs/2209.03177"
]
},
{
"cell_type": "markdown",
"id": "3a0642b8",
"metadata": {},
"source": [
"## Setup environment"
]
},
{
@@ -51,7 +56,7 @@
"id": "e3440cd3",
"metadata": {},
"source": [
"### Setup imports"
"## Setup imports"
]
},
{
@@ -129,26 +134,16 @@
"print_config()"
]
},
-{
-"cell_type": "code",
-"execution_count": 2,
-"id": "e11e1e9c",
-"metadata": {},
-"outputs": [],
-"source": [
-"# for reproducibility purposes set a seed\n",
-"set_determinism(42)"
-]
-},
{
"cell_type": "markdown",
"id": "4f71d660",
"metadata": {},
"source": [
"### Setup a data directory and download dataset\n",
"## Setup data directory\n",
"\n",
"Specify a `MONAI_DATA_DIRECTORY` variable, where the data will be downloaded. If not\n",
"specified a temporary directory will be used."
"You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. \n",
"This allows you to save results and reuse downloads. \n",
"If not specified a temporary directory will be used."
]
},
{
@@ -171,12 +166,30 @@
"print(root_dir)"
]
},
+{
+"cell_type": "markdown",
+"id": "0bdd379a",
+"metadata": {},
+"source": [
+"## Set deterministic training for reproducibility"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "8a5c290d",
+"metadata": {},
+"outputs": [],
+"source": [
+"set_determinism(42)"
+]
+},
{
"cell_type": "markdown",
"id": "c6975501",
"metadata": {},
"source": [
"### Download training data"
"## Download training data"
]
},
{
@@ -252,7 +265,7 @@
"id": "9eb87583",
"metadata": {},
"source": [
"### Visualse some examples from the dataset"
"## Visualse some examples from the dataset"
]
},
{
@@ -286,7 +299,7 @@
"id": "a9f6b281",
"metadata": {},
"source": [
"### Download Validation Data"
"## Download Validation Data"
]
},
{
10 changes: 8 additions & 2 deletions generation/maisi/maisi_inference_tutorial.ipynb
@@ -18,8 +18,14 @@
"\n",
"# MAISI Inference Tutorial\n",
"\n",
"This tutorial illustrates how to use trained MAISI model and codebase to generate synthetic 3D images and paired masks.\n",
"\n",
"This tutorial illustrates how to use trained MAISI model and codebase to generate synthetic 3D images and paired masks."
]
},
{
"cell_type": "markdown",
"id": "301dab0b",
"metadata": {},
"source": [
"## Setup environment"
]
},
10 changes: 8 additions & 2 deletions generation/maisi/maisi_train_vae_tutorial.ipynb
@@ -18,8 +18,14 @@
"\n",
"# MAISI VAE Training Tutorial\n",
"\n",
"This tutorial illustrates how to train the VAE model in MAISI on CT and MRI datasets. The VAE model is used for latent feature compression, which significantly reduce the memory usage of the diffusion model. The released VAE model weights can work on both CT and MRI images.\n",
"\n",
"This tutorial illustrates how to train the VAE model in MAISI on CT and MRI datasets. The VAE model is used for latent feature compression, which significantly reduce the memory usage of the diffusion model. The released VAE model weights can work on both CT and MRI images."
]
},
{
"cell_type": "markdown",
"id": "12ff48d3",
"metadata": {},
"source": [
"## Setup environment"
]
},
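The memory saving from latent compression is easy to quantify. A back-of-the-envelope sketch, using assumed figures (volume size, downsampling factor, and latent channel count are illustrative, not MAISI's actual configuration):

```python
# Back-of-the-envelope: why diffusing in latent space saves memory in 3D.
# All figures below are assumptions for illustration, not MAISI's real config.
h, w, d = 512, 512, 128  # CT volume shape (assumed)
downsample = 4           # per-axis VAE spatial compression (assumed)
latent_channels = 4      # latent channels (assumed)

image_values = 1 * h * w * d
latent_values = latent_channels * (h // downsample) * (w // downsample) * (d // downsample)

print(image_values // latent_values)  # -> 16: the diffusion model sees 16x fewer values
```

With these assumed numbers, a 64x spatial reduction outweighs the 4x channel increase, so every activation and attention map in the diffusion model shrinks accordingly.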
10 changes: 1 addition & 9 deletions vista_3d/README.md
@@ -4,15 +4,7 @@
The codebase is under Apache 2.0 Licence. The model weight is released under [NVIDIA OneWay Noncommercial License](./NVIDIA%20OneWay%20Noncommercial%20License.txt).

## Reference

-```
-@article{he2024vista3d,
-  title={VISTA3D: Versatile Imaging SegmenTation and Annotation model for 3D Computed Tomography},
-  author={He, Yufan and Guo, Pengfei and Tang, Yucheng and Myronenko, Andriy and Nath, Vishwesh and Xu, Ziyue and Yang, Dong and Zhao, Can and Simon, Benjamin and Belue, Mason and others},
-  journal={arXiv preprint arXiv:2406.05285},
-  year={2024}
-}
-```
+[1] Yufan He, Pengfei Guo, Yucheng Tang, Andriy Myronenko, Vishwesh Nath, Ziyue Xu, Dong Yang, Can Zhao, Benjamin Simon, Mason Belue, Stephanie Harmon, Baris Turkbey, Daguang Xu and Wenqi Li: "VISTA3D: Versatile Imaging SegmenTation and Annotation model for 3D Computed Tomography". (2024), [arXiv](https://arxiv.org/abs/2406.05285)

## Acknowledgement
- [segment-anything](https://github.com/facebookresearch/segment-anything)
