Merge packing folders #75

Open · wants to merge 7 commits into main
@@ -1,12 +1,13 @@
# Lumbar rootlets

## Getting started

> [!IMPORTANT]
> This README provides instructions on how to use the model for **_lumbar_** rootlets.
> Please note that this model is still under development and is not yet available in the Spinal Cord Toolbox (SCT).

> [!NOTE]
-> If you would like to use the model for _**dorsal**_ cervical rootlets only, use SCT v6.2 or higher (please refer to
-> this [README](../README.md)).
+> For the stable model for dorsal cervical rootlets only, use SCT v6.2 or higher (please refer to this [README](../README.md)).

### Dependencies

@@ -41,7 +42,7 @@ conda activate venv_nnunet
3. Install the required packages with the following command:
```
cd model-spinal-rootlets
-pip install -r packaging_lumbar_rootlets/requirements.txt
+pip install -r packaging/requirements.txt
```
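Once the install completes, a quick sanity check that PyTorch is importable can save a failed inference run later; this is a minimal sketch, assuming the requirements above pull in PyTorch (as nnU-Net-based models typically do):

```bash
# optional check -- assumes PyTorch was installed via the requirements above
python -c "import torch; print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())"
```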

### Step 3: Getting the Predictions
@@ -60,19 +61,19 @@ To segment a single image using the trained model, run the following command from
This assumes that the lumbar model has been downloaded and unzipped (`unzip Dataset202_LumbarRootlets_r20240527.zip` or `unzip Dataset302_LumbarRootlets_r20240723.zip`).

```bash
-python packaging_lumbar_rootlets/run_inference_single_subject.py -i <INPUT> -o <OUTPUT> -path-model <PATH_TO_MODEL_FOLDER> -fold <FOLD>
+python packaging/run_inference_single_subject.py -i <INPUT> -o <OUTPUT> -path-model <PATH_TO_MODEL_FOLDER> -fold <FOLD>
```

For example:

```bash
-python packaging_lumbar_rootlets/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset202_LumbarRootlets_r20240527 -fold 0
+python packaging/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset202_LumbarRootlets_r20240527 -fold 0
```
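After inference, a quick way to confirm that the output is a non-empty label map is to print its unique values. This sketch assumes `nibabel` is available in the environment (it is not necessarily installed by the requirements above):

```bash
# illustrative check: list the integer labels present in the predicted rootlet segmentation
python -c "import nibabel as nib, numpy as np; print(np.unique(np.asanyarray(nib.load('sub-001_T2w_label-rootlets_dseg.nii.gz').dataobj)))"
```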

If the model folder also contains trainer subfolders (e.g., `nnUNetTrainer__nnUNetPlans__3d_fullres`, `nnUNetTrainerDA5__nnUNetPlans__3d_fullres`, ...), specify the trainer folder as well:

```bash
-python packaging_lumbar_rootlets/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset322_LumbarRootlets/nnUNetTrainerDA5__nnUNetPlans__3d_fullres -fold 0
+python packaging/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/Dataset322_LumbarRootlets/nnUNetTrainerDA5__nnUNetPlans__3d_fullres -fold 0
```
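If you are unsure whether your download has this extra level of nesting, listing the unzipped model folder shows it; the path below is just the example used above:

```bash
# trainer subfolders, if present, sit directly inside the unzipped model folder
ls ~/Downloads/Dataset322_LumbarRootlets
# e.g. nnUNetTrainer__nnUNetPlans__3d_fullres  nnUNetTrainerDA5__nnUNetPlans__3d_fullres
```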

> [!TIP]
@@ -1,11 +1,13 @@
# Ventral and dorsal rootlets

## Getting started

> [!IMPORTANT]
-> This README provides instructions on how to use the model for **_ventral_** and dorsal rootlets.
+> This README provides instructions on how to use the model for segmentation of **_ventral_** and dorsal rootlets from T2w images.
> Please note that this model is still under development and is not yet available in the Spinal Cord Toolbox (SCT).

> [!NOTE]
-> For the stable model for dorsal rootlets only, use SCT v6.2 or higher (please refer to this [README](../README.md)).
+> For the stable model for dorsal cervical rootlets only, use SCT v6.2 or higher (please refer to this [README](../README.md)).

### Dependencies

@@ -40,7 +42,7 @@ conda activate venv_nnunet
3. Install the required packages with the following command:
```
cd model-spinal-rootlets
-pip install -r packaging_ventral_rootlets/requirements.txt
+pip install -r packaging/requirements.txt
```

### Step 3: Getting the Predictions
@@ -60,19 +62,19 @@ This assumes that the model has been downloaded (https://github.com/ivadomed/mod
and unzipped (`unzip model-spinal-rootlets_ventral_D106_r20240523.zip`).
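A minimal sketch of that download-and-unzip step is below; the URL is a placeholder for the release asset linked above, not an actual address:

```bash
# replace <RELEASE_ASSET_URL> with the zip link from the repository's releases page
wget <RELEASE_ASSET_URL>/model-spinal-rootlets_ventral_D106_r20240523.zip
unzip model-spinal-rootlets_ventral_D106_r20240523.zip
```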

```bash
-python packaging_ventral_rootlets/run_inference_single_subject.py -i <INPUT> -o <OUTPUT> -path-model <PATH_TO_MODEL_FOLDER> -fold <FOLD>
+python packaging/run_inference_single_subject.py -i <INPUT> -o <OUTPUT> -path-model <PATH_TO_MODEL_FOLDER> -fold <FOLD>
```

For example:

```bash
-python packaging_ventral_rootlets/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/model-spinal-rootlets_ventral_D106_r20240523 -fold all
+python packaging/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/model-spinal-rootlets_ventral_D106_r20240523 -fold all
```

If the model folder also contains trainer subfolders (e.g., `nnUNetTrainer__nnUNetPlans__3d_fullres`, `nnUNetTrainerDA5__nnUNetPlans__3d_fullres`, ...), specify the trainer folder as well:

```bash
-python packaging_lumbar_rootlets/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/model-spinal-rootlets_ventral_D106/nnUNetTrainerDA5__nnUNetPlans__3d_fullres -fold 0
+python packaging/run_inference_single_subject.py -i sub-001_T2w.nii.gz -o sub-001_T2w_label-rootlets_dseg.nii.gz -path-model ~/Downloads/model-spinal-rootlets_ventral_D106/nnUNetTrainerDA5__nnUNetPlans__3d_fullres -fold 0
```

> [!NOTE]
File renamed without changes.
251 changes: 0 additions & 251 deletions packaging_lumbar_rootlets/run_inference_single_subject.py

This file was deleted.

4 changes: 0 additions & 4 deletions packaging_ventral_rootlets/requirements.txt

This file was deleted.