
How to test the data #15

Open
Luchixiang opened this issue Feb 1, 2023 · 5 comments

Comments

@Luchixiang

Hi! Thank you for your awesome code. I have a question:
Suppose I have a set of volume data. How can I use the pretrained model to segment the ER and mitochondria? I'm a little confused about the evaluation instructions.

@neptunes5thmoon
Collaborator

Hi! Thanks for your interest and your question!
You can find model checkpoints on s3://janelia-cosem-networks. They are TensorFlow models (unfortunately an old version), and there are also some scripts. If you just want to predict individual blocks, you can look at the inference mode in the unet_template.py files.
Personally, I use this code for efficient inference on volumes: https://github.com/saalfeldlab/simpleference. You can see how I use it in the inference_config.py files on s3.
For ER and mito you'd probably want to start with setup03 if your data is close to 4 nm isotropic, or setup04 if it's close to 8 nm isotropic. Those are what we call the "many" networks in the manuscript.
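[Editor's sketch] The setup choice above can be written down as a tiny helper. The setup names come from the comment; the nearest-resolution rule and thresholds are assumptions for illustration, not project code:

```python
# Hedged sketch: pick the "many" network setup closest to the data's voxel size.
# setup03 targets ~4 nm isotropic data, setup04 targets ~8 nm (per the comment above).
def pick_setup(voxel_size_nm: float) -> str:
    return "setup03" if abs(voxel_size_nm - 4.0) <= abs(voxel_size_nm - 8.0) else "setup04"
```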

When I talk about evaluation in the instructions I don't mean inference/prediction, but the metrics we computed for comparing to manual annotations on small blocks.

Hope this gets you started!

@Luchixiang
Author

Luchixiang commented Feb 3, 2023

Hi! Thank you for your answer. I followed your instructions and used https://github.com/saalfeldlab/simpleference to test on part of the mouse liver dataset on OpenOrganelle, but got unsatisfactory results.

First, I converted the image stacks to n5 using z5py, then ran the command line in simpleference/example to do the inference (modifying the path and shape).
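[Editor's sketch] The conversion step can look roughly like this with z5py. The dataset key and chunk size here are assumptions for illustration; match them to the raw-data path your inference config expects:

```python
import numpy as np

try:
    import z5py  # the library used in the comment above; may not be installed
    HAVE_Z5PY = True
except ImportError:
    HAVE_Z5PY = False

def stack_to_n5(volume, out_path, key="volumes/raw", chunks=(64, 64, 64)):
    """Write a numpy volume into an n5 container.

    A ".n5" extension on out_path makes z5py pick the n5 format on disk.
    """
    if not HAVE_Z5PY:
        raise RuntimeError("z5py is not installed")
    f = z5py.File(out_path)  # extension ".n5" selects the n5 backend
    ds = f.create_dataset(
        key, shape=volume.shape, chunks=chunks,
        dtype=str(volume.dtype), compression="gzip",
    )
    ds[:] = volume
    return ds.shape
```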

What's more, during testing I got an error from line 66 of simpleference/gunpowder/tensorflowbackend (`assert output.ndim == 4`). My output's ndim is 3 (the shape is 68 x 68 x 68, the same as the output shape). I simply applied expand_dims on the first dimension to get past it. (I don't know whether this will affect the final results, or why the problem occurs.)
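[Editor's sketch] The workaround described above, in minimal numpy form. The backend asserts a 4-d output (channel axis first); the zero array below is a stand-in for the network output:

```python
import numpy as np

output = np.zeros((68, 68, 68), dtype="float32")  # stand-in for the 3-d network output
if output.ndim == 3:
    output = np.expand_dims(output, axis=0)  # add a singleton channel axis
assert output.ndim == 4 and output.shape == (1, 68, 68, 68)
```

Note that a single-channel output is itself suspicious here: as the next comment points out, the model should emit one channel per predicted class, so a 3-d block may indicate the wrong graph or output tensor is being used.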

[attached image: seg_test]

@neptunes5thmoon
Collaborator

Unfortunately I get an error when trying to open your image, but what you might be seeing is that the mouse liver dataset is a hard lift for the pretrained models, because it's not well represented by the training data that was used. For the segmentations that are up on OpenOrganelle we started collecting more training data.

Regarding the channel issue: I realize now that I gave you the link to the main repo, but I use my own fork, https://github.com/neptunes5thmoon/simpleference, where I made some edits to use zarr's n5 backend instead of z5py and to be more flexible with output channels (the model should output one channel for each of the 14 classes it predicts). It's not well documented though, sorry about that! Maybe try working with this script from the repo: https://github.com/saalfeldlab/CNNectome/blob/master/CNNectome/inference/unet_inference.py

@neptunes5thmoon
Collaborator

If you want to work with PyTorch instead, check out this repo to transfer the TensorFlow weights to PyTorch: https://github.com/pattonw/cnnectome.conversion

@Luchixiang
Author

Thank you! I'll follow the instructions and try again.
