Command terminated by signal 8 #345

Open
clemgoub opened this issue Oct 25, 2023 · 2 comments

@clemgoub

Hello pggb team!

I've been able to run pggb successfully on a couple of genomes (~150 Mbp each, eukaryotes), and then ramped the analysis up to 14 genomes. However, I encountered a new error (signal 8) right after the pipeline produced the GFA:

I am using pggb from the main git branch (commit b2843d0).
I built a Docker image from the Dockerfile, following the instructions for building it locally.
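
For reference, the local build was roughly along these lines (the clone path may differ on your machine; cgoubert/pggb is just the local image tag used in the run command below):

git clone https://github.com/pangenome/pggb.git
cd pggb
docker build -t cgoubert/pggb .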

Then:

docker run -it -v /home/cgoubert/projects/Fly/data/genomes/d_melanogaster/Rech2022/pggb:/data cgoubert/pggb /bin/bash -c "pggb -i /data/dm6.plus.Rech13_PanSN-spec.fa.gz -n 14 -p 90 -s 5000 -V dm6:200 -o /data/102423_pggb_dm6.Rech13 -t 32"
[...]
[seqwish::transclosure] 566.773 100.00% building node_iitree and path_iitree indexes
[seqwish::transclosure] 566.775 100.00% done
[seqwish::transclosure] 566.795 done with transitive closures
[seqwish::compact] 566.795 compacting nodes
[seqwish::compact] 568.448 done compacting
[seqwish::compact] 568.644 built node index
[seqwish::links] 568.644 finding graph links
[seqwish::links] 568.649 links derived
[seqwish::gfa] 568.649 writing graph
Command terminated by signal 8

The output folder shows:

-rw-r--r-- 1 cgoubert repeat 1.7K Oct 24 11:56 dm6.plus.Rech13_PanSN-spec.fa.gz.0ec1561.417fcdf.8abe853.smooth.10-24-2023_18:56:13.log
-rw-r--r-- 1 cgoubert repeat 1.5K Oct 24 11:56 dm6.plus.Rech13_PanSN-spec.fa.gz.0ec1561.417fcdf.8abe853.smooth.10-24-2023_18:56:13.params.yml
-rw-r--r-- 1 cgoubert repeat 5.2M Oct 24 20:15 dm6.plus.Rech13_PanSN-spec.fa.gz.0ec1561.417fcdf.8abe853.smooth.10-24-2023_18:59:58.log
-rw-r--r-- 1 cgoubert repeat 1.5K Oct 24 11:59 dm6.plus.Rech13_PanSN-spec.fa.gz.0ec1561.417fcdf.8abe853.smooth.10-24-2023_18:59:58.params.yml
-rw-r--r-- 1 cgoubert repeat 1.9G Oct 24 20:15 dm6.plus.Rech13_PanSN-spec.fa.gz.0ec1561.417fcdf.seqwish.gfa
-rw-r--r-- 1 cgoubert repeat 525M Oct 24 20:05 dm6.plus.Rech13_PanSN-spec.fa.gz.0ec1561.alignments.wfmash.paf
-rw-r--r-- 1 cgoubert repeat 2.5M Oct 24 12:03 dm6.plus.Rech13_PanSN-spec.fa.gz.0ec1561.mappings.wfmash.paf
drwx------ 2 cgoubert repeat  164 Oct 24 20:15 seqwish-3xM1GS

Thanks for your help!

Clément

@AndreaGuarracino (Member)

Can you try again, specifying a temporary folder where you are sure to have plenty of space, ideally on a fast SSD? On clusters, this is usually -D /scratch.
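
For example, reusing your command with a temporary directory on the mounted volume (the /data/tmp path here is only an illustration; point -D at whatever large, fast disk you have):

mkdir -p /home/cgoubert/projects/Fly/data/genomes/d_melanogaster/Rech2022/pggb/tmp
docker run -it -v /home/cgoubert/projects/Fly/data/genomes/d_melanogaster/Rech2022/pggb:/data cgoubert/pggb /bin/bash -c "pggb -i /data/dm6.plus.Rech13_PanSN-spec.fa.gz -n 14 -p 90 -s 5000 -V dm6:200 -o /data/102423_pggb_dm6.Rech13 -D /data/tmp -t 32"

While it runs, you can keep an eye on free space on the host with something like watch df -h.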

@clemgoub (Author) commented Nov 6, 2023

Dear @AndreaGuarracino, sorry I could only reply now. The server I'm currently using is a single node without a dedicated scratch disk. However, we have plenty of storage space on the disk I used for the outputs. Is there still some workaround or test I could try?

I am also curious why it returns signal 8 (a floating-point exception) if this is a disk issue. Could it be an indication that writing is taking too long?

Thanks for your help!

Clém
