Hi @dimitry12 If you can provide more details (e.g., what do the input and output look like, and what would the student and teacher models be?), I can help you implement it. You should not need to edit the executable script much; it is more or less the same as those for other tasks. The main differences are the validation/test-related implementations and the model's input and output, since all the other components are generalized and handled by the training box / distillation box, which is defined by a yaml config file.
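For reference, a torchdistill config typically declares the teacher/student models and the distillation loss terms declaratively, so the executable script stays generic. The sketch below is illustrative only; the key names follow the general shape of torchdistill's published example configs and may not match the exact schema of your torchdistill version.

```yaml
# Illustrative sketch only -- key names mirror the general shape of
# torchdistill's example configs and may differ from the exact schema.
models:
  teacher_model:
    name: 'resnet50'      # placeholder teacher
    params:
      num_classes: 100
  student_model:
    name: 'resnet18'      # placeholder student
    params:
      num_classes: 100
train:
  num_epochs: 20
  optimizer:
    type: 'SGD'
    params:
      lr: 0.1
      momentum: 0.9
  criterion:
    type: 'GeneralizedCustomLoss'
    org_term:
      criterion:
        type: 'CrossEntropyLoss'
        params: {}
      factor: 1.0
    sub_terms:
      kd:
        criterion:
          type: 'KDLoss'
          params:
            temperature: 4.0
            alpha: 0.5
        factor: 1.0
```

Swapping in a different task is then mostly a matter of changing the model entries, the loss terms, and the evaluation code, as described above.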
torchdistill is an impressive framework, and I am only starting to understand how to apply it to tasks outside of image classification. One such task is contrastive representation learning; CLIP is a good example of a model from this space. Is it correct that I would need to implement a custom "trainer" executable (similar to the examples for other tasks) when distilling a representation encoder?
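To make the question concrete, here is a minimal sketch of one objective that could be used when distilling a representation encoder: instead of matching raw embeddings (the teacher and student may have different embedding dimensions), match the pairwise cosine-similarity structure of a batch. All function names here are hypothetical, not torchdistill APIs; numpy stands in for the actual tensors.

```python
import numpy as np


def cosine_sim_matrix(z: np.ndarray) -> np.ndarray:
    """Row-normalize embeddings, then return pairwise cosine similarities."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z @ z.T


def relation_distill_loss(student_emb: np.ndarray, teacher_emb: np.ndarray) -> float:
    """Hypothetical distillation loss for a representation encoder.

    MSE between the student's and teacher's batch similarity matrices,
    so the student can have a different embedding dimension than the
    teacher while still learning the same batch-level geometry.
    """
    s = cosine_sim_matrix(student_emb)
    t = cosine_sim_matrix(teacher_emb)
    return float(np.mean((s - t) ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.normal(size=(8, 512))  # e.g., a frozen CLIP-style encoder
    student = rng.normal(size=(8, 64))   # smaller student embedding dim
    print(relation_distill_loss(student, teacher))
```

In a framework like torchdistill, a loss of this shape would plug into the config-driven criterion rather than requiring a new trainer executable; only the model I/O and evaluation code would be task-specific.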