I have noticed that in your paper, L_s is set to 0, which is consistent with the code:

```python
def _select_loss(self, selected_logits, labels):
    loss1 = 0
```
But I am curious about the experimental results with a non-zero L_s. Does the accuracy decrease noticeably?
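For concreteness, a non-zero L_s would presumably be a classification loss over the logits of the selected regions. Below is a minimal sketch, assuming `selected_logits` has shape `[B, S, C]` and every selected region shares its image-level label; the cross-entropy form and these shapes are my assumptions, not the repository's actual implementation:

```python
import torch.nn.functional as F

def _select_loss(self, selected_logits, labels):
    # Hypothetical non-zero L_s: cross-entropy on each selected
    # region's logits, with every region sharing its image's label.
    # selected_logits: [B, S, C]; labels: [B] (long dtype)
    B, S, C = selected_logits.shape
    loss1 = F.cross_entropy(
        selected_logits.reshape(B * S, C),
        labels.unsqueeze(1).expand(B, S).reshape(-1),
    )
    return loss1
```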
In our experiments, using the GCN combiner and L_s at the same time may cause unstable training. We plan to run more experiments on this, thanks.
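One common way to probe a non-zero L_s without destabilizing training is to down-weight it relative to the combiner loss. A minimal sketch; `lambda_s` and the placeholder loss values are hypothetical, not values from the paper:

```python
import torch

# Hypothetical weighted objective: a small lambda_s limits how much
# the selection loss can interfere with the GCN combiner's gradients.
lambda_s = 0.1  # made-up coefficient, tune on a validation split

loss_combiner = torch.tensor(1.2)  # placeholder for the combiner loss
loss_select = torch.tensor(0.8)    # placeholder for a non-zero L_s
loss = loss_combiner + lambda_s * loss_select
```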