Thanks so much for this exciting benchmark; I believe it will be an important resource for the research community.

By the way, we have just trained an LLM specialized for math, JiuZhang3.0, by training a data synthesis model. I have attached the download links below; would you mind testing it on your benchmark? We have released checkpoints for the 7B and 8x7B versions.

The 7B version, based on Mistral-7B:
https://huggingface.co/ToheartZhang/JiuZhang3.0-7B

The MoE version, based on Mixtral-8x7B:
https://huggingface.co/ToheartZhang/JiuZhang3.0-8x7B

Thanks for your attention to MathBench. We have noticed that your model performs impressively in mathematics, and we would be pleased to test it on MathBench in the coming days.