
Suggestion for improving the validity of dataset evaluation #70

Open
wsn555 opened this issue Jul 25, 2024 · 7 comments

Comments

@wsn555

wsn555 commented Jul 25, 2024

We have observed that in most LongBench tasks the "answer" is concentrated near the end of the context. As a result, many methods that simply discard large amounts of text from the middle (e.g., StreamingLLM) can still score well, which makes a comprehensive and fair comparison of different methods difficult. One option would be to randomly insert some question-irrelevant context at the end of some samples (just before the question), shifting the position of the "answer" and making the evaluation more reliable.
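For concreteness, a minimal sketch of the proposed perturbation (field names follow the LongBench JSON layout; the distractor pool itself is a hypothetical resource you would have to supply):

```python
import random

def insert_distractor(sample: dict, distractor_pool: list, rng=None) -> dict:
    """Append question-irrelevant text to the end of the context
    (i.e., just before the question), so the evidence no longer
    necessarily sits at the very end of the prompt."""
    rng = rng or random.Random(0)
    out = dict(sample)
    # LongBench samples carry the passage in "context" and the
    # question in "input"; only "context" is modified here.
    out["context"] = out["context"] + "\n\n" + rng.choice(distractor_pool)
    return out
```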

@bys0318
Member

bys0318 commented Jul 25, 2024

Thanks for your suggestion. The synthetic tasks in LongBench are constructed randomly in exactly this way: we place the evidence paragraph at a random position in the context. For the other tasks, to keep the distribution consistent with real-world scenarios, we avoid altering the original context in such an artificial way. This bias in the answer distribution often exists in real scenarios as well; for example, the beginning and end of an article are usually more important.

@wsn555
Author

wsn555 commented Jul 25, 2024

Thanks for the reply, but when I tried running inference with only the last 1k tokens, the accuracy was close to that with the full input, which does not seem reasonable.
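For reference, the truncation experiment described above amounts to something like the sketch below (the tokenizer checkpoint is a placeholder, not necessarily the model used in the test):

```python
from transformers import AutoTokenizer

# Placeholder checkpoint; substitute the model actually being evaluated.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

def keep_last_k_tokens(context: str, k: int = 1000) -> str:
    """Discard everything except the last k tokens of the context."""
    ids = tokenizer.encode(context, add_special_tokens=False)
    return tokenizer.decode(ids[-k:])
```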

@wsn555
Author

wsn555 commented Jul 25, 2024

Oh, I see what you mean: evidence in human language naturally tends to cluster at the beginning and the end? Well, in that case it becomes hard to tell which strategies are genuinely effective.

@YouhuiBai

In our practical testing we have run into a similar issue.
We have noticed that various methods that compress the sequence dimension of the KV cache (e.g., different heads dropping different parts of the sequence), as well as approaches for long-sequence inference (such as StreamingLLM), perform well on LongBench. They achieve high scores by retaining only a small portion of the text at the end of the sequence. Tasks like needle-in-a-haystack evaluate a model's capabilities more accurately because they hide the "needle" at different positions, but unfortunately they do not cover all aspects comprehensively. If possible, I hope the authors can enhance the LongBench mechanism.
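For context on why such methods score well here: StreamingLLM-style eviction keeps a few initial "attention sink" tokens plus the most recent window and discards everything in between. A simplified sketch of the retention rule (illustrative defaults, not the reference implementation):

```python
def streamingllm_keep_indices(seq_len: int, n_sink: int = 4, window: int = 1024) -> list:
    """Positions whose KV-cache entries survive StreamingLLM-style
    eviction: a few initial "sink" tokens plus the most recent window.
    Evidence located in the discarded middle is invisible to the model,
    so end-heavy benchmarks underestimate the information loss."""
    if seq_len <= n_sink + window:
        return list(range(seq_len))
    return list(range(n_sink)) + list(range(seq_len - window, seq_len))
```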

@bys0318
Member

bys0318 commented Aug 17, 2024

Thanks for your suggestion. We will consider updating LongBench.

@YueChenkkk

> Thanks for the reply, but when I tried running inference with only the last 1k tokens, the accuracy was close to that with the full input, which does not seem reasonable.

I'm curious which tasks this happens on; could you share them?

@wsn555
Author

wsn555 commented Aug 26, 2024

For example, TriviaQA, SAMSum, LCC, and so on. If you keep only the last quarter of the text plus the prompt at the beginning, you get almost the same score.
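That recipe, roughly (a character-level sketch; characters stand in for tokens, and the prompt/context split is assumed):

```python
def keep_head_and_tail(prompt_head: str, context: str, frac: float = 0.25) -> str:
    """Keep the task prompt at the beginning plus only the last
    `frac` of the context, approximating the setup described above."""
    cut = int(len(context) * (1 - frac))
    return prompt_head + context[cut:]
```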
