
CIBench: Evaluating Your LLMs with a Code Interpreter Plugin

✨ Introduction

This is an evaluation harness for the benchmark described in CIBench: Evaluating Your LLMs with a Code Interpreter Plugin.

[Paper] [Project Page] [LeaderBoard]

While LLM-based agents, which use external tools to solve complex problems, have made significant progress, benchmarking their ability remains challenging, which hinders a clear understanding of their limitations. In this paper, we propose an interactive evaluation framework, named CIBench, to comprehensively assess LLMs' ability to utilize code interpreters for data science tasks. Our evaluation framework includes an evaluation dataset and two evaluation modes. The dataset is constructed using an LLM-human cooperative approach and simulates an authentic workflow by leveraging consecutive and interactive IPython sessions. The two evaluation modes assess LLMs' ability with and without human assistance. We conduct extensive experiments to analyze the ability of 24 LLMs on CIBench and provide valuable insights for future LLMs in code interpreter utilization.

🛠️ Preparations

CIBench is evaluated with OpenCompass. Please install OpenCompass first:

conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
pip install -r requirements/agent.txt

Then, clone CIBench:

cd ..
git clone https://github.com/open-compass/CIBench.git
cd CIBench

and move the cibench_eval folder to opencompass/config.
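For example (a sketch; it assumes the cibench_eval folder sits at the top level of this repo):

cp -r cibench_eval ../opencompass/config/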

💾 Test Data

You can download the CIBench dataset from here.

Then, unzip the dataset and place it in OpenCompass/data. The data path should look like OpenCompass/data/cibench_dataset/cibench_{generation or template}.
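For example (the archive name cibench_dataset.zip is an assumption; use the name of the file you downloaded):

unzip cibench_dataset.zip -d OpenCompass/data/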

Finally, use the following script to download the necessary data.

cd OpenCompass/data/cibench_dataset
sh collect_datasources.sh

🤗 HuggingFace Models

  1. Download the HuggingFace model to your local path (see the download sketch below).
  2. Run the model with the following script in the opencompass directory:
python run.py config/cibench_eval/eval_cibench_hf.py
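For step 1, a sketch using huggingface-cli (the model name and target path here are illustrative, not prescribed by CIBench):

huggingface-cli download internlm/internlm2-chat-7b --local-dir ./models/internlm2-chat-7b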

Note that the current accelerator config (-a lmdeploy) does not support the CodeAgent model. If you want to use LMDeploy to accelerate the evaluation, please refer to lmdeploy_internlm2_chat_7b and write the model config yourself.
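A minimal sketch of such a model config, assuming the TurboMindModel interface of recent OpenCompass releases (field names may differ across versions; the referenced lmdeploy_internlm2_chat_7b config is authoritative):

from opencompass.models import TurboMindModel

models = [
    dict(
        type=TurboMindModel,                 # LMDeploy TurboMind backend
        abbr='internlm2-chat-7b-turbomind',  # short name shown in result tables
        path='internlm/internlm2-chat-7b',   # HF repo id or local model path
        engine_config=dict(session_len=2048, max_batch_size=8),
        gen_config=dict(top_k=1, temperature=1.0, max_new_tokens=1024),
        max_out_len=1024,
        max_seq_len=2048,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
    )
]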

💫 Final Results

Once all test samples have finished, you can check the results in outputs/cibench.

Note that the output images will be saved in output_images.
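For example (the timestamped subfolder layout is OpenCompass's default behavior and is assumed here):

ls outputs/cibench/    # one timestamped folder per run, with predictions, results, and summary
ls output_images/      # images produced by the executed code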

📊 Benchmark Results

For more detailed and comprehensive benchmark results, please refer to the 🏆 CIBench official leaderboard!

❤️ Acknowledgements

CIBench is built with Lagent and OpenCompass. Thanks for their awesome work!

💳 License

This project is released under the Apache 2.0 license.
