
🌟 Why We Feel: Breaking Boundaries in Emotional Reasoning with Multimodal Large Language Models


Tip

Check out our new project MER-Factory, an automated factory for building Multimodal Emotion Recognition and Reasoning datasets!

📦 Prerequisites

To get started with EIBench, you'll need to download and prepare the source datasets that the benchmark builds on.

After downloading, unzip the datasets and place them in the datasets folder in your project directory.
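For reference, the commands below assume a layout roughly like the following; the dataset subfolder names here are placeholders, since the actual dataset list is linked from the repository rather than reproduced in this section:

EIBench/                 # this repository
datasets/
    dataset_a/           # an unzipped source dataset (placeholder name)
        img_0001.jpg
        ...
    dataset_b/
        ...

The --image-path argument of the baseline scripts should point at this datasets/ directory.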

🛠️ Setup & Usage

To use the EIBench dataset and benchmark in your project:

  1. Clone this repository:
git clone https://github.com/Lum1104/EIBench.git
  2. Navigate into the repository:
cd EIBench
  3. Run the example baseline code and test your own models.

For each baseline model, please install the required environment as needed:

# Basic EIBench
python EIBench/baselines/qwen/qwen_user.py --model-path Qwen/Qwen-VL-Chat --input-json EIBench/EI_Basic/user.jsonl --output-json EIBench/EI_Basic/qwen_basic.jsonl --image-path datasets/
# Complex EIBench
python EIBench/baselines/qwen/qwen_complex.py --model-path Qwen/Qwen-VL-Chat --input-json EIBench/EI_Complex/ei_complex.jsonl --output-json EIBench/EI_Complex/qwen_complex.jsonl --image-path datasets/
  4. Get evaluation results with LLaMA-3 or ChatGPT-3.5.

Here is the script for LLaMA-3 evaluation.

# Basic EIBench
cd EIBench/EI_Basic/
python llama3-eval.py --model-id meta-llama/Meta-Llama-3-8B-Instruct --ec-data-file qwen_basic.jsonl --gt-file basic_ground_truth.json --output-file qwen_basic_llama3_scores.jsonl
python get_scores.py --file-path qwen_basic_llama3_scores.jsonl
# Complex EIBench
cd ../EI_Complex/
python llama3-eval-complex.py --ec-data-file qwen_complex.jsonl --gt-file ei_complex.jsonl --output-file qwen_complex_llama3_scores.jsonl --model-id meta-llama/Meta-Llama-3-8B-Instruct
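If you want to spot-check individual judge outputs before aggregating them with get_scores.py, a few lines of Python suffice. This is only a sketch: the score files are JSON Lines, but the exact field names are an assumption here, not documented behavior.

import json

# Each line of the score file is one JSON object (JSON Lines format).
with open("qwen_basic_llama3_scores.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # "score" and "reason" are hypothetical keys; adjust to the real ones.
        print(record.get("score"), record.get("reason"))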

Here is the script for ChatGPT-3.5 evaluation. Prepare your API key and set it in the OpenAI(api_key="YOUR_API_KEY") call inside the scripts.

# Basic EIBench
cd EIBench/EI_Basic/
python gpt-eval.py --ec-data-file qwen_basic.jsonl --gt-file basic_ground_truth.json --output-file qwen_basic_gpt_scores.jsonl
python get_scores.py --file-path qwen_basic_gpt_scores.jsonl
# Complex EIBench
cd ../EI_Complex/
python gpt-eval-complex.py --ec-data-file qwen_complex.jsonl --gt-file ei_complex.jsonl --output-file qwen_complex_gpt_scores.jsonl
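For reference, the OpenAI client setup these scripts depend on looks roughly like the sketch below (openai-python v1 style). The prompt shown is an assumption; the real judging prompts live in gpt-eval.py.

from openai import OpenAI

# Put your real key here, as noted above.
client = OpenAI(api_key="YOUR_API_KEY")

# Hypothetical call shape; the actual evaluation prompt is built by the script.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Score this emotion explanation against the ground truth ..."}],
)
print(response.choices[0].message.content)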

We also provide evaluation code for Long-term Coherence. Please install the required packages:

pip install spacy
python -m spacy download en_core_web_sm
cd EIBench/EI_Basic/
python long_term_scores.py --file-path path/to/ei_data.jsonl
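The README does not spell out how long-term coherence is computed; as one illustration of the kind of measurement long_term_scores.py could make with spaCy, the sketch below averages the similarity of adjacent sentences in an explanation. Treat it as an assumption for illustration, not the project's actual metric.

import spacy

# Note: the small model ships no static word vectors, so similarity
# falls back to context-sensitive tensors (spaCy will warn about this).
nlp = spacy.load("en_core_web_sm")

def adjacent_sentence_similarity(text: str) -> float:
    """Average similarity between neighbouring sentences of one explanation."""
    sents = list(nlp(text).sents)
    if len(sents) < 2:
        return 1.0
    sims = [sents[i].similarity(sents[i + 1]) for i in range(len(sents) - 1)]
    return sum(sims) / len(sims)

print(adjacent_sentence_similarity("She frowns at the letter. Her hands tremble as she reads on."))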

Baselines

Closed-source Models

# (gpt4o/gpt4v)
python gpt4-basic.py --ec-data-file path/to/user.jsonl --image-path path/to/dataset/ --output-file gpt4o_user.jsonl
python gpt4-score-complex.py --gt-file path/to/ei_complex.jsonl --image-path path/to/dataset/ --output-file gpt4o_complex.jsonl
# (Claude-3-haiku/Claude-3-sonnet)
python claude_basic.py --ec-data-file path/to/user.jsonl --image-path path/to/dataset/ --output-file claude_haiku_user.jsonl
python claude_complex.py --gt-file path/to/ei_complex.jsonl --image-path path/to/dataset/ --output-file claude_haiku_complex.jsonl
# qwen-vl-plus
python qwen_api_basic.py --ec-data-file path/to/user.jsonl --image-path path/to/dataset/ --output-file qwen_api_user.jsonl
python qwen_api_complex.py --gt-file path/to/ei_complex.jsonl --image-path path/to/dataset/ --output-file qwen_api_complex.jsonl

Open-source Models

Please set up the environment required by each baseline model:

LLaVA

cd LLaVA
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
# Swap in different LLaVA checkpoints to get their evaluation results.
python -m llava.serve.ei_basic_llava --model-path liuhaotian/llava-v1.6-34b --image-file path/to/user.jsonl --out-json llava34b_user.jsonl --image-path path/to/dataset/
python -m llava.serve.ei_complex_llava --model-path liuhaotian/llava-v1.6-34b --image-file path/to/ei_complex.jsonl --out-json llava34b_complex.jsonl --image-path path/to/dataset/

MiniGPT4-v2

cd MiniGPT4-v2
conda env create -f environment.yml
conda activate minigptv
# Modify MiniGPT4-v2/eval_configs/minigptv2_eval.yaml
python ei_basic_minigpt4v2.py --cfg-path eval_configs/minigptv2_eval.yaml  --gpu-id 0 --img-path path/to/user.jsonl --out-json minigpt4v2_user.jsonl --dataset-path path/to/dataset/
python ei_complex_minigpt4v2.py --cfg-path eval_configs/minigptv2_eval.yaml  --gpu-id 0 --img-path path/to/ei_complex.jsonl --out-json minigpt_complex.jsonl --dataset-path path/to/dataset/

Otter

cd Otter
conda env create -f environment.yml
conda activate otter
python ei_basic_otter.py --ec-data-file path/to/user.jsonl --image-path path/to/dataset/ --output-file otter_user.jsonl
python ei_complex_otter.py --gt-file path/to/ei_complex.jsonl --image-path path/to/dataset/ --output-file otter_complex.jsonl

Feel free to explore, contribute, and raise issues if you run into any trouble!
