I'm trying to reproduce the BLEU score evaluation for the code documentation generation task described in your work. The script bleu.py seems to require a reference file as input (e.g., ground-truth documentation texts), but I don't see this file in the repository or documentation. Could you please:
1. Specify how to obtain or generate this reference file?
2. Clarify its expected format (e.g., line-aligned with model outputs; see the sketch below for what I'm currently assuming)?
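For context, here is what I am assuming the evaluation looks like; if this differs from how bleu.py is meant to be used, a pointer to the correct invocation and file format would be very helpful. The file names (hyp.txt, ref.txt) are my own placeholders, and I'm using NLTK's smoothed corpus BLEU only as a stand-in, not the repository's actual bleu.py:

```python
# Minimal sketch of my assumed setup: one generated docstring per line in
# hyp.txt, line-aligned with one ground-truth docstring per line in ref.txt.
# NLTK's corpus_bleu is used here only as a stand-in for bleu.py.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

with open("hyp.txt", encoding="utf-8") as f:
    hypotheses = [line.strip().split() for line in f]
with open("ref.txt", encoding="utf-8") as f:
    # One reference per hypothesis, wrapped in a list as corpus_bleu expects.
    references = [[line.strip().split()] for line in f]

assert len(hypotheses) == len(references), "hyp.txt and ref.txt must be line-aligned"

score = corpus_bleu(
    references,
    hypotheses,
    smoothing_function=SmoothingFunction().method4,
)
print(f"BLEU-4: {score * 100:.2f}")
```

If the reference file is expected to be tokenized differently (or to carry example IDs per line), please let me know so I can match the original evaluation exactly.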