
metterian

Korean BERTScore

Implementation of BERTScore for Korean text generation evaluation.

Key Features

  • Support for Korean-optimized pre-trained models:
    • kykim/electra-kor-base
    • monologg/koelectra-base-v3-discriminator
    • beomi/KcELECTRA-base
  • Support for other multilingual models
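At its core, BERTScore greedily matches each token in the candidate against its most similar token in the reference (and vice versa) using cosine similarity over contextual embeddings, then combines the two directions into an F1. A minimal NumPy sketch of that greedy-matching step, with unit-normalized placeholder vectors standing in for the ELECTRA token embeddings (`bertscore_f1` is an illustrative name, not part of this PR):

```python
import numpy as np

def bertscore_f1(cand_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Greedy-matching F1 as in BERTScore, on unit-normalized token embeddings.

    cand_emb: (n_cand, d) candidate token embeddings
    ref_emb:  (n_ref, d) reference token embeddings
    """
    # Pairwise cosine similarity: rows = candidate tokens, cols = reference tokens.
    sim = cand_emb @ ref_emb.T
    # Recall: each reference token is matched to its best candidate token.
    recall = sim.max(axis=0).mean()
    # Precision: each candidate token is matched to its best reference token.
    precision = sim.max(axis=1).mean()
    return 2 * precision * recall / (precision + recall)
```

With identical candidate and reference embeddings this returns 1.0; in practice the embeddings come from a model such as kykim/electra-kor-base at a tuned layer.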

Experiment

Results are summarized in the linked sheet.

kikodde and others added 10 commits January 7, 2025 22:09
- Add AIHub Korean translation quality assessment dataset support
- Add Korean BERT models (kykim/bert-kor-base, klue/bert-base, etc.)
- Implement data preprocessing for AIHub dataset
- Add Korean font support for visualization
- Update gitignore for new data directories

This commit enables BERTScore evaluation on Korean translation pairs
(en-ko, ja-ko, zh-ko) using AIHub's translation quality assessment dataset
and various Korean BERT models.

This commit refactors the preprocess.py and tune_layers.py files.
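The preprocessing step has to turn the AIHub dataset into (candidate, reference) pairs grouped by language pair before scoring. A hedged sketch of that grouping, assuming a hypothetical flat export of `(source_lang, candidate, reference)` rows; the actual logic in this PR lives in preprocess.py:

```python
from collections import defaultdict

def group_pairs(rows):
    """Group (source_lang, candidate, reference) rows into per-language-pair
    lists, e.g. "en-ko", "ja-ko", "zh-ko".

    `rows` is a hypothetical flat export of the AIHub translation quality
    assessment data, not the PR's actual on-disk format.
    """
    pairs = defaultdict(list)
    for src_lang, cand, ref in rows:
        # All target sentences in this dataset are Korean.
        pairs[f"{src_lang}-ko"].append((cand, ref))
    return dict(pairs)
```

Each resulting list can then be passed to BERTScore as parallel candidate and reference batches.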
@Tiiiger (Owner) commented Apr 13, 2025:

hi @metterian ,

Thank you for the contribution! Do you mind removing the preprocess file from tune_layers?

Otherwise, I am happy to merge it in.
