# Are AI Detectors Good Enough? A Survey on Quality of Datasets With Machine-Generated Texts

[arXiv:2410.14677](https://arxiv.org/abs/2410.14677) · LICENSE

German Gritsai¹ ², Anastasia Voznyuk¹, Andrey Grabovoy¹, Yury Chekhovich¹

¹ Advacheck OÜ, Tallinn, Estonia; ² Université Grenoble Alpes, Grenoble, France

## 🛎️ TL;DR

We demonstrate that datasets from AI-detection shared tasks and research papers are often inadequate for evaluating AI detectors, producing systematic errors that inflate the detectors' quality scores.

## ⛰️ Overview

The rapid development of autoregressive Large Language Models (LLMs) has significantly improved the quality of generated texts, necessitating reliable machine-generated text detectors. A huge number of detectors and collections containing AI fragments have emerged, and several detection methods have even shown recognition quality up to 99.9% according to the target metrics on such collections. However, the quality of such detectors tends to drop dramatically in the wild, raising the question: are detectors actually highly trustworthy, or do their high benchmark scores come from the poor quality of evaluation datasets? In this paper, we emphasise the need for robust and qualitative methods for evaluating generated data in order to be secure against bias and the low generalising ability of future models. We present a systematic review of datasets from competitions dedicated to AI-generated content detection and propose methods for evaluating the quality of datasets containing AI-generated fragments. In addition, we discuss the possibility of using high-quality generated data to achieve two goals: improving the training of detection models and improving the training datasets themselves. Our contribution aims to facilitate a better understanding of the dynamics between human and machine text, which will ultimately support the integrity of information in an increasingly automated world.
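
One practical way to probe dataset quality, in the spirit of this argument, is to check how well a shallow baseline already separates human and machine texts: if a trivial model reaches near-ceiling accuracy, high detector scores on that benchmark say little about real-world detection. Below is a minimal sketch of such a probe using a TF-IDF + logistic regression baseline with scikit-learn; it is not the evaluation pipeline from the paper, and the file and column names (`dataset.csv`, `text`, `label`) are hypothetical placeholders.

```python
# Minimal dataset-quality probe (a sketch, not the paper's method):
# if a shallow baseline separates "human" from "machine" texts almost
# perfectly, the benchmark is likely superficially distinguishable and
# will inflate detector scores.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical input: a CSV with a `text` column and a binary `label`
# column (human vs. machine); adapt the names to the dataset at hand.
df = pd.read_csv("dataset.csv")

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=50_000),
    LogisticRegression(max_iter=1000),
)

# 5-fold cross-validated accuracy of the shallow baseline.
scores = cross_val_score(baseline, df["text"], df["label"], cv=5, scoring="accuracy")
print(f"Baseline accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
# A near-ceiling score (e.g. > 0.99) suggests trivially separable classes,
# i.e. a dataset on which a reported 99.9% detector quality means little.
```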

## 📢 Updates

- **Jan 2025:** 🎉 Our paper is accepted to the AAAI 2025 Preventing and Detecting LLM Misinformation Workshop!
- **Oct 2024:** Our code and preprint on arXiv are now available!

## 🚞 Analysed Datasets

| Research Datasets | Year | Languages |
|---|---|---|
| GPT-2 | 2019 | en |
| TweepFake | 2019 | en |
| HC3 | 2023 | en, zh |
| GhostBuster | 2023 | en |
| MGTBench | 2024 | en |
| MAGE | 2024 | en |
| M4 | 2024 | en, zh, ru, bg, ur, id |
| OutFox | 2024 | en |

| Shared Task Datasets | Year | Languages |
|---|---|---|
| DAGPap22 | 2022 | en |
| RuATD | 2022 | ru |
| AuTexTification | 2023 | en, es |
| IberAuTexTification | 2024 | es, en, ca, gl, eu, pt |
| Voight-Kampff GenAI | 2024 | en |
| SemEval 2024 Task 8 | 2024 | en, ar, de, it |
| GenAI Content Detection | 2025 | en, zh, it, ar, de, ru, bg, ur, id |

## 📚 Citation

If you find our code or ideas useful in your research, please cite our work as follows:

```bibtex
@misc{2024aidetectorsgoodenough,
      title={Are AI Detectors Good Enough? A Survey on Quality of Datasets With Machine-Generated Texts},
      author={German Gritsai and Anastasia Voznyuk and Andrey Grabovoy and Yury Chekhovich},
      year={2024},
      eprint={2410.14677},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.14677},
}
```
