Bibliography

[[Ahn et al., 2020]]

Seoyoung Ahn, Conor Kelton, Aruna Balasubramanian, and Greg Zelinsky. Towards predicting reading comprehension from gaze behavior. In ACM Symposium on Eye Tracking Research and Applications, ETRA '20 Short Papers. New York, NY, USA, 2020. Association for Computing Machinery. URL: https://doi.org/10.1145/3379156.3391335, doi:10.1145/3379156.3391335.

[[Alacam et al., 2024]]

Özge Alacam, Sanne Hoeken, and Sina Zarrieß. Eyes don't lie: subjective hate annotation and detection with gaze. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 187–205. Miami, Florida, USA, November 2024. Association for Computational Linguistics. URL: https://aclanthology.org/2024.emnlp-main.11/, doi:10.18653/v1/2024.emnlp-main.11.

[[Alakmeh et al., 2024]]

Tarek Alakmeh, David Reich, Lena Jäger, and Thomas Fritz. Predicting code comprehension: a novel approach to align human gaze with code using deep neural networks. Proc. ACM Softw. Eng., July 2024. URL: https://doi.org/10.1145/3660795, doi:10.1145/3660795.

[[Alexander et al., 2017]]

Lindsay M. Alexander, Jasmine Escalera, Lei Ai, Charissa Andreotti, Karina Febre, Alexander Mangone, Natan Vega-Potler, Nicolas Langer, Alexis Alexander, Meagan Kovacs, Shannon Litke, Bridget O'Hagan, Jennifer Andersen, Batya Bronstein, Anastasia Bui, Marijayne Bushey, Henry Butler, Victoria Castagna, Nicolas Camacho, Elisha Chan, Danielle Citera, Jon Clucas, Samantha Cohen, Sarah Dufek, Megan Eaves, Brian Fradera, Judith Gardner, Natalie Grant-Villegas, Gabriella Green, Camille Gregory, Emily Hart, Shana Harris, Megan Horton, Danielle Kahn, Katherine Kabotyanski, Bernard Karmel, Simon P. Kelly, Kayla Kleinman, Bonhwang Koo, Eliza Kramer, Elizabeth Lennon, Catherine Lord, Ginny Mantello, Amy Margolis, Kathleen R. Merikangas, Judith Milham, Giuseppe Minniti, Rebecca Neuhaus, Alexandra Levine, Yael Osman, Lucas C. Parra, Ken R. Pugh, Amy Racanello, Anita Restrepo, Tian Saltzman, Batya Septimus, Russell Tobe, Rachel Waltz, Anna Williams, Anna Yeo, Francisco X. Castellanos, Arno Klein, Tomas Paus, Bennett L. Leventhal, R. Cameron Craddock, Harold S. Koplewicz, and Michael P. Milham. An open resource for transdiagnostic research in pediatric mental health and learning disorders. Scientific Data, 4(1):1–26, 2017. URL: https://www.nature.com/articles/sdata2017181, doi:10.1038/sdata.2017.181.

[[Berzak et al., 2025]]

Yevgeni Berzak, Jonathan Malmaud, Omer Shubi, Yoav Meiri, Ella Lion, and Roger Levy. OneStop: a 360-participant English eye tracking dataset with different reading regimes. PsyArXiv preprint, 2025. URL: https://osf.io/preprints/psyarxiv/kgxv5_v2, doi:10.31234/osf.io/kgxv5_v2.

[[Björnsdóttir et al., 2023]]

Marina Björnsdóttir, Nora Hollenstein, and Maria Barrett. Dyslexia prediction from natural reading of Danish texts. In Tanel Alumäe and Mark Fishel, editors, Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), 60–70. Tórshavn, Faroe Islands, May 2023. University of Tartu Library. URL: https://aclanthology.org/2023.nodalida-1.7.

[[Bolliger et al., 2024]]

Lena Sophia Bolliger, Patrick Haller, Isabelle Caroline Rose Cretton, David Robert Reich, Tannon Kew, and Lena Ann Jäger. EMTeC: a corpus of eye movements on machine-generated texts. 2024. URL: https://arxiv.org/abs/2408.04289, arXiv:2408.04289.

[[Bondar et al., 2025]]

Anna Bondar, David R. Reich, and Lena A. Jäger. CoLAGaze: a corpus of eye movements for linguistic acceptability. In 2025 Symposium on Eye Tracking Research and Applications, ETRA '25. New York, NY, USA, 2025. Association for Computing Machinery. doi:10.1145/3715669.3723120.

[[Coutrot et al., 2016]]

Antoine Coutrot, Nicola Binetti, Charlotte Harrison, Isabelle Mareschal, and Alan Johnston. Face exploration dynamics differentiate men and women. Journal of Vision, 16(14):16, November 2016. URL: https://doi.org/10.1167/16.14.16, doi:10.1167/16.14.16.

[[Engbert et al., 2003]]

Ralf Engbert and Reinhold Kliegl. Microsaccades uncover the orientation of covert attention. Vision Research, 43(9):1035–1045, 2003. doi:10.1016/S0042-6989(03)00084-1.

[[Engbert et al., 2015]]

Ralf Engbert, Petra Sinn, Konstantin Mergenthaler, and Hans Trukenbrod. Microsaccade toolbox 0.9. 2015. URL: http://read.psych.uni-potsdam.de/index.php?view=article&id=140.

[[Frank et al., 2013]]

Stefan L. Frank, Irene Fernandez Monsalve, Robin L. Thompson, and Gabriella Vigliocco. Reading time data for evaluating broad-coverage models of English sentence processing. Behavior Research Methods, 2013. doi:10.3758/s13428-012-0313-y.

[[Griffith et al., 2021]]

Henry Griffith, Dillon Lohr, Evgeny Abdulin, and Oleg Komogortsev. GazeBase, a large-scale, multi-stimulus, longitudinal eye movement dataset. Scientific Data, 8, 2021. doi:10.1038/s41597-021-00959-y.

[[Hollenstein et al., 2022]]

Nora Hollenstein, Maria Barrett, and Marina Björnsdóttir. The Copenhagen corpus of eye tracking recordings from natural reading of Danish texts. In Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Thirteenth Language Resources and Evaluation Conference, 1712–1720. Marseille, France, June 2022. European Language Resources Association. URL: https://aclanthology.org/2022.lrec-1.182.

[[Jakobi et al., 2024]]

Deborah N. Jakobi, Thomas Kern, David R. Reich, Patrick Haller, and Lena A. Jäger. PoTeC: a German naturalistic eye-tracking-while-reading corpus. 2024. Under review. URL: https://github.com/DiLi-Lab/PoTeC.

[[Kuperman et al., 2025]]

Victor Kuperman, Sascha Schroeder, Cengiz Acartürk, Niket Agrawal, Dominick M. Alexandre, Lena S. Bolliger, Jan Brasser, César Campos-Rojas, Denis Drieghe, Dušica Filipović Đurđević, and et al. New data on text reading in English as a second language: the wave 2 expansion of the Multilingual Eye-Movement Corpus (MECO). Studies in Second Language Acquisition, pages 1–19, 2025. doi:10.1017/S0272263125000105.

[[Kuperman et al., 2023]]

Victor Kuperman, Noam Siegelman, Sascha Schroeder, Cengiz Acartürk, Svetlana Alexeeva, Simona Amenta, Raymond Bertram, Rolando Bonandrini, Marc Brysbaert, Daria Chernova, and et al. Text reading in English as a second language: evidence from the Multilingual Eye-Movements Corpus. Studies in Second Language Acquisition, 45(1):3–37, 2023. doi:10.1017/S0272263121000954.

[[Lan et al., 2020]]

Guohao Lan, Bailey Heit, Tim Scargill, and Maria Gorlatova. GazeGraph: graph-based few-shot cognitive context sensing from human visual behavior. In Proceedings of the 18th Conference on Embedded Networked Sensor Systems (SenSys), SenSys '20, 422–435. New York, NY, USA, 2020. Association for Computing Machinery. URL: https://doi.org/10.1145/3384419.3430774, doi:10.1145/3384419.3430774.

[[Lohr et al., 2023]]

Dillon Lohr, Samantha Aziz, Lee Friedman, and Oleg Komogortsev. GazeBaseVR, a large-scale, longitudinal, binocular eye-tracking dataset collected in virtual reality. Scientific Data, 10, 2023. doi:10.1038/s41597-023-02075-5.

[[Luke et al., 2018]]

Steven G. Luke and Kiel Christianson. The Provo Corpus: a large eye-tracking corpus with predictability norms. Behavior Research Methods, 50(2):826–833, 2018. URL: https://link.springer.com/article/10.3758/s13428-017-0908-4, doi:10.3758/s13428-017-0908-4.

[[Maharaj et al., 2023]]

Kishan Maharaj, Ashita Saxena, Raja Kumar, Abhijit Mishra, and Pushpak Bhattacharyya. Eyes show the way: modelling gaze behaviour for hallucination detection. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, 11424–11438. Singapore, December 2023. Association for Computational Linguistics. URL: https://aclanthology.org/2023.findings-emnlp.764/, doi:10.18653/v1/2023.findings-emnlp.764.

[[Makowski et al., 2020]]

Silvia Makowski, Lena A. Jäger, Paul Prasse, and Tobias Scheffer. JuDo1000 eye tracking data set. 2020. doi:10.17605/OSF.IO/5ZPVK.

[[Nahatame et al., 2024]]

Shingo Nahatame, Tomoko Ogiso, Yukino Kimura, and Yuji Ushiro. TECO: an eye-tracking corpus of Japanese L2 English learners' text reading. Research Methods in Applied Linguistics, 3(2):100123, 2024. URL: https://www.sciencedirect.com/science/article/pii/S2772766124000296, doi:10.1016/j.rmal.2024.100123.

[[Pan et al., 2022]]

Jinger Pan, Ming Yan, Eike M. Richter, Hua Shu, and Reinhold Kliegl. The Beijing Sentence Corpus: a Chinese sentence corpus with eye movement data and predictability norms. Behavior Research Methods, 2022. URL: https://link.springer.com/article/10.3758/s13428-021-01730-2, doi:10.3758/s13428-021-01730-2.

[[Pantanowitz et al., 2021]]

Adam Pantanowitz, Kimoon Kim, Chelsey Chewins, and David M Rubin. Gaze tracking dataset for comparison of smooth and saccadic eye tracking. Data in Brief, 34:106730, 2021. URL: https://www.sciencedirect.com/science/article/pii/S2352340921000160, doi:10.1016/j.dib.2021.106730.

[[Prasse et al., 2025]]

Paul Prasse, David R. Reich, Jakob Chwastek, Silvia Makowski, Lena A. Jäger, and Tobias Scheffer. Detection of alcohol inebriation from eye movements using remote and wearable eye trackers. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications. 2025. doi:10.1145/3715669.3723109.

[[Reich et al., 2024]]

David R. Reich, Shuwen Deng, Marina Björnsdóttir, Lena Jäger, and Nora Hollenstein. Reading does not equal reading: comparing, simulating and exploiting reading behavior across populations. In Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue, editors, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 13586–13594. Torino, Italia, May 2024. ELRA and ICCL. URL: https://aclanthology.org/2024.lrec-main.1187.

[[Salvucci et al., 2000]]

Dario D. Salvucci and Joseph H. Goldberg. Identifying fixations and saccades in eye-tracking protocols. Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, pages 71–78, 2000. doi:10.1145/355017.355028.

[[Schwetlick et al., 2024]]

Lisa Schwetlick, Matthias Kümmerer, Matthias Bethge, and Ralf Engbert. Potsdam data set of eye movement on natural scenes (DAEMONS). Frontiers in Psychology, 2024. URL: https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1389609, doi:10.3389/fpsyg.2024.1389609.

[[Sedmidubsky et al., 2025]]

Jan Sedmidubsky, Nicol Dostalova, Roman Svaricek, and Wolf Culemann. ETDD70: eye-tracking dataset for classification of dyslexia using AI-based methods. In Edgar Chávez, Benjamin Kimia, Jakub Lokoč, Marco Patella, and Jan Sedmidubsky, editors, Similarity Search and Applications, 34–48. Cham, 2025. Springer Nature Switzerland. URL: https://link.springer.com/chapter/10.1007/978-3-031-75823-2_3, doi:10.1007/978-3-031-75823-2_3.

[[Siegelman et al., 2022]]

Noam Siegelman, Sascha Schroeder, Cengiz Acartürk, Hee-Don Ahn, Svetlana Alexeeva, Simona Amenta, Raymond Bertram, Rolando Bonandrini, Marc Brysbaert, Daria Chernova, and others. Expanding horizons of cross-linguistic research on reading: The Multilingual Eye-movement Corpus (MECO). Behavior Research Methods, 54(6):2843–2863, 2022. doi:10.3758/s13428-021-01772-6.

[[Sümer et al., 2021]]

Ömer Sümer, Efe Bozkir, Thomas Kübler, Sven Grüner, Sonja Utz, and Enkelejda Kasneci. FakeNewsPerception: an eye movement dataset on the perceived believability of news stories. Data in Brief, 35:106909, March 2021. doi:10.1016/j.dib.2021.106909.

[[Miltenburg et al., 2018]]

Emiel van Miltenburg, Ákos Kádár, Ruud Koolen, and Emiel Krahmer. DIDEC: the Dutch image description and eye-tracking corpus. In Emily M. Bender, Leon Derczynski, and Pierre Isabelle, editors, Proceedings of the 27th International Conference on Computational Linguistics, 3658–3669. Santa Fe, New Mexico, USA, August 2018. Association for Computational Linguistics. URL: https://aclanthology.org/C18-1310.

[[Yan et al., 2025]]

Ming Yan, Jinger Pan, and Reinhold Kliegl. The Beijing Sentence Corpus II: a cross-script comparison between traditional and simplified Chinese sentence reading. Behavior Research Methods, 2025. URL: https://link.springer.com/article/10.3758/s13428-024-02523-z, doi:10.3758/s13428-024-02523-z.

[[Zermiani et al., 2024]]

Francesca Zermiani, Prajit Dhar, Ekta Sood, Fabian Kögel, Andreas Bulling, and Maria Wirzberger. InteRead: an eye tracking dataset of interrupted reading. In Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue, editors, Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 9154–9169. Torino, Italia, May 2024. ELRA and ICCL. URL: https://aclanthology.org/2024.lrec-main.802.

[[Zhang et al., 2022]]

Guangyao Zhang, Panpan Yao, Guojie Ma, Jingwen Wang, Junyi Zhou, Linjieqiong Huang, Pingping Xu, Lijing Chen, Songlin Chen, Junjuan Gu, Wei Wei, Xi Cheng, Huimin Hua, Pingping Liu, Ya Lou, Wei Shen, Yaqian Bao, Jiayu Liu, Nan Lin, and Xingshan Li. The database of eye-movement measures on words in Chinese reading. Scientific Data, 9(1):1–8, 2022. doi:10.1038/s41597-022-01464-6.