How Do Experts Read Application Letters? A Multi-Modal Study

Cited: 0
Authors
Carter, Joyce Locke [1]
Affiliation
[1] Texas Tech Univ, Lubbock, TX 79409 USA
Source
SIGDOC '12: PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON DESIGN OF COMMUNICATION | 2012
Keywords
eye-tracking; argumentation; persuasion; fixations
DOI
Not available
Chinese Library Classification
TP3 [computing technology; computer technology]
Discipline code
0812
Abstract
Fourteen faculty participants each read two letters of application to a graduate program, and data about how they read were collected using eye-tracking and a think-aloud protocol. The eye-tracking data show that expert readers "slow down" not only when they encounter grammatical and other errors, but also when they see words and phrases that match their program's mission or their own research interests. The think-aloud data were used to verify the eye-tracking results and to let readers expand on their impressions of the persuasiveness of a given letter. The project is not finished, but early impressions are that something akin to Kenneth Burke's concept of identification is a powerfully persuasive move in such letters: readers' eyes fixate on these identification moves, and participants describe those moves as positive and persuasive.
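The abstract's core measure, readers "slowing down" on certain regions of a letter, corresponds to elevated fixation dwell time on areas of interest (AOIs). The paper does not describe its analysis pipeline, so the following is only a hypothetical sketch of how per-AOI dwell times might be aggregated and unusually long dwells flagged; all function names, AOI labels, and the threshold factor are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's actual pipeline): aggregate
# eye-tracker fixation durations per area of interest (AOI) and flag
# AOIs where readers dwell far longer than average.
from collections import defaultdict

def dwell_times(fixations):
    """Sum fixation durations (ms) per AOI.

    fixations: list of (aoi_label, duration_ms) tuples, one per
    fixation as typically exported by an eye tracker.
    """
    totals = defaultdict(int)
    for aoi, duration in fixations:
        totals[aoi] += duration
    return dict(totals)

def flag_slowdowns(totals, factor=2.0):
    """Return AOIs whose total dwell time exceeds factor x the mean."""
    if not totals:
        return []
    mean = sum(totals.values()) / len(totals)
    return sorted(aoi for aoi, t in totals.items() if t > factor * mean)

# Illustrative data: a phrase matching the program's mission draws
# repeated, long fixations relative to routine letter content.
fixes = [("greeting", 180), ("mission_phrase", 950),
         ("typo_region", 700), ("closing", 150), ("mission_phrase", 420)]
print(flag_slowdowns(dwell_times(fixes)))  # → ['mission_phrase']
```

A simple mean-based threshold is only one choice; real fixation analyses typically normalize dwell time by AOI length and compare against per-reader baselines.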
Pages: 357-358
Page count: 2