Evaluating causal psychological models: A study of language theories of autism using a large sample

Cited: 4
Authors
Tang, Bohao [1 ]
Levine, Michael [2 ]
Adamek, Jack H. [2 ]
Wodka, Ericka L. [2 ,3 ]
Caffo, Brian S. [1 ]
Ewen, Joshua B. [2 ,3 ,4 ]
Affiliations
[1] Johns Hopkins Univ, Bloomberg Sch Publ Hlth, Baltimore, MD USA
[2] Kennedy Krieger Inst, Baltimore, MD 21205 USA
[3] Johns Hopkins Univ, Sch Med, Baltimore, MD 21218 USA
[4] Kennedy Krieger Inst, Neurol & Dev Med, Baltimore, MD 21205 USA
Source
FRONTIERS IN PSYCHOLOGY | 2023, Vol. 14
Keywords
language; social withdrawal; autism (ASD); psychological theory; large data analysis; causal inference; network analysis; INFANTILE-AUTISM; SPECTRUM DISORDERS; BROADER PHENOTYPE; DEFICITS; CHILDHOOD; CHILDREN; SCHIZOPHRENIA; EXPLANATION; IMPAIRMENTS; BEHAVIOR;
DOI
10.3389/fpsyg.2023.1060525
Chinese Library Classification (CLC)
B84 [Psychology];
Subject Classification Code
04; 0402;
Abstract
We used a large convenience sample (n = 22,223) from the Simons Powering Autism Research (SPARK) dataset to evaluate causal, explanatory theories of core autism symptoms. In particular, the data items collected supported testing theories that posit altered language abilities as a cause of social withdrawal, as well as alternative theories that compete with these language theories. Our results using this large dataset converge with the evolution of the field in the decades since these theories were first proposed, namely supporting primary social withdrawal (in some cases of autism) as a cause of altered language development, rather than vice versa.

To accomplish these empirical goals, we used a highly theory-constrained approach, one which differs from current data-driven modeling trends but is consistent with a very recent resurgence in theory-driven psychology. In addition to careful explication and formalization of theoretical accounts, we propose three principles for future work of this type: specification, quantification, and integration. Specification refers to constraining models with pre-existing data from both outside and within autism research, with more elaborate models and more veridical measures, and with longitudinal data collection. Quantification refers to using continuous measures of both psychological causes and effects, as well as weighted graphs; this approach avoids "universality and uniqueness" tests, which hold that a single cognitive difference could be responsible for a heterogeneous and complex behavioral phenotype. Integration of multiple explanatory paths within a single model helps the field examine multiple contributors to a single behavioral feature or to multiple behavioral features; it also allows integration of explanatory theories across multiple current-day diagnoses as well as typical development.
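The quantification and integration principles described in the abstract lend themselves to a weighted directed-graph representation. The following is a minimal, hypothetical sketch (not the authors' model or code) of how the two competing causal directions and an integrated multi-path account might be encoded; all variable names and edge weights are illustrative assumptions only.

```python
# Hypothetical sketch: encoding competing explanatory theories as weighted
# directed graphs, in the spirit of the abstract's "quantification" and
# "integration" principles. Edge weights and node names are made up.
import networkx as nx

# Theory A: altered language abilities cause social withdrawal.
theory_a = nx.DiGraph()
theory_a.add_edge("language_ability", "social_withdrawal", weight=-0.4)

# Theory B: primary social withdrawal alters language development
# (the direction the abstract reports support for).
theory_b = nx.DiGraph()
theory_b.add_edge("social_withdrawal", "language_ability", weight=-0.4)

# Integration: a single graph can carry multiple explanatory paths,
# e.g. a shared upstream factor influencing both constructs.
integrated = nx.DiGraph()
integrated.add_weighted_edges_from([
    ("shared_factor", "social_withdrawal", 0.3),    # hypothetical path
    ("shared_factor", "language_ability", 0.2),     # hypothetical path
    ("social_withdrawal", "language_ability", -0.5),
])

# Causal models of this kind must be acyclic to admit a causal ordering.
for name, g in [("A", theory_a), ("B", theory_b), ("integrated", integrated)]:
    assert nx.is_directed_acyclic_graph(g), f"model {name} contains a cycle"
    print(name, list(g.edges(data="weight")))
```

In such a representation, continuous edge weights replace all-or-none claims that a single cognitive difference explains the whole phenotype, and adding paths to one graph is how multiple contributors to a behavioral feature would be examined together.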
Pages: 19