Evaluating causal psychological models: A study of language theories of autism using a large sample

Cited by: 4
Authors:
Tang, Bohao [1 ]
Levine, Michael [2 ]
Adamek, Jack H. [2 ]
Wodka, Ericka L. [2 ,3 ]
Caffo, Brian S. [1 ]
Ewen, Joshua B. [2 ,3 ,4 ]
Affiliations:
[1] Johns Hopkins Univ, Bloomberg Sch Publ Hlth, Baltimore, MD USA
[2] Kennedy Krieger Inst, Baltimore, MD 21205 USA
[3] Johns Hopkins Univ, Sch Med, Baltimore, MD 21218 USA
[4] Kennedy Krieger Inst, Neurol & Dev Med, Baltimore, MD 21205 USA
Source:
FRONTIERS IN PSYCHOLOGY, 2023, Vol. 14
Keywords:
language; social withdrawal; autism (ASD); psychological theory; large data analysis; causal inference; network analysis; INFANTILE-AUTISM; SPECTRUM DISORDERS; BROADER PHENOTYPE; DEFICITS; CHILDHOOD; CHILDREN; SCHIZOPHRENIA; EXPLANATION; IMPAIRMENTS; BEHAVIOR;
DOI
10.3389/fpsyg.2023.1060525
Chinese Library Classification: B84 [Psychology]
Subject classification: 04; 0402
Abstract:
We used a large convenience sample (n = 22,223) from the Simons Powering Autism Research (SPARK) dataset to evaluate causal, explanatory theories of core autism symptoms. In particular, the data items collected supported the testing of theories that posited altered language abilities as a cause of social withdrawal, as well as alternative theories that competed with these language theories. Our results using this large dataset converge with the evolution of the field in the decades since these theories were first proposed, namely supporting primary social withdrawal (in some cases of autism) as a cause of altered language development, rather than vice versa.

To accomplish the above empirical goals, we used a highly theory-constrained approach, one that differs from current data-driven modeling trends but is coherent with a very recent resurgence in theory-driven psychology. In addition to careful explication and formalization of theoretical accounts, we propose three principles for future work of this type: specification, quantification, and integration. Specification refers to constraining models with pre-existing data, from both outside and within autism research, with more elaborate models and more veridical measures, and with longitudinal data collection. Quantification refers to using continuous measures of both psychological causes and effects, as well as weighted graphs. This approach avoids "universality and uniqueness" tests, which hold that a single cognitive difference could be responsible for a heterogeneous and complex behavioral phenotype. Integration of multiple explanatory paths within a single model helps the field examine multiple contributors to a single behavioral feature or to multiple behavioral features. It also allows integration of explanatory theories across multiple current-day diagnoses as well as typical development.
Pages: 19
Related articles (50 total):
  • [41] Wang, Bo; Chen, Mingda; Lin, Youfang; Papadakis, Mike; Zhang, Jie M. An Exploratory Study on Using Large Language Models for Mutation Testing. arXiv.
  • [42] Hämäläinen, Perttu; Tavast, Mikke; Kunnari, Anton. Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study. PROCEEDINGS OF THE 2023 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2023, 2023.
  • [43] Xu, Liuchang; Zhao, Shuo; Lin, Qingming; Chen, Luyao; Luo, Qianqian; Wu, Sensen; Ye, Xinyue; Feng, Hailin; Du, Zhenhong. Evaluating large language models on geospatial tasks: a multiple geospatial task benchmarking study. INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2025, 18 (01).
  • [44] Ko, Dohwan; Lee, Ji Soo; Kang, Wooyoung; Roh, Byungseok; Kim, Hyunwoo J. Large Language Models are Temporal and Causal Reasoners for Video Question Answering. 2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023: 4300-4316.
  • [45] Hu, Zhizhang; Zhang, Yue; Rossi, Ryan; Yu, Tong; Kim, Sungchul; Pan, Shijia. Are Large Language Models Capable of Causal Reasoning for Sensing Data Analysis? PROCEEDINGS OF THE 2024 WORKSHOP ON EDGE AND MOBILE FOUNDATION MODELS, EDGEFM 2024, 2024: 24-29.
  • [46] Ohtani, Ryusei; Sakurai, Yuko; Oyama, Satoshi. Does Metacognitive Prompting Improve Causal Inference in Large Language Models? 2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024: 458-459.
  • [47] Sun, Yaru; Yang, Ying; Fu, Wenhao. Exploring Synergies between Causal Models and Large Language Models for Enhanced Understanding and Inference. 2024 2ND ASIA CONFERENCE ON COMPUTER VISION, IMAGE PROCESSING AND PATTERN RECOGNITION, CVIPPR 2024, 2024.
  • [48] Sun, Zhouhao; Du, Li; Ding, Xiao; Ma, Yixuan; Zhao, Yang; Qiu, Kaitao; Liu, Ting; Qin, Bing. Causal-Guided Active Learning for Debiasing Large Language Models. PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024: 14455-14469.
  • [49] Liu, Xiao; Yin, Da; Zhang, Chen; Feng, Yansong; Zhao, Dongyan. The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023: 9009-9022.
  • [50] Vogt, Paul; Lieven, Elena. Verifying Theories of Language Acquisition Using Computer Models of Language Evolution. ADAPTIVE BEHAVIOR, 2010, 18 (01): 21-35.