Can You Give Me a Reason?: Argument-inducing Online Forum by Argument Mining

Cited by: 4
Authors
Ida, Makiko [1 ]
Morio, Gaku [1 ]
Iwasa, Kosui [1 ]
Tatsumi, Tomoyuki [1 ]
Yasui, Takaki [1 ]
Fujita, Katsuhide [1 ]
Institution
[1] Tokyo Univ Agr & Technol, Fuchu, Tokyo, Japan
Keywords
Argumentation mining; Online forum; Neural network
DOI
10.1145/3308558.3314127
CLC Number
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
This demonstration paper presents an argument-inducing online forum that prompts participants whose claims lack premises in online discussions. The proposed forum provides its participants with the following two subsystems: (1) The argument estimator for online discussions automatically generates a visualization of the argument structures in posts based on argument mining. The forum indicates structures such as claim-premise relations in real time by exploiting a state-of-the-art deep learning model. (2) The argument-inducing agent for online discussion (AIAD) automatically generates, based on the argument estimator's output, a reply post requesting further reasons in order to improve participants' argumentation. Our experimental discussion demonstrates that the argument estimator can detect argument structures in online discussions and that AIAD can induce premises from participants. To the best of our knowledge, our argument-inducing online forum is the first approach to either visualize or request argument structures in real time for online discussions. The forum can be used to collect and induce claim-reason pairs, rather than opinions alone, in order to understand the lines of reasoning in online arguments such as civic discussions, online debates, and educational applications. The argument estimator code is available at https://github.com/EdoFrank/EMNLP2018-ArgMining-Morio and a demonstration video is available at https://youtu.be/T9fNJfneQV8.
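As a rough illustration of the workflow the abstract describes, the sketch below shows how an argument-inducing agent might use the estimator's claim-premise output to decide when to ask for reasons. It is a minimal Python sketch, not the authors' implementation; the data shapes, function names, and reply template are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code) of the argument-inducing logic the
# abstract describes: given an estimated argument structure for a post, find
# claims with no supporting premise and draft a reply requesting reasons.
# All names and data shapes below are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ArgUnit:
    """One argumentative discourse unit detected by an argument estimator."""
    uid: int
    text: str
    label: str                      # "claim" or "premise"
    supports: Optional[int] = None  # uid of the claim this premise supports


def unsupported_claims(units: List[ArgUnit]) -> List[ArgUnit]:
    """Return claims that no detected premise points to."""
    supported = {u.supports for u in units if u.label == "premise"}
    return [u for u in units if u.label == "claim" and u.uid not in supported]


def draft_inducing_reply(units: List[ArgUnit]) -> Optional[str]:
    """Draft a reply post requesting premises, or None if all claims are supported."""
    targets = unsupported_claims(units)
    if not targets:
        return None
    quoted = "; ".join(f'"{u.text}"' for u in targets)
    return f"Can you give me a reason? You wrote {quoted} - what premises support this?"


if __name__ == "__main__":
    # Hypothetical estimator output for one forum post.
    post_units = [
        ArgUnit(0, "The city should add more bike lanes", "claim"),
        ArgUnit(1, "Cycling reduces traffic congestion", "premise", supports=0),
        ArgUnit(2, "Parking fees should also be raised", "claim"),
    ]
    # Asks for a premise supporting the second, unsupported claim.
    print(draft_inducing_reply(post_units))
```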
Pages: 3545-3549
Number of pages: 5
Related Papers
50 entries in total
  • [1] AMPERSAND: Argument Mining for PERSuAsive oNline Discussions
    Chakrabarty, Tuhin
    Hidey, Christopher
    Muresan, Smaranda
    Mckeown, Kathleen
    Hwang, Alyssa
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 2933 - 2943
  • [2] Can you give me a hand?
    Liu, Jieli
    英语大王, 2012, (10): 15 - 15
  • [3] Annotating Online Civic Discussion Threads for Argument Mining
    Morio, Gaku
    Fujita, Katsuhide
    2018 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE (WI 2018), 2018, : 546 - 553
  • [4] The Godfather and philosophy: An argument you can't refute
    Oberrieder, Matthew
    JOURNAL OF POPULAR CULTURE, 2024, 57 (03): : 193 - 195
  • [5] 'Can you talk me through your argument'? Features of dialogic interaction in academic writing tutorials
    Wingate, Ursula
    JOURNAL OF ENGLISH FOR ACADEMIC PURPOSES, 2019, 38 : 25 - 35
  • [6] Quantifying and Incentivizing Exploration of Reputable Sources for Argument Formation in an Online Discussion Forum
    Square, L.
    Van der Heyde, V.
    Smith, D.
    ELECTRONIC JOURNAL OF E-LEARNING, 2021, 19 (03): : 209 - 219
  • [7] Give Me More Feedback: Annotating Argument Persuasiveness and Related Attributes in Student Essays
    Carlile, Winston
    Gurrapadi, Nishant
    Ke, Zixuan
    Ng, Vincent
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018, : 621 - 631
  • [8] Exemplification Modeling: Can You Give Me an Example, Please?
    Barba, Edoardo
    Procopio, Luigi
    Lacerra, Caterina
    Pasini, Tommaso
    Navigli, Roberto
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3779 - 3785
  • [9] Listen Veronica! Can You Give Me a Hand With This Bug?
    Saenz, Juan Pablo
    De Russis, Luigi
    COMPANION OF THE 2023 ACM SIGCHI SYMPOSIUM ON ENGINEERING INTERACTIVE COMPUTING SYSTEMS, EICS 2023, 2023, : 24 - 30
  • [10] Three for me and none for you? An ethical argument for delaying COVID-19 boosters
    Jecker, Nancy S.
    Lederman, Zohar
    JOURNAL OF MEDICAL ETHICS, 2022, 48 (10) : 662 - 665