A commentary of GPT-3 in MIT Technology Review 2021

Cited by: 81
Authors
Zhang, Min [1 ]
Li, Juntao [2 ]
Affiliations
[1] Soochow Univ, Res Ctr Human Language Technol, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
[2] Soochow Univ, Inst Artificial Intelligence, Suzhou 215006, Peoples R China
Source
FUNDAMENTAL RESEARCH | 2021, Vol. 1, Issue 06
DOI
10.1016/j.fmre.2021.11.011
CLC number (Chinese Library Classification)
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [General natural sciences];
Discipline codes
07; 0710; 09;
Abstract
Through the development of large-scale natural language models with writing and dialogue capabilities, artificial intelligence (AI) has taken a significant stride toward better natural language understanding (NLU) and human-computer interaction (HCI). As of this writing, GPT-3, developed by OpenAI, is the language model with the most parameters, the largest scale, and the strongest capabilities. Trained on a vast amount of Internet text and thousands of books, GPT-3 can imitate human language patterns almost perfectly; the text it produces is strikingly realistic, and it is widely considered the most impressive language model to date. Despite its powerful modeling and generation capabilities, GPT-3 has significant issues and limitations. First, the model does not truly understand what it writes (natural language generation) and sometimes generates uncontrollable content. Second, training GPT-3 requires an enormous amount of computing power, data, and capital investment, and releases substantial carbon dioxide emissions, so developing comparable models is feasible only in laboratories with adequate resources. Finally, because GPT-3 is trained on Internet text rife with misinformation and prejudice, it often produces passages whose biases mirror those of the training data.
Pages: 831-833
Page count: 3
Related papers (50 total)
  • [1] A commentary of green hydrogen in MIT Technology Review 2021
    Gong, Jinlong
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 848-850
  • [2] A commentary of Data trusts in MIT Technology Review 2021
    Zhang, Xiaosong
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 834-835
  • [3] A commentary of Digital contact tracing in MIT Technology Review 2021
    Cheng, Xueqi
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 838-839
  • [4] A commentary of Remote everything in MIT Technology Review 2021
    Cong, Yang
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 842-843
  • [5] A commentary of Messenger RNA vaccines in MIT Technology Review 2021
    Qi, Hai
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 829-830
  • [6] A commentary of Lithium-metal batteries in MIT Technology Review 2021
    Zhang, Qiang
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 836-837
  • [7] A commentary of Hyper-accurate position in MIT Technology Review 2021
    Ren, Xia
    Yang, Yuanxi
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 840-841
  • [8] A commentary of TikTok recommendation algorithms in MIT Technology Review 2021
    Zhang, Min
    Liu, Yiqun
    FUNDAMENTAL RESEARCH, 2021, 1 (06): 846-847
  • [9] GPT-3: What's it good for?
    Dale, Robert
    NATURAL LANGUAGE ENGINEERING, 2021, 27 (01): 113-118
  • [10] Is GPT-3 a Good Data Annotator?
    Ding, Bosheng
    Qin, Chengwei
    Liu, Linlin
    Chia, Yew Ken
    Li, Boyang
    Joty, Shafiq
    Bing, Lidong
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023: 11173-11195