Foundation and large language models: fundamentals, challenges, opportunities, and social impacts

Cited by: 0
Authors
Devon Myers
Rami Mohawesh
Venkata Ishwarya Chellaboina
Anantha Lakshmi Sathvik
Praveen Venkatesh
Yi-Hui Ho
Hanna Henshaw
Muna Alhawawreh
David Berdik
Yaser Jararweh
Institutions
[1] Duquesne University
[2] Al Ain University
[3] Deakin University
Source
Cluster Computing | 2024, Vol. 27
Keywords
Natural language processing; Foundation models; Large language models; Advanced pre-trained models; Artificial intelligence; Machine learning
DOI
Not available
CLC Number
Subject Classification Code
Abstract
Foundation and Large Language Models (FLLMs) are models trained on massive amounts of data with the intent of performing a variety of downstream tasks. FLLMs are promising drivers for many domains, such as Natural Language Processing (NLP) and other AI-related applications. These models emerged from an AI paradigm shift in which pre-trained language models (PLMs) and extensive data are used to train transformer models. FLLMs have demonstrated impressive proficiency across a wide range of NLP applications, including language generation, summarization, comprehension, complex reasoning, and question answering, among others. In recent years, there has been unprecedented interest in FLLM-related research, driven by contributions from both academic institutions and industry players. Notably, the development of ChatGPT, a highly capable AI chatbot built on FLLM concepts, has garnered considerable interest from various segments of society. The technological advancement of large language models (LLMs) has had a significant influence on the broader artificial intelligence (AI) community, potentially transforming the processes involved in developing and using AI systems. Our study provides a comprehensive survey of existing resources related to the development of FLLMs and addresses current concerns, challenges, and social impacts. Moreover, we emphasize the current research gaps and potential future directions in this emerging and promising field.
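As a minimal illustration of the pre-train-then-reuse paradigm described in the abstract, the sketch below applies publicly available pre-trained checkpoints to two downstream NLP tasks (summarization and question answering). It assumes the Hugging Face `transformers` library and the `t5-small` and `distilbert-base-cased-distilled-squad` checkpoints; the surveyed paper does not prescribe any particular toolkit or model, so these are illustrative choices only.

```python
# Sketch: reusing pre-trained models for downstream NLP tasks.
# Assumes the Hugging Face "transformers" library and public checkpoints;
# these are illustrative assumptions, not choices made by the surveyed paper.
from transformers import pipeline

context = (
    "Foundation and large language models are trained on massive corpora "
    "and then reused, with little or no task-specific training, for tasks "
    "such as summarization, comprehension, and question answering."
)

# Downstream task 1: abstractive summarization with a small pre-trained
# encoder-decoder checkpoint.
summarizer = pipeline("summarization", model="t5-small")
summary = summarizer(context, max_length=40, min_length=10)[0]["summary_text"]
print("Summary:", summary)

# Downstream task 2: extractive question answering with a pre-trained QA model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
answer = qa(question="What tasks can these models be reused for?", context=context)
print("Answer:", answer["answer"])
```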
Pages: 1–26
Number of pages: 25