Foundation and large language models: fundamentals, challenges, opportunities, and social impacts

Cited by: 0
Authors
Devon Myers
Rami Mohawesh
Venkata Ishwarya Chellaboina
Anantha Lakshmi Sathvik
Praveen Venkatesh
Yi-Hui Ho
Hanna Henshaw
Muna Alhawawreh
David Berdik
Yaser Jararweh
Affiliations
[1] Duquesne University
[2] Al Ain University
[3] Deakin University
Source
Cluster Computing | 2024, Volume 27
Keywords
Natural language processing; Foundation models; Large language models; Advanced pre-trained models; Artificial intelligence; Machine learning
DOI
Not available
Abstract
Foundation and Large Language Models (FLLMs) are models trained on massive amounts of data with the intent of performing a variety of downstream tasks. FLLMs are promising drivers for many domains, such as Natural Language Processing (NLP) and other AI-related applications. These models emerged from an AI paradigm shift in which pre-trained language models (PLMs) and extensive data are used to train transformer models. FLLMs have demonstrated impressive proficiency across a wide range of NLP applications, including language generation, summarization, comprehension, complex reasoning, and question answering. In recent years, there has been unprecedented interest in FLLM-related research, driven by contributions from both academic institutions and industry players. Notably, the development of ChatGPT, a highly capable AI chatbot built on FLLM concepts, has garnered considerable interest from various segments of society. The technological advancement of large language models (LLMs) has had a significant influence on the broader artificial intelligence (AI) community, potentially transforming how AI systems are developed and used. Our study provides a comprehensive survey of existing resources related to the development of FLLMs and addresses current concerns, challenges, and social impacts. Moreover, we highlight current research gaps and potential future directions in this emerging and promising field.
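As a concrete illustration of the pre-train-then-reuse paradigm described in the abstract, the short Python sketch below applies publicly available pre-trained transformers to two of the downstream tasks mentioned (summarization and question answering). It assumes the Hugging Face transformers library; the model checkpoints named here are illustrative choices by this summary, not models prescribed or evaluated by the surveyed paper.

    # Minimal sketch of the FLLM paradigm: pre-trained transformers reused,
    # without any task-specific training here, for downstream NLP tasks.
    # Assumes the Hugging Face `transformers` package; model names are illustrative.
    from transformers import pipeline

    # Downstream task 1: abstractive summarization with a pre-trained encoder-decoder.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    text = ("Foundation and Large Language Models are trained on massive corpora "
            "and then reused for many downstream tasks such as summarization.")
    print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])

    # Downstream task 2: extractive question answering with a pre-trained encoder.
    qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
    print(qa(question="What are FLLMs trained on?", context=text)["answer"])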
Pages: 1-26
Number of pages: 25
Related papers
50 in total
  • [1] Foundation and large language models: fundamentals, challenges, opportunities, and social impacts
    Myers, Devon
    Mohawesh, Rami
    Chellaboina, Venkata Ishwarya
    Sathvik, Anantha Lakshmi
    Venkatesh, Praveen
    Ho, Yi-Hui
    Henshaw, Hanna
    Alhawawreh, Muna
    Berdik, David
    Jararweh, Yaser
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (01): : 1 - 26
  • [2] The Social Opportunities and Challenges in the Era of Large Language Models
    Huimin C.
    Zhiyuan L.
    Maosong S.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2024, 61 (05): : 1094 - 1103
  • [3] Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts
    Kolides, Adam
    Nawaz, Alyna
    Rathor, Anshu
    Beeman, Denzel
    Hashmi, Muzammil
    Fatima, Sana
    Berdik, David
    Al-Ayyoub, Mahmoud
    Jararweh, Yaser
    SIMULATION MODELLING PRACTICE AND THEORY, 2023, 126
  • [4] Artificial Intelligence in Dental Education: Opportunities and Challenges of Large Language Models and Multimodal Foundation Models
    Claman, Daniel
    Sezgin, Emre
    JMIR MEDICAL EDUCATION, 2024, 10
  • [5] On opportunities and challenges of large multimodal foundation models in education
    Kuechemann, Stefan
    Avila, Karina E.
    Dinc, Yavuz
    Hortmann, Chiara
    Revenga, Natalia
    Ruf, Verena
    Stausberg, Niklas
    Steinert, Steffen
    Fischer, Frank
    Fischer, Martin
    Kasneci, Enkelejda
    Kasneci, Gjergji
    Kuhr, Thomas
    Kutyniok, Gitta
    Malone, Sarah
    Sailer, Michael
    Schmidt, Albrecht
    Stadler, Matthias
    Weller, Jochen
    Kuhn, Jochen
    NPJ SCIENCE OF LEARNING, 2025, 10 (01)
  • [6] Benchmarking Large Language Models: Opportunities and Challenges
    Hodak, Miro
    Ellison, David
    Van Buren, Chris
    Jiang, Xiaotong
    Dholakia, Ajay
    PERFORMANCE EVALUATION AND BENCHMARKING, TPCTC 2023, 2024, 14247 : 77 - 89
  • [7] Large language models in psychiatry: Opportunities and challenges
    Volkmer, Sebastian
    Meyer-Lindenberg, Andreas
    Schwarz, Emanuel
    PSYCHIATRY RESEARCH, 2024, 339
  • [8] Large Language and Emerging Multimodal Foundation Models: Boundless Opportunities
    Forghani, Reza
    RADIOLOGY, 2024, 313 (01)
  • [9] ChatGPT and large language models in academia: opportunities and challenges
    Meyer, Jesse G.
    Urbanowicz, Ryan J.
    Martin, Patrick C. N.
    O'Connor, Karen
    Li, Ruowang
    Peng, Pei-Chen
    Bright, Tiffani J.
    Tatonetti, Nicholas
    Won, Kyoung Jae
    Gonzalez-Hernandez, Graciela
    Moore, Jason H.
    BIODATA MINING, 2023, 16 (01)