Artificial intelligence foundation and pre-trained models: Fundamentals, applications, opportunities, and social impacts

Cited by: 29
Authors
Kolides, Adam [1 ]
Nawaz, Alyna [1 ]
Rathor, Anshu [1 ]
Beeman, Denzel [1 ]
Hashmi, Muzammil [1 ]
Fatima, Sana [1 ]
Berdik, David [1 ]
Al-Ayyoub, Mahmoud [2 ]
Jararweh, Yaser [1 ]
Affiliations
[1] Duquesne Univ, Pittsburgh, PA USA
[2] Jordan Univ Sci & Technol, Irbid, Jordan
Keywords
Pre-trained models; Self-supervised learning; Natural Language Processing; Computer vision; Image processing; Transformers; Machine learning models; Foundation models in robotics; Transfer learning; In-context learning; Self-attention; Fine-tuning;
DOI
10.1016/j.simpat.2023.102754
Chinese Library Classification (CLC)
TP39 [Computer applications];
Subject Classification
081203; 0835;
Abstract
With the emergence of foundation models (FMs), which are trained on large amounts of data at scale and can be adapted to a wide range of downstream applications, AI is undergoing a paradigm shift. BERT, T5, ChatGPT, GPT-3, Codex, DALL-E, Whisper, and CLIP now serve as the foundation for new applications ranging from computer vision to protein sequence analysis and from speech recognition to coding, whereas earlier models typically had to be trained from scratch for each new task. The capacity to experiment with, examine, and understand the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its direction. Nevertheless, these models are largely inaccessible: the resources required to train them are highly concentrated in industry, and even the assets (data, code) needed to replicate their training are frequently withheld because of their commercial value. At the moment, only large technology companies such as OpenAI, Google, Facebook, and Baidu can afford to build FMs. In this research, we analyze and examine the main capabilities, key implementations, technological fundamentals, and potential societal consequences of these models. Despite the widely publicized adoption of FMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are actually capable of, owing to their emergent properties. To address these problems, we believe that much of the critical research on FMs will require extensive multidisciplinary collaboration, given their fundamentally sociotechnical nature. Throughout this investigation, we also have to confront the problem of misrepresentation created by these systems. If FMs live up to their promise, AI could see far wider commercial use. As researchers studying the societal ramifications, we believe FMs will drive sweeping changes. Because they are closely managed for the time being, we should have time to understand their implications before they become a major concern.
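The adaptation mechanism the abstract and keywords refer to (fine-tuning / transfer learning of a pre-trained model on a downstream task) can be illustrated with a minimal, hedged sketch. The sketch below assumes the Hugging Face transformers library, PyTorch, and the publicly released bert-base-uncased checkpoint; the toy sentences, labels, and number of gradient steps are illustrative placeholders rather than details from the paper.

# Minimal sketch (assumption: Hugging Face `transformers` and PyTorch installed):
# fine-tuning a pre-trained BERT encoder for a toy two-class downstream task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # illustrative pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A toy labeled batch standing in for a real downstream dataset.
texts = ["The movie was wonderful.", "The plot made no sense."]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative (hypothetical labels)

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps; real fine-tuning iterates over a full dataset
    outputs = model(**inputs, labels=labels)  # supplying labels makes the model return a loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# The same pre-trained backbone now serves the downstream classification task.
model.eval()
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(dim=-1)
print(predictions.tolist())

In-context learning, also listed among the keywords, instead adapts a model without any gradient updates by conditioning it on task examples in the prompt; fine-tuning as sketched above updates the pre-trained weights themselves.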
Pages: 18
Related Papers
50 records in total
  • [1] Foundation and large language models: fundamentals, challenges, opportunities, and social impacts
    Myers, Devon
    Mohawesh, Rami
    Chellaboina, Venkata Ishwarya
    Sathvik, Anantha Lakshmi
    Venkatesh, Praveen
    Ho, Yi-Hui
    Henshaw, Hanna
    Alhawawreh, Muna
    Berdik, David
    Jararweh, Yaser
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (01): 1 - 26
  • [2] Pre-Trained Language Models and Their Applications
    Wang, Haifeng
    Li, Jiwei
    Wu, Hua
    Hovy, Eduard
    Sun, Yu
    ENGINEERING, 2023, 25 : 51 - 65
  • [3] On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets
    Chiang, Cheng-Han
    Lee, Hung-yi
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10518 - 10525
  • [4] The ChatGPT After: Opportunities and Challenges of Very Large Scale Pre-trained Models
    Lu J.-W.
    Guo C.
    Dai X.-Y.
    Miao Q.-H.
    Wang X.-X.
    Yang J.
    Wang F.-Y.
Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (04): 705 - 717
  • [5] Refining Pre-Trained Motion Models
    Sun, Xinglong
    Harley, Adam W.
    Guibas, Leonidas J.
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 4932 - 4938
  • [6] Efficiently Robustify Pre-Trained Models
    Jain, Nishant
    Behl, Harkirat
    Rawat, Yogesh Singh
    Vineet, Vibhav
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5482 - 5492
  • [7] Pre-trained Models for Sonar Images
    Valdenegro-Toro, Matias
    Preciado-Grijalva, Alan
    Wehbe, Bilal
    OCEANS 2021: SAN DIEGO - PORTO, 2021,
  • [8] Grounding Ontologies with Pre-Trained Large Language Models for Activity Based Intelligence
    Azim, Anee
    Clark, Leon
    Lau, Caleb
    Cobb, Miles
    Jenner, Kendall
    SIGNAL PROCESSING, SENSOR/INFORMATION FUSION, AND TARGET RECOGNITION XXXIII, 2024, 13057
  • [9] On the Power of Pre-Trained Text Representations: Models and Applications in Text Mining
    Meng, Yu
    Huang, Jiaxin
    Zhang, Yu
    Han, Jiawei
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 4052 - 4053