When Sally Met Harry or When AI Met HPC

Cited: 0
Authors
Cortés U. [1 ]
Moya U. [2 ]
Valero M. [1 ,3 ]
Affiliations
[1] Universitat Politècnica de Catalunya, Catalunya
[2] Gobierno del Estado de Jalisco, Jalisco
[3] Barcelona Supercomputing Center, Catalunya
Keywords
Image recognition; Deep learning; Voltage dividers; Computing power
DOI
10.14529/210101
Abstract
The Artificial Intelligence (AI) explosion we are witnessing today can also be credited, at least in part, to current advances in computing power, in particular to High-Performance Computing. This is not a brand-new relationship: it can be traced to the very beginning of hardware and AI development. Perhaps the first encounter between AI and hardware dates back to 1958. The perceptron, a more general computational model than McCulloch-Pitts units, was intended to be a programmable machine rather than a software program. While its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the Mark I Perceptron. The perceptron was designed for image recognition: it was an array of 400 photocells, randomly connected to units called neurons. Weights were encoded in potentiometers, and electric motors performed weight updates during the learning phase. A seminal interaction between AI and hardware design was the use of a perceptron for efficient branch prediction to boost instruction-level parallelism. Over the years, the evolution of those neurons brought the inception of so-called Neural Networks (NN) as software, which gave birth to Deep Learning (DL). In turn, to accelerate DL, the use of GPUs and more specialized hardware architectures has become the norm (e.g. Cerebras CS-2, SambaNova, Intel's Habana, etc.). © The Authors 2021.
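The learning mechanism the abstract describes (potentiometer weights nudged by motors after each example) corresponds to the classic perceptron update rule. Below is a minimal sketch, not from the source: the function name, the learning rate, the epoch count, and the toy AND task are all illustrative assumptions; the Mark I's 400-photocell retina would simply be a 400-element input vector here.

```python
import numpy as np

def train_perceptron(inputs, labels, epochs=10, lr=0.1):
    """Train a single perceptron unit with the classic update rule:
    w <- w + lr * (target - prediction) * x."""
    n_features = inputs.shape[1]
    w = np.zeros(n_features)  # software stand-in for potentiometer settings
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(inputs, labels):
            prediction = 1 if np.dot(w, x) + b > 0 else 0
            error = target - prediction
            w += lr * error * x  # analogous to a motor turning a potentiometer
            b += lr * error
    return w, b

# Toy usage: learn logical AND on 2-bit inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if np.dot(w, x) + b > 0 else 0 for x in X]
```

The same rule scales to any linearly separable task; its hardware-friendliness (one local, incremental adjustment per weight) is exactly what made the electromechanical Mark I implementation feasible.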
Pages: 4-8
Page count: 4