One of the world’s fastest and largest computers

Meta, formerly Facebook, says it has developed one of the world’s fastest supercomputers, the Research SuperCluster, or RSC. The company claims it is among the fastest machines ever built for A.I. tasks, with 6,080 graphics processing units packed into 760 Nvidia DGX A100 systems.

That computing capacity is comparable to the Perlmutter supercomputer, which employs over 6,000 Nvidia GPUs and is now the world’s fifth-fastest supercomputer. In a second phase, Meta intends to increase performance by a factor of 2.5 this year by expanding to 16,000 GPUs.

Meta will use RSC for a host of research projects that demand next-level performance, such as multimodal A.I., a paradigm in which different kinds of data (images, text, speech, and numerical data) are combined, each processed with an appropriate technique, to get better results.
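To make the idea concrete, here is a minimal PyTorch sketch of multimodal fusion. It is illustrative only, not Meta’s architecture: the encoders, dimensions, and fusion-by-concatenation design are assumptions chosen for brevity.

```python
# Minimal multimodal fusion sketch (illustrative, not Meta's architecture):
# two small encoders map an image and a text sequence into a shared
# embedding space, and a classifier consumes the concatenated embeddings.
import torch
import torch.nn as nn

class TinyMultimodalClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, num_classes=10):
        super().__init__()
        # Image branch: a tiny CNN that pools a 3x32x32 image to embed_dim.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        # Text branch: embed tokens, then mean-pool over the sequence.
        self.token_embedding = nn.Embedding(vocab_size, embed_dim)
        # Fusion head: classify from the concatenated modalities.
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, image, tokens):
        img_vec = self.image_encoder(image)             # (batch, embed_dim)
        txt_vec = self.token_embedding(tokens).mean(1)  # (batch, embed_dim)
        fused = torch.cat([img_vec, txt_vec], dim=1)    # (batch, 2*embed_dim)
        return self.classifier(fused)

model = TinyMultimodalClassifier()
logits = model(torch.randn(4, 3, 32, 32), torch.randint(0, 1000, (4, 12)))
print(logits.shape)  # torch.Size([4, 10])
```

Each modality keeps its own encoder; the simplest fusion step is concatenating the resulting embeddings, though real systems often use more elaborate schemes such as cross-attention.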

Aimesoft offers a video demonstration of what multimodal artificial intelligence looks like in practice.

Meta hopes RSC will help power its metaverse. The machine could be capable of interpreting speech in real time for large numbers of people, each speaking a different language.

According to Meta researchers Kevin Lee and Shubho Sengupta, RSC is around 20 times faster than the company’s prior 2017-era Nvidia cluster at training an A.I. system to recognize what’s in a photo, and around three times faster at decoding speech.

RSC may also be useful for a newer A.I. training approach called self-supervised learning, in which models are trained on unlabeled data, unlike supervised training, which requires labeled data.
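As a toy illustration of the principle (not Meta’s method), the sketch below trains a denoising autoencoder: no human-provided labels are involved, and the training target is derived from the unlabeled data itself.

```python
# Toy self-supervised training loop (illustrative only): a denoising
# autoencoder. No human labels are needed: the "target" is the original
# unlabeled input, which the model must reconstruct from a noisy copy.
import torch
import torch.nn as nn

model = nn.Sequential(            # tiny encoder-decoder
    nn.Linear(32, 8), nn.ReLU(),  # encoder: compress to 8 dims
    nn.Linear(8, 32),             # decoder: reconstruct 32 dims
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

data = torch.randn(256, 32)       # stand-in for an unlabeled dataset

for step in range(100):
    corrupted = data + 0.3 * torch.randn_like(data)  # corrupt the input
    reconstruction = model(corrupted)
    loss = loss_fn(reconstruction, data)  # supervision comes from the data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final reconstruction loss: {loss.item():.4f}")
```

The same pattern, deriving the training signal from the data itself, underlies large-scale techniques such as masked-token prediction in language models and contrastive learning in vision.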

Meta and other A.I. proponents have demonstrated that training A.I. models on ever-larger data sets produces better results. It takes far more computing power to train A.I. models than it does to run them.

The GPU, originally developed to accelerate graphics, is now used for many other computing tasks, including A.I. workloads.

In fact, Nvidia’s cutting-edge A100 chips are designed primarily for artificial intelligence and other heavy-duty data center workloads. Big companies like Google, as well as a slew of startups, are developing dedicated A.I. processors, some of which rank among the world’s largest chips. When paired with Meta’s own PyTorch software, an open-source machine learning framework built around tensor calculations, automatic differentiation, and GPU acceleration, the A100 GPU base is relatively flexible. PyTorch is one of the most popular deep-learning libraries for these reasons.
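Here is a small example showing those three PyTorch features together: tensor calculations, automatic differentiation, and GPU acceleration. The code falls back to the CPU when no CUDA device is available.

```python
# The three PyTorch capabilities named above in one small example:
# tensor calculations, automatic differentiation, and GPU acceleration.
import torch

# GPU acceleration: use a CUDA device (e.g. an A100) if one is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensor calculations on the chosen device.
x = torch.randn(3, 3, device=device, requires_grad=True)
y = (x @ x.T).sum()  # matrix multiply, then reduce to a scalar

# Automatic differentiation: backward() fills x.grad with dy/dx.
y.backward()
print(device, x.grad.shape)  # e.g. "cuda torch.Size([3, 3])"
```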

The metaverse will surely spur research into more powerful and better-optimized artificial intelligence, with all the pros and cons of using multimodal A.I. to run a world as vast as the metaverse.

Source: cnet.com