
Graphcore’s Colossus GC200 7nm Chip Competes Against The NVIDIA A100 GPU With Colossal Design & 250 TFLOPs AI Performance – 59.4 Billion Transistors In An 823mm2 Die

The AI segment is seeing rapid progress, with major tech companies pouring resources into keeping up with the demand for higher performance each year. We’ve seen NVIDIA and AMD actively building next-generation GPUs with AI and HPC in mind, but competition has now arrived from British AI chip designer Graphcore, which has unveiled its second-generation AI chip that competes directly against NVIDIA’s A100 Tensor Core GPU accelerator.

Graphcore’s GC200 Is A Massive 7nm Chip For AI Tasks Which Is Designed To Compete Against NVIDIA’s A100 GPU – IPU Delivers Up To 250 Teraflops of AI Compute

For this purpose, Graphcore has announced its new Colossus MK2 GC200 IPU, or Intelligence Processing Unit, which is designed exclusively to power machine intelligence. True to its name, the chip features a colossal design and delivers an 8x performance uplift over its predecessor, the MK1.

“We’re 100% focused on silicon processors for AI, and on building systems that can plug into existing centers. Why would we want to build CPUs or GPUs if those already work well? This is just a different toolbox,” said Graphcore CEO Nigel Toon.

The Colossus MK2 GC200 is fabricated on TSMC’s 7nm process node and features a die size of 823 mm². For comparison, that’s almost as big as the NVIDIA A100 GPU accelerator, which measures 826 mm². The chip is not only a behemoth in terms of size but also in terms of density, packing a total of 59.4 billion transistors versus 54.2 billion on the NVIDIA A100 GPU, which works out to a higher transistor density on the Graphcore chip than on NVIDIA’s flagship accelerator.
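A quick back-of-the-envelope calculation makes that density gap concrete. The sketch below uses only the die sizes and transistor counts quoted above; the resulting figures in millions of transistors per mm² are derived here, not published specs.

```python
# Back-of-the-envelope transistor density comparison, using only the
# figures quoted above (transistor count in billions, die size in mm^2).

chips = {
    "Graphcore GC200": {"transistors_b": 59.4, "die_mm2": 823},
    "NVIDIA A100":     {"transistors_b": 54.2, "die_mm2": 826},
}

for name, spec in chips.items():
    # Million transistors per square millimetre
    density = spec["transistors_b"] * 1000 / spec["die_mm2"]
    print(f"{name}: {density:.1f} MTr/mm^2")

# Graphcore GC200: ~72.2 MTr/mm^2
# NVIDIA A100:     ~65.6 MTr/mm^2
```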

To make the GC200 work, it is configured with 1472 IPU tiles, each with an IPU core and in-processor memory. Each IPU core runs 6 threads in parallel, which puts the total number of threads on the chip at 8832 (1472 tiles x 6 threads). For memory, the chip relies on an on-die solution that offers 900 MB of capacity per IPU and a memory bandwidth of 47.5 TB/s. Graphcore has gone with a smaller-capacity but higher-bandwidth solution, stating that the in-processor memory can be pooled across several racks at once, and that the resulting memory pool ends up larger than that of a rack composed of A100 GPUs.
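The headline numbers follow from simple multiplication, as the short sketch below shows. The 64-IPU rack used at the end is a hypothetical configuration chosen purely for illustration, not a figure quoted by Graphcore.

```python
# Quick arithmetic behind the GC200 figures above.

TILES_PER_IPU = 1472
THREADS_PER_TILE = 6
ON_DIE_MEMORY_MB = 900
MEMORY_BANDWIDTH_TBS = 47.5

threads_per_ipu = TILES_PER_IPU * THREADS_PER_TILE
print(f"Parallel threads per IPU: {threads_per_ipu}")  # 8832

ipus_in_rack = 64  # hypothetical rack size, for illustration only
pooled_memory_gb = ipus_in_rack * ON_DIE_MEMORY_MB / 1024
aggregate_bw_pbs = ipus_in_rack * MEMORY_BANDWIDTH_TBS / 1000
print(f"Pooled in-processor memory across {ipus_in_rack} IPUs: {pooled_memory_gb:.1f} GB")
print(f"Aggregate memory bandwidth: {aggregate_bw_pbs:.2f} PB/s")
```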


————–

By: Hassan Mujtaba
Title: Graphcore’s Colossus GC200 7nm Chip Competes Against The NVIDIA A100 GPU With Colossal Design & 250 TFLOPs AI Performance – 59.4 Billion Transistors In An 823mm2 Die
Sourced From: wccftech.com/graphcores-colossus-mk2-gc200-7nm-ai-chip-rivals-nvidia-a100-gpu/
Published Date: Wed, 15 Jul 2020 14:56:20 +0000
