A new chip cluster will enable larger AI models.

The design can run a large neural network more efficiently than banks of GPUs. But making and operating the chip is a challenge, requiring new methods to etch the silicon's features, a design that accounts for manufacturing flaws, and a new water-cooling system to keep the giant chip cool.

To create a cluster of WSE-2 chips capable of running record-sized AI models, Cerebras had to solve another engineering challenge: how to move data into and out of the chip efficiently. Ordinary chips have their own memory on board, but Cerebras developed an off-chip memory box called MemoryX. The company also built software that allows a neural network to be partially stored in that off-chip memory, with only the computations kept on the silicon chip. And it created a hardware-and-software system called SwarmX that ties everything together.
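The idea described above, keeping a model's weights in external memory and feeding them to the compute chip one layer at a time, can be illustrated with a minimal sketch. This is not Cerebras' actual software stack; the names and the toy network below are purely hypothetical, and it assumes a simple feed-forward model where each layer's weights fit on-chip one at a time.

```python
# Hypothetical sketch of the "weight streaming" idea: layer weights
# live in an off-chip store (a MemoryX analogue) and are fetched one
# layer at a time, so only the current layer's math runs "on-chip".
import numpy as np

rng = np.random.default_rng(0)

# Off-chip store: weights for a toy 3-layer network (illustrative only).
off_chip_weights = [rng.standard_normal((8, 8)) for _ in range(3)]

def stream_forward(x, weight_store):
    """Forward pass that fetches each layer's weights on demand."""
    for w in weight_store:          # "stream" one layer's weights in
        x = np.maximum(x @ w, 0.0)  # matmul + ReLU is the on-chip work
    return x

out = stream_forward(rng.standard_normal(8), off_chip_weights)
print(out.shape)
```

The point of the pattern is that on-chip memory only ever needs to hold one layer's weights, so model size is bounded by the external store rather than the chip.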

Photo: Cerebras.

"They can improve the scalability of training dramatically," says Mike Demler, a senior analyst with the Linley Group and a senior editor of the Microprocessor Report.

Demler says it's not yet clear how big the market for the cluster will be, especially since some potential customers are already designing their own, more specialized chips in-house. He adds that the chip's real-world performance, in terms of speed, efficiency, and cost, is not yet clear. Cerebras has not published any benchmark results so far.

"There's a lot of impressive engineering in the new MemoryX and SwarmX technology," says Demler. "But just like the processor, it's highly specialized; it only makes sense for training the very biggest models."

Cerebras chips have so far been adopted by labs that need supercomputing power. Early customers include Argonne National Laboratory, Lawrence Livermore National Laboratory, pharma companies including GlaxoSmithKline and AstraZeneca, and what Feldman describes as "military intelligence" organizations.

This suggests that the Cerebras chip could be used for more than just powering neural networks; the computing these labs do also involves massive parallel mathematical operations. "And they're always hungry for more compute power," says Demler, who adds that the chip could conceivably become important for the future of supercomputing.

David Kanter, an analyst with Real World Technologies and executive director of MLCommons, an organization that measures the performance of different AI algorithms and hardware, says he sees a future market for really big AI models. "I generally tend to believe in data-centric ML, so we want larger datasets that enable building larger models with more parameters," Kanter says.

