Samsung is framing its latest foray into the realm of processing-in-memory (PIM) and processing-near-memory (PNM) as a means to boost performance and lower the costs of running AI workloads. 

The company has dubbed its latest proof-of-concept technology, unveiled at Hot Chips 2023, CXL-PNM: a 512GB card with up to 1.1TB/s of bandwidth, according to Serve the Home.

It would help to solve one of the biggest cost and energy sinks in AI computing: the movement of data between storage and the memory attached to compute engines.

Samsung’s testing shows the card is 2.9 times more energy efficient than a single A-GPU, while a cluster of eight CXL-PNM cards is 4.4 times more energy efficient than eight A-GPUs. An appliance fitted with the card also emits 2.8 times less CO2 and delivers 4.3 times greater operational and environmental efficiency.

It relies on Compute Express Link (CXL), an open standard for high-speed processor-to-device and processor-to-memory interconnects that paves the way for more efficient use of memory and accelerators alongside processors.

The firm believes the card can offload workloads onto PIM or PNM modules, an approach it has also explored with its LPDDR-PIM. This will save costs and power, Samsung claims, and extend battery life in devices by preventing the over-provisioning of memory for bandwidth.

Samsung’s LPDDR-PIM boosts performance by 4.5 times compared with conventional DRAM and reduces energy usage through the PIM module. Although its internal bandwidth is a modest 102.4GB/s, computation stays on the memory module, so there is no need to transmit data back to the CPU.

Samsung has been exploring technologies like this for several years, although the CXL-PNM is the closest it has come to incorporating them into what might soon become a viable product. It also follows the company’s 2022 HBM-PIM prototype.

Developed in collaboration with AMD, Samsung’s HBM-PIM card was applied to large-scale AI applications, boosting performance by 2.6 times and increasing energy efficiency by 2.7 times against existing GPU accelerators.

The race to build the next generation of components fit to handle the most demanding AI workloads is well and truly underway, with companies from IBM to d-Matrix drawing up technologies that aim to oust the best GPUs.
