SK hynix is teasing 300TB SSDs as it pushes into AI

(Image credit: SK Hynix)

SK hynix is working on a solid-state drive of unprecedented 300TB capacity, the company revealed at a press conference in Seoul, South Korea, on Thursday. The drive was pre-announced as part of a broader portfolio of products and technologies designed to advance both datacenter and on-device AI capabilities.

Market researchers cited by SK hynix believe that the total volume of data generated globally in the AI era (by both humans and AI) will surge from 15 zettabytes in 2014 to 660 ZB in 2030. This gigantic bucket will have to be stored somewhere, which is where 100TB HDDs and 300TB SSDs come into play.
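To put those projections in perspective, a quick back-of-the-envelope calculation (our own arithmetic, not a figure from SK hynix or the cited researchers) shows the implied compound annual growth rate:

```python
# Implied compound annual growth rate (CAGR) of global data volume,
# using the 15 ZB (2014) and 660 ZB (2030) figures cited in the article.
start_zb, end_zb = 15, 660
years = 2030 - 2014  # 16 years

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied growth: ~{cagr:.1%} per year over {years} years")  # ~26.7%/yr
```

In other words, global data volume would need to grow by more than a quarter every single year for the projection to hold.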

For now, little is known about SK hynix's 300TB SSDs beyond the company's conviction that demand for high-capacity, high-performance storage will skyrocket in the coming years. To that end, both high-capacity drives and high-performance all-flash arrays will be necessary for a variety of applications.

For now, we can mostly speculate about SK hynix's 300TB SSDs. It is possible that the company is developing a rival to Samsung's PBSSD initiative, which currently tops out at machines that can store up to 240TB of data; in SK hynix's case, the system would store 300TB. Such machines are designed to offer a competitive balance of capacity, performance per TB, reliability, and energy efficiency.

Alternatively, SK hynix's 300TB SSD initiative could be a rival to Nimbus Data's 3.5-inch ExaDrive products, which currently top out at 100TB, though we have reasonable doubts about this: such SSDs are niche products with rather inferior performance per TB.

Finally, it could be a custom-built PCIe card SSD, but again, a 300TB drive even with a PCIe 6.0 x16 interface would offer rather low per-TB performance, which would make it a niche product (then again, we are talking about 300TB SSDs here).
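The per-TB performance concern is easy to quantify. The sketch below uses illustrative figures of our own choosing, not product specs: roughly 128 GB/s of raw bandwidth per direction for a PCIe 6.0 x16 link, and an assumed, representative 7.68TB PCIe 5.0 drive reading at about 14 GB/s for comparison:

```python
# Back-of-the-envelope per-TB bandwidth comparison (illustrative figures).
pcie6_x16_gbps = 128  # ~raw GB/s per direction for a PCIe 6.0 x16 link

# Hypothetical 300TB drive saturating that link vs. an assumed typical
# 7.68TB PCIe 5.0 drive reading at ~14 GB/s.
big_drive_per_tb = pcie6_x16_gbps / 300   # GB/s per TB of capacity
typical_drive_per_tb = 14 / 7.68          # GB/s per TB of capacity

print(f"300TB @ PCIe 6.0 x16: ~{big_drive_per_tb:.2f} GB/s per TB")
print(f"7.68TB @ ~14 GB/s:    ~{typical_drive_per_tb:.2f} GB/s per TB")
```

Even on the fastest host interface on the horizon, a 300TB drive would deliver roughly a quarter of the per-TB bandwidth of a mainstream datacenter SSD, which is exactly why such a product would remain niche.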

In addition to 300TB SSDs, SK hynix is working on a variety of products that could be useful for datacenter AI training and inference (HBM4, HBM4E, CXL pooled memory solutions, processing-in-memory solutions), for edge AI devices (LPDDR6, GDDR7, PIM), and for on-device AI inference (LPDDR6, GDDR7, high-capacity DDR5).