KAYTUS’s next-generation all-QLC flash solution delivers fully linear performance scaling for massive GPU clusters, while reducing TCO by 70%, enabling ultra-large-scale computing for the era of agentic AI.
SINGAPORE--(BUSINESS WIRE)--At AI EXPO KOREA 2026, KAYTUS officially launched its All-QLC Flash Storage Solution, engineered to deliver high performance, massive scalability, and cost efficiency for 10,000-GPU clusters. The solution addresses data-delivery bottlenecks in ultra-large-scale AI training, helping maximize GPU resource utilization.
Based on the KR2280 and KR1180 server platforms, the solution is deeply integrated with industry-leading AI-native parallel file systems to eliminate the data silos inherent in traditional tiered storage. Purpose-built for read-intensive AI workloads, it overcomes the horizontal scaling limitations of massive clusters. Verified test data shows that, at exabyte-scale deployment, the solution delivers 10 TB/s aggregate bandwidth and 100 million IOPS. In addition, it reduces five-year TCO by 70% compared with traditional TLC-based solutions, accelerating model innovation for AI cloud providers and intelligent computing centers.
Limitations in Traditional AI Storage Architectures
The explosive growth of AI is fundamentally transforming enterprise computing and storage requirements. Large-scale AI model training features highly read-intensive workloads that require tens of thousands of GPUs to concurrently access exabyte-scale datasets with sub-millisecond latency. Traditional storage architectures now face three major challenges:
- Separated Data Silos: Traditional ETL processes require data to be moved from object storage to parallel file systems before training, resulting in time-consuming physical data migration. IDC research indicates that data teams spend 81% of their time on data preparation, slowing business iteration.
- Workload and Media Mismatch: More than 90% of AI training involves high-frequency concurrent reads. In contrast, traditional TLC flash solutions provide excessive write endurance that is unnecessary for these read-intensive workloads, driving up procurement, space, and power costs for exabyte-scale clusters and resulting in inefficient resource utilization.
- Scalability Bottlenecks: Traditional file systems were not designed to handle the I/O burst workloads generated by 10,000-GPU clusters. As clusters scale, metadata lock contention and communication overhead introduce latency spikes and degraded overall performance.
KAYTUS Solution: All-QLC Flash Storage Delivering High Performance, Scalability, and Cost Efficiency
The next-generation KAYTUS All-QLC Flash Storage Server Solution is purpose-built to unlock the full potential of read-intensive AI training workloads. By tightly integrating flagship compute nodes with industry-leading AI-native parallel file systems, the solution harnesses advanced hardware–software co-design to deliver breakthrough performance, seamless scalability, and superior cost efficiency for ultra-large-scale AI computing environments.
Architectural Innovation: Overcoming AI Training Efficiency Bottlenecks
The KAYTUS solution establishes a unified namespace with native multi-protocol access across file, object, and block storage. By leveraging high-capacity QLC flash pools and NVMe-oF fully shared interconnects, it redefines the unified data plane for AI storage, effectively eliminating the data silos inherent in traditional tiered architectures. Data can now flow on demand to GPU nodes without cross-system migration, enabling sub-millisecond access, and significantly improving AI training data retrieval efficiency.
- Hardware Optimization: Engineered for read-intensive workloads, the solution features a PCIe 5.0 direct-connect architecture that doubles single-node I/O bandwidth compared to the previous generation. Combined with NUMA-balanced optimization, it effectively eliminates internal throughput bottlenecks.
- Software Synergy: The solution integrates NFS over RDMA and native GPU Direct Storage technology, enabling direct data paths from QLC flash to GPU memory. By leveraging a disaggregated architecture that decouples protocol processing from storage state, it eliminates east-west traffic and achieves fully linear scaling of bandwidth and throughput, from petabyte to exabyte scale.
10,000-GPU Cluster Benchmarks: Exceptional Performance, Scalability, and Cost Efficiency
In benchmark testing in an exabyte-scale storage environment for a 10,000-GPU data center, the solution—powered by KR2280 and KR1180 nodes and optimized with industry-leading AI-native parallel file systems—demonstrated its capability to scale seamlessly to support computing clusters of up to 10,000 GPUs.
- Extreme Performance at Scale: The system delivers 10 TB/s sustained aggregate read bandwidth and 100 million random-read IOPS, enabling concurrent access for tens of thousands of GPUs. Performance scales linearly as additional nodes are added, while GPU utilization remains consistently above 95%, with no storage-side lock contention or queuing, effectively eliminating GPU data starvation.
- Superior Cost Efficiency: Compared with traditional TLC all-flash solutions, the solution reduces five-year TCO by 70% and cuts power and cooling costs by more than 75%, helping enterprises avoid overpaying for write endurance that read-intensive workloads do not need.
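As a rough illustration (not part of the release's test data), the linear-scaling claim lets the 10 TB/s aggregate target be translated into an approximate node count. The per-node bandwidth figures below are the KR1180 and KR4266 numbers quoted elsewhere in this release; the assumption of zero protocol and network overhead is ours:

```python
import math

def nodes_for_bandwidth(target_gbps: float, per_node_gbps: float) -> int:
    """Minimum node count to reach a target aggregate bandwidth,
    assuming ideal linear scaling with no protocol overhead."""
    return math.ceil(target_gbps / per_node_gbps)

TARGET = 10_000  # 10 TB/s aggregate read bandwidth, expressed in GB/s

print(nodes_for_bandwidth(TARGET, 140))  # KR1180 at 140 GB/s -> 72 nodes
print(nodes_for_bandwidth(TARGET, 260))  # KR4266 at 260 GB/s -> 39 nodes
```

In practice, real deployments would add headroom for failures, rebuilds, and non-ideal scaling, so actual node counts would be somewhat higher.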
| Metric (1 EB Capacity) | TLC SSD Solution | QLC SSD Solution | Difference |
| --- | --- | --- | --- |
| CAPEX | 1.0 | 0.39 | 65% ↓ |
| Power Cost | 1.0 | 0.29 | 75% ↓ |
| 5-Year TCO | 1.0 | 0.36 | 70% ↓ |

(Note: Based on 15.36 TB TLC vs. 61.44 TB QLC drive units)
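The drive-capacity note above explains most of the delta: a 61.44 TB QLC drive holds exactly four times as much as a 15.36 TB TLC drive, so a 1 EB deployment needs roughly a quarter of the drives, slots, and enclosures. A back-of-envelope sketch (our own arithmetic, using decimal units and ignoring RAID/erasure-coding overhead and spares):

```python
import math

EB_IN_TB = 1_000_000        # decimal: 1 EB = 1,000,000 TB
TLC_DRIVE_TB = 15.36        # per-drive capacities from the table note
QLC_DRIVE_TB = 61.44

tlc_drives = math.ceil(EB_IN_TB / TLC_DRIVE_TB)   # 65,105 drives
qlc_drives = math.ceil(EB_IN_TB / QLC_DRIVE_TB)   # 16,277 drives

# QLC needs ~1/4 the drive count -- the main driver of the
# CAPEX, power, and cooling deltas reported in the table.
print(tlc_drives, qlc_drives, round(tlc_drives / qlc_drives, 2))
```

The remaining gap between the 4x drive-count reduction and the reported cost ratios would come from per-drive pricing, chassis consolidation, and power-per-terabyte differences, which the release does not itemize.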
KAYTUS All-Flash Portfolio: From High Density to Massive Capacity
KAYTUS offers a comprehensive QLC product portfolio supporting single-drive capacities of up to 122.88 TB.
- KR1180 (1U10) — High-Density Excellence: Delivers 1 PB of capacity and 140 GB/s bandwidth in a 1U chassis, featuring optimized air cooling and an 18% latency improvement under GPU workloads.
- KR2280 (2U24) — Versatile Flagship: Supports 24 QLC drives and seven PCIe 5.0 slots. It is compatible with both Intel and AMD platforms and offers liquid-cooling options for high-efficiency data centers.
- KR4266 (4U60) — Massive Capacity for Big Data: Offers industry-leading physical density with up to 7 PB per unit, delivering 260 GB/s sequential read bandwidth and 20 million IOPS.
About KAYTUS
KAYTUS is a leading provider of AI infrastructure and liquid cooling solutions, delivering a diverse range of innovative, open, and eco-friendly products for cloud, AI, edge computing, and other emerging applications. With a customer-centric approach, KAYTUS is agile and responsive to user needs through its adaptable business model. Discover more at KAYTUS.com and follow us on LinkedIn and X.
