
Why SSD prices are rising and what imaging teams can do about it

For decades, the cost of digital storage followed a predictable rule: every year, you could store more data for less money. That rule is now breaking, and SSDs are at the center of the shift.

Why SSD supply is tight

SSDs are built with NAND flash memory, and these chips are made by only a handful of manufacturers, all of whom invested heavily in fabrication plants (fabs). When that new capacity first came online, the market was oversupplied and prices fell below what the manufacturers had hoped for. Because a new fab costs tens of billions of dollars and additional capacity can push prices down even further, producers are now far more disciplined about their investments and production volumes.

Physical and economic constraints of SSD scaling

NAND manufacturing has not yet reached hard physical limits. Unlike CPUs or GPUs, NAND flash can be stacked vertically in many layers, currently more than 200, with roadmaps pointing beyond 500. In theory, this allows continued increases in storage density.

In practice, however, each additional layer makes engineering more complex. Manufacturing tolerances tighten, yield management becomes harder, and testing grows more expensive. Progress continues, but at increasing cost and technical risk.

Another major lever has been storing more bits in each memory cell by using finer analogue voltage levels. A single-level cell (SLC) stores one bit with a voltage margin of roughly 2 volts. Quad-level cell (QLC) technology stores four bits per cell, reducing the margin to around 100 millivolts while quadrupling capacity. Five bits per cell (PLC) are on the roadmap, but the narrower voltage window would significantly reduce write endurance, potentially to only a few hundred cycles.
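The arithmetic behind those margins is simple to sketch. The snippet below assumes a usable voltage window of roughly 2 V that has to be split between the 2^n charge states needed for n bits per cell; the exact window varies by process and is an assumption here, so treat the output as orders of magnitude rather than datasheet values.

```python
# Back-of-the-envelope: how the spacing between charge states shrinks as
# more bits are packed into each NAND cell. The ~2 V usable window is an
# assumed round figure; real windows vary by process and temperature.
TOTAL_WINDOW_V = 2.0

cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

for name, bits in cell_types.items():
    levels = 2 ** bits                                  # distinct charge states required
    margin_mv = TOTAL_WINDOW_V / (levels - 1) * 1000    # spacing between adjacent states
    print(f"{name}: {bits} bit(s)/cell, {levels} states, ~{margin_mv:.0f} mV between states")

# SLC: ~2000 mV, QLC: ~133 mV, PLC: ~65 mV -- each additional bit per cell
# sharply reduces the margin available to distinguish states reliably.
```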

The limiting factor is therefore no longer physics alone. It is economics. Each incremental improvement requires more complex engineering and more expensive fabrication infrastructure. Those costs ultimately have to be recovered in the price of the chips.

Enterprise demand is driving prices up

Data is moving to the cloud, and large datacenters now account for a growing share of SSD demand. For these operators, the cost of the drive itself is only a fraction of total system cost. Storing and retrieving data requires compute, power, cooling, networking, redundancy, and physical space. As throughput requirements increase, infrastructure costs quickly outweigh the price of the storage media.

Enterprise SSDs are therefore optimized for sustained performance and reliability rather than minimum unit cost. They use different interfaces (SAS, U.2, E1.S), more robust controllers, higher write endurance, and transfer rates that do not throttle under continuous load. In this environment, performance and stability matter more than headline price.

This changes pricing dynamics. When production capacity is constrained, manufacturers prioritize the segment willing to pay for performance. A 7.5TB enterprise drive such as the Samsung PM1653 costs more than $3,800 and may have multi-week lead times. A consumer 8TB drive costs roughly $1,200 and ships immediately. The price difference is significant — but for datacenter operators, it is small relative to total infrastructure cost.
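Measured per terabyte, using the example prices above (illustrative list prices from this article, not current quotes), the gap looks like this:

```python
# Price per terabyte for the two example drives quoted above.
drives = [
    ("Enterprise SSD (7.5TB, ~$3,800)", 7.5, 3800),
    ("Consumer SSD (8TB, ~$1,200)", 8.0, 1200),
]

for name, capacity_tb, price_usd in drives:
    print(f"{name}: ~${price_usd / capacity_tb:.0f}/TB")

# Roughly $500/TB vs. $150/TB: a 3-4x premium per terabyte, which is small
# compared with the compute, power, and networking that the drive has to feed.
```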

Enterprise buyers, not consumers, increasingly set the economics of the market.

Why HDDs are not a real alternative

Hard disk drives offer large capacities and hold up well in several respects: they tolerate a very high number of write cycles, and data retention is typically five years or more. They have also improved over the last decade, moving from 10TB per disk to 32TB today. The price, however, has roughly tripled along the way, from $399 for 10TB then to $1,200 for 32TB today, so the cost per terabyte has barely improved.

What has not changed is throughput. These devices are still very slow. Nominally, they can achieve 250MB/s, but that assumes writing a single large file at a time. In real-world benchmarks[^1], performance drops to around 35MB/s. At that rate, reading a full 32TB drive would take more than a week. By comparison, four enterprise SSDs would match that capacity but read the same data volume in about 30 minutes. You don't want your 10kW, $500,000 Nvidia HGX server sitting idle waiting on data. Spending more on fast storage makes sense.
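The read-time comparison follows directly from the throughput numbers. The sketch below uses the ~35MB/s real-world HDD figure cited above and assumes roughly 4.5GB/s of sustained read per enterprise SSD; the per-drive SSD rate is an assumption, not a figure from a specific datasheet.

```python
# Time to read 32TB: one HDD at real-world throughput vs. four enterprise SSDs.
TB = 1e12                      # decimal terabyte, as used for drive capacities
capacity = 32 * TB

hdd_throughput = 35e6          # ~35 MB/s real-world HDD throughput (benchmark figure above)
ssd_throughput = 4 * 4.5e9     # four SSDs, assumed ~4.5 GB/s sustained read each

print(f"HDD:  ~{capacity / hdd_throughput / 86400:.1f} days")     # ~10.6 days
print(f"SSDs: ~{capacity / ssd_throughput / 60:.0f} minutes")     # ~30 minutes
```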

Writes are also costly on SSDs: QLC NAND cells are limited to around 1,000 program/erase cycles before becoming unreliable. An enterprise SSD running at full write speed would exhaust those cycles in about 20 days. This means using SSDs purely as cache is not always feasible, since cache workloads can be very write-intensive. Managing many storage tiers gets complicated, and companies like VAST Data have successfully promoted all-flash architectures — SSDs only, no HDDs — where data is natively fast without complex tiering.
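The endurance figure works out as follows. The capacity and sustained write speed below are assumed round numbers for a large QLC enterprise drive, not quoted specifications, and real-world write amplification would shorten the result further.

```python
# Lifetime under continuous full-speed writes for a QLC enterprise SSD.
capacity_bytes = 7.68e12   # assumed 7.68TB drive
pe_cycles = 1000           # ~1,000 program/erase cycles per QLC cell (figure above)
write_speed = 4e9          # assumed ~4 GB/s sustained sequential write

total_writable = capacity_bytes * pe_cycles                  # bytes the NAND can absorb
days = total_writable / write_speed / 86400
print(f"~{days:.0f} days of continuous full-speed writes")   # ~22 days, before write amplification
```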

All of this creates financial uncertainty around investing in HDD manufacturing, which is itself very capital-intensive, and the result is a slowdown in HDD development.

Demand from AI is accelerating storage pressure

The rapid expansion of AI infrastructure is intensifying the mismatch between what storage systems can deliver and what modern workloads require. Hundreds of billions of dollars are being invested in AI infrastructure. GPU memory bandwidth is extremely high, but GPUs must be fed data at comparable speed. Increasingly, that data is not just text: it includes images and video, which are orders of magnitude larger in raw form.

This shift is structural. AI trained purely on text has limited grounding in the physical world. Training on images and video expands model capability, but it also requires vast storage capacity and sustained data throughput. The scale becomes clear in industries such as autonomous driving, where vehicles generate large volumes of image data to train and validate perception models. At 50 frames per second, a single 16-bit, 8-megapixel camera generates nearly 3TB per hour. A vehicle with three such cameras can produce roughly 70TB in an eight-hour shift. Collecting 100,000 driving hours across a fleet pushes total raw data into the hundreds of petabytes.
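These volumes follow from the sensor parameters alone, assuming uncompressed raw frames and continuous recording with no duty-cycle or overhead corrections:

```python
# Raw image data rates for the automotive example above (uncompressed frames).
BYTES_PER_PIXEL = 2        # 16-bit samples
PIXELS = 8e6               # 8-megapixel sensor
FPS = 50
CAMERAS = 3
SHIFT_HOURS = 8

per_camera_hour = PIXELS * BYTES_PER_PIXEL * FPS * 3600      # bytes per camera per hour
per_shift = per_camera_hour * CAMERAS * SHIFT_HOURS          # one vehicle, one shift
fleet_total = per_camera_hour * CAMERAS * 100_000            # 100,000 driving hours

print(f"Per camera:        ~{per_camera_hour / 1e12:.1f} TB/hour")   # ~2.9 TB/hour
print(f"Per vehicle shift: ~{per_shift / 1e12:.0f} TB")              # ~69 TB
print(f"100,000 hours:     ~{fleet_total / 1e15:.0f} PB")            # ~860 PB of raw data
```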

Training data is only part of the picture. AI models themselves are growing rapidly and must be stored close to where inference occurs. They also need to be checkpointed repeatedly during training, increasing both storage footprint and write pressure.

Why imaging teams are hit hardest

Imaging workloads are structurally more exposed to rising storage costs than most other AI applications.

Unlike many text-based systems, image data rarely becomes obsolete. In Earth observation, historical imagery gains value over time and is often tied to contractual access requirements. In pharma and medical imaging, raw data must be retained to meet reproducibility standards and FAIR principles. In automotive and industrial systems, sensor data collected for one model version is frequently revisited for retraining, validation, safety audits, or regulatory review. Storage does not shrink when projects end; it accumulates.

Operational workflows further increase effective footprint. Ingestion, preprocessing, tiling, augmentation, and checkpointing generate additional data copies. Training requires random access across large datasets, making low-throughput storage impractical and increasing reliance on high-performance SSD infrastructure.

For business leaders, the impact appears in budgets rather than architecture diagrams: storage costs growing faster than expected, cloud bills scaling disproportionately, and internal debates about what data can be retained. Imaging teams feel the pressure first because their data must be kept, reused, and accessed intensively over long periods of time.

What you can do about it: scale through data efficiency

The era of predictable storage cost decline is ending. If storage media no longer becomes cheaper each year, scaling infrastructure means scaling cost. For imaging systems generating tens or hundreds of terabytes per day, adding hardware becomes an increasingly expensive and complex response. When hardware scaling loses its economic advantage, the remaining structural lever is data volume.

In imaging-heavy systems, storage capacity, bandwidth, endurance, cloud transfer, and compute utilization are directly proportional to the amount of data written and read. Reducing data volume relieves pressure across the entire pipeline.

Even moderate reductions in raw image data can:

• Increase effective storage capacity
• Lower cloud and replication costs
• Reduce write amplification and extend SSD lifetime
• Improve sustained throughput
• Increase utilization of high-cost compute infrastructure

This is not about deleting data or compromising quality. It is about preserving information while reducing its physical footprint.

One way to increase efficiency at the data layer is through Jetraw.

Jetraw is a Swiss image compression technology designed for high-throughput imaging systems and AI pipelines in mission-critical environments. It is used in Earth observation, pharmaceutical research, and automotive systems where large volumes of raw image data must be stored, transferred, and repeatedly accessed.

In practice, Jetraw reduces raw image data volumes by 5×–8× while preserving the statistical properties required for reliable machine learning. By acting at the data layer rather than expanding infrastructure, it eases pressure on storage capacity, bandwidth, and SSD endurance while realigning storage economics with imaging workload growth.

When storage becomes structural, data efficiency becomes infrastructure strategy.

Conclusion

Storage economics are shifting. The long-standing assumption that capacity becomes cheaper every year can no longer be taken for granted. At the same time, AI workloads — particularly imaging-heavy systems — continue to grow in volume, retention time, and throughput requirements.

When storage costs stabilize and system complexity increases, infrastructure expansion alone is not a sustainable strategy. For imaging-driven AI, storage is no longer a background component; it becomes a constraint on both technical performance and business scalability.

In this environment, data efficiency moves from optimization to architecture. Reducing data volume at the source is the most direct way to control cost, improve throughput, and extend hardware lifetime without expanding infrastructure.

Exploring Jetraw for your workflow

If storage cost, SSD endurance, or data transfer bottlenecks are limiting your imaging pipeline, we would be glad to explore how Jetraw integrates into your workflow and what impact it can have on capacity, throughput, and total cost of ownership.

Jetraw for Automotive

Jetraw for Earth Observation

Jetraw for Life Sciences

Contact us: get@dotphoton.com
