A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electro-mechanical data storage device that stores and retrieves digital data using magnetic storage with one or more rigid rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data when powered off. Modern HDDs are typically in the form of a small rectangular box, possibly in a disk enclosure for portability.
Hard disk drives were introduced by IBM in 1956, and were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like mobile phones and tablets, rely on flash memory storage devices. More than 224 companies have produced HDDs historically, though after extensive industry consolidation, most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced (exabytes per year) for servers. Though production is growing slowly (by exabytes shipped), sales revenues and unit shipments are declining, because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, somewhat better reliability, and much lower latency and access times.
The revenues for SSDs, most of which use NAND flash memory, slightly exceeded those for HDDs in 2018. Flash storage products had more than twice the revenue of hard disk drives as of 2017. Though SSDs have four to nine times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important. As of 2017, the cost per bit of SSDs was falling, and the price premium over HDDs had narrowed.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes, where 1 gigabyte = 1,000 megabytes = 1,000,000 kilobytes = 1,000,000,000 bytes. Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There can be confusion regarding storage capacity, since capacities are stated in decimal gigabytes (powers of 1000) by HDD manufacturers, whereas the most commonly used operating systems report capacities in powers of 1024, which results in a smaller number than advertised. Performance is specified as the time required to move the heads to a track or cylinder (average access time), the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally, the speed at which the data is transmitted (data rate).
The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops and servers. HDDs are connected to systems by standard interface cables such as SATA (Serial ATA), USB, SAS (Serial Attached SCSI), or PATA (Parallel ATA) cables.
History
A partially disassembled IBM 350 hard disk drive (RAMAC)

Date invented: December 24, 1954
Invented by: IBM team led by Rey Johnson
Components

The actuator is a permanent magnet and moving coil motor that swings the heads to the desired position. A metal plate supports a squat neodymium–iron–boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet).

The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the center of the actuator bearing) then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.

The HDD's electronics control the movement of the actuator and the rotation of the disk, and transfer data to or from a disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology) or segments interspersed with real data (in the case of embedded servo, otherwise known as sector servo technology). The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice coil motor to rotate the arm. A more modern servo system also employs milli or micro actuators to more accurately position the read/write heads. The spinning of the disks uses fluid-bearing spindle motors. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media that have failed.
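To make the voice-coil reasoning above concrete, the sketch below applies the standard force-on-a-conductor relation (F = N·B·I·L) to the two active sides of the coil. Every numeric value in it (turns, flux density, current, active length, pivot radius) is an assumed round figure for illustration, not a specification of any real drive.

```python
# Rough voice-coil actuator torque: F = N * B * I * L on each active side of the coil.
# Because the magnet is split into opposite poles, the forces on the two sides add.
N = 100      # assumed number of turns in the coil
B = 1.2      # assumed flux density in the magnet gap, tesla
I = 0.5      # assumed coil current, amperes
L = 0.02     # assumed active conductor length per side, metres
r = 0.05     # assumed effective radius from the pivot to the coil, metres

force_per_side = N * B * I * L      # newtons on one side of the "arrowhead"
total_force = 2 * force_per_side    # the two sides act in the same rotational sense
torque = total_force * r            # newton-metres about the actuator pivot
print(f"~{force_per_side:.1f} N per side, ~{total_force:.1f} N total, ~{torque*1000:.0f} mN·m of torque")
```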
Error rates and handling

Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data. In the newest drives, as of 2009, low-density parity-check codes (LDPC) were supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus provide the highest storage density available.

Typical hard disk drives attempt to "remap" the data in a physical sector that is failing to a spare physical sector provided by the drive's "spare sector pool" (also called "reserve pool"), relying on the ECC to recover stored data while the number of errors in a bad sector is still low enough. The S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) feature counts the total number of errors in the entire HDD fixed by ECC (although not on all hard drives, as the related S.M.A.R.T. attributes "Hardware ECC Recovered" and "Soft ECC Correction" are not consistently supported), and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure. The "No-ID Format", developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located. Only a tiny fraction of the detected errors end up as not correctable.
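The figure of roughly 93 GB of ECC capacity on a 1 TB drive quoted above can be reproduced with simple arithmetic. In the sketch below, the ~48 bytes of ECC per 512-byte sector is an illustrative assumption chosen to match that figure, not a value taken from any drive specification.

```python
# Approximate ECC overhead on a 1 TB (decimal) drive with 512-byte sectors.
capacity_bytes = 1_000_000_000_000   # 1 TB as marketed (powers of 1000)
sector_bytes = 512
ecc_bytes_per_sector = 48            # assumed ECC allowance per sector (illustrative)

sectors = capacity_bytes // sector_bytes
ecc_total_gb = sectors * ecc_bytes_per_sector / 1_000_000_000
print(f"{sectors:,} sectors -> roughly {ecc_total_gb:.1f} GB of ECC data")
# ~1.95 billion sectors -> roughly 93.8 GB, consistent with the figure quoted above
```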
Manufacturers specify uncorrected bit read error rates for their drives; within a given manufacturer's model, the rate is typically the same regardless of the capacity of the drive. The worst type of error is silent data corruption: errors undetected by the disk firmware or the host operating system. Some of these errors may be caused by hard disk drive malfunctions, while others originate elsewhere in the connection between the drive and the host.

Development

The rate of areal density advancement was similar to Moore's law (doubling every two years) through 2010: 60% per year during 1988–1996, 100% during 1996–2003 and 30% during 2003–2010. Speaking in 1997, Gordon Moore called the increase "flabbergasting", while observing later that growth cannot continue forever. Price improvement decelerated to −12% per year during 2010–2017, as the growth of areal density slowed. The rate of advancement for areal density slowed to 10% per year during 2010–2016, and there was difficulty in migrating from perpendicular recording to newer technologies.

As bit cell size decreases, more data can be put onto a single drive platter. In 2013, a production desktop 3 TB HDD (with four platters) would have had an areal density of about 500 Gbit/in², which would have amounted to a bit cell comprising about 18 magnetic grains (11 by 1.6 grains). Since the mid-2000s, areal density progress has been challenged by a superparamagnetic trilemma involving grain size, grain magnetic strength and the ability of the head to write. In order to maintain an acceptable signal-to-noise ratio, smaller grains are required; smaller grains may self-reverse (electrothermal instability) unless their magnetic strength is increased, but known write head materials are unable to generate a magnetic field strong enough to write the medium in the increasingly small space taken by grains. Magnetic storage technologies are being developed to address this trilemma and compete with flash memory–based solid-state drives (SSDs).

In 2013, Seagate introduced shingled magnetic recording (SMR), intended as something of a "stopgap" technology between PMR and Seagate's intended successor, heat-assisted magnetic recording (HAMR). SMR utilizes overlapping tracks for increased data density, at the cost of design complexity and lower data access speeds (particularly write speeds and random-access 4K speeds). By contrast, HGST (now part of Western Digital) focused on developing ways to seal helium-filled drives instead of using the usual filtered air. Since turbulence and friction are reduced, higher areal densities can be achieved due to using a smaller track width, and the energy dissipated by friction is lower as well, resulting in a lower power draw. Furthermore, more platters can be fitted into the same enclosure space, although helium is notoriously difficult to keep from escaping. Thus, helium drives are completely sealed and do not have a breather port, unlike their air-filled counterparts.

Other recording technologies are either under research or have been commercially implemented to increase areal density, including Seagate's heat-assisted magnetic recording (HAMR). HAMR requires a different architecture with redesigned media and read/write heads, new lasers, and new near-field optical transducers. HAMR shipped commercially in early 2024 after technical issues delayed its introduction by more than a decade, from earlier projections as early as 2009. HAMR's planned successor, bit-patterned recording (BPR), has been removed from the roadmaps of Western Digital and Seagate.
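As a rough check on the bit-cell figure mentioned above for a 500 Gbit/in² drive, the sketch below converts areal density into bit-cell area and divides by an assumed grain footprint. The ~8.5 nm grain pitch is an illustrative assumption, not a measured value.

```python
# Bit-cell area at a given areal density, and how many magnetic grains fit in one bit.
areal_density_bits_per_in2 = 500e9          # 500 Gbit/in², as in the 3 TB desktop example above
nm_per_inch = 25.4e6
bit_cell_area_nm2 = nm_per_inch**2 / areal_density_bits_per_in2

grain_pitch_nm = 8.5                        # assumed centre-to-centre grain pitch (illustrative)
grains_per_bit = bit_cell_area_nm2 / grain_pitch_nm**2
print(f"bit cell ~{bit_cell_area_nm2:.0f} nm², ~{grains_per_bit:.0f} grains per bit")
# bit cell ~1290 nm², ~18 grains per bit -- in line with the ~18-grain figure above
```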
Western Digital's microwave-assisted magnetic recording (MAMR), also referred to as energy-assisted magnetic recording (EAMR), was sampled in 2020, with the first EAMR drive, the Ultrastar HC550, shipping in late 2020. Two-dimensional magnetic recording (TDMR) and "current perpendicular to plane" giant magnetoresistance (CPP/GMR) heads have appeared in research papers. Some drives have adopted dual independent actuator arms to increase read/write speeds and compete with SSDs. A 3D-actuated vacuum drive (3DHD) concept and 3D magnetic recording have been proposed. Depending upon assumptions on the feasibility and timing of these technologies, Seagate forecasts that areal density will grow 20% per year during 2020–2034.

Capacity

The highest-capacity HDDs shipping commercially as of 2025 are 36 TB. The capacity of a hard disk drive, as reported by an operating system to the end user, is smaller than the amount stated by the manufacturer for several reasons: the operating system uses some space, some space is used for data redundancy, and some is used for file system structures. Confusion of decimal prefixes and binary prefixes can also lead to errors.

Calculation

Modern hard disk drives appear to their host controller as a contiguous set of logical blocks, and the gross drive capacity is calculated by multiplying the number of blocks by the block size. This information is available from the manufacturer's product specification, and from the drive itself through use of operating system functions that invoke low-level drive commands. Older IBM and compatible drives, e.g. the IBM 3390 using the CKD record format, have variable-length records; such drive capacity calculations must take into account the characteristics of the records. Some newer DASD simulate CKD, and the same capacity formulae apply. The gross capacity of older sector-oriented HDDs is calculated as the product of the number of cylinders per recording zone, the number of bytes per sector (most commonly 512), and the count of zones of the drive. Some modern SATA drives also report cylinder-head-sector (CHS) capacities, but these are not physical parameters, because the reported values are constrained by historic operating system interfaces. The C/H/S scheme has been replaced by logical block addressing (LBA), a simple linear addressing scheme that locates blocks by an integer index, which starts at LBA 0 for the first block and increments thereafter. When using the C/H/S method to describe modern large drives, the number of heads is often set to 64, although a typical modern hard disk drive has between one and four platters.

In modern HDDs, spare capacity for defect management is not included in the published capacity; however, in many early HDDs, a certain number of sectors were reserved as spares, thereby reducing the capacity available to the operating system. Furthermore, many HDDs store their firmware in a reserved service zone, which is typically not accessible by the user and is not included in the capacity calculation. For RAID subsystems, which combine multiple drives that appear to the user as one or more logical drives while providing fault tolerance, data integrity and fault-tolerance requirements also reduce the realized capacity. For example, a RAID 1 array has about half the total capacity as a result of data mirroring, while a RAID 5 array with n drives loses 1/n of capacity (equal to the capacity of a single drive) to storing parity information.
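A minimal sketch of the capacity arithmetic described above: gross capacity as the block count times the block size, followed by the 1/n RAID reductions stated in the text. The LBA count is only an example of what a drive marketed as 1 TB might report.

```python
# Gross capacity = number of logical blocks * block size.
lba_count = 1_953_525_168    # example LBA count for a drive marketed as 1 TB
block_bytes = 512            # logical block size (Advanced Format "512e" drives still report 512)
gross_bytes = lba_count * block_bytes
print(f"gross capacity: {gross_bytes:,} bytes (~{gross_bytes/1e12:.3f} TB decimal)")

# Usable capacity for RAID arrays built from n such drives.
n = 4
raid1_bytes = gross_bytes * n // 2     # mirroring keeps about half of the total
raid5_bytes = gross_bytes * (n - 1)    # RAID 5 gives up one drive's worth of space to parity
print(f"RAID 1 over {n} drives: ~{raid1_bytes/1e12:.1f} TB usable")
print(f"RAID 5 over {n} drives: ~{raid5_bytes/1e12:.1f} TB usable")
```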
Most RAID vendors use checksums to improve data integrity at the block level. Some vendors design systems using HDDs with sectors of 520 bytes to contain 512 bytes of user data and eight checksum bytes, or use separate 512-byte sectors for the checksum data. Some systems may use hidden partitions for system recovery, reducing the capacity available to the end user, who may be unaware of the reserved space without special disk partitioning utilities such as diskpart in Windows.

Formatting

Data is stored on a hard drive in a series of logical blocks. Each block is delimited by markers identifying its start and end, error detecting and correcting information, and space between blocks to allow for minor timing variations. These blocks often contained 512 bytes of usable data, but other sizes have been used. As drive density increased, an initiative known as Advanced Format extended the block size to 4096 bytes of usable data, with a resulting significant reduction in the amount of disk space used for block headers, error-checking data, and spacing.

The process of initializing these logical blocks on the physical disk platters is called low-level formatting, which is usually performed at the factory and is not normally changed in the field. High-level formatting writes data structures used by the operating system to organize data files on the disk. This includes writing partition and file system structures into selected logical blocks. For example, some of the disk space will be used to hold a directory of disk file names and a list of logical blocks associated with a particular file. Examples of partition mapping schemes include the master boot record (MBR) and the GUID Partition Table (GPT). Examples of data structures stored on disk to retrieve files include the File Allocation Table (FAT) in the MS-DOS file system and inodes in many UNIX file systems, as well as other operating system data structures (also known as metadata). As a consequence, not all the space on an HDD is available for user files, but this system overhead is usually small compared with user data.
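The benefit of the larger Advanced Format block can be approximated from per-block overhead. The gap/sync and ECC byte counts below are assumed round numbers for illustration, so the result only indicates the direction and rough size of the gain, not a specification.

```python
# Format efficiency = user bytes / (user bytes + per-block overhead).
def efficiency(user_bytes: int, gap_sync_bytes: int, ecc_bytes: int) -> float:
    return user_bytes / (user_bytes + gap_sync_bytes + ecc_bytes)

legacy = efficiency(512, 15, 50)      # assumed ~15 B gap/sync/mark + ~50 B ECC per 512 B block
advanced = efficiency(4096, 15, 100)  # one larger block needs proportionally less overhead
print(f"512-byte blocks : ~{legacy:.1%} of the surface holds user data")
print(f"4096-byte blocks: ~{advanced:.1%} of the surface holds user data")
```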
Units

In the early days of computing, the total capacity of HDDs was specified in seven to nine decimal digits, frequently truncated with the idiom "millions". By the 1970s, the total capacity of HDDs was given by manufacturers using SI decimal prefixes such as megabytes (1 MB = 1,000,000 bytes), gigabytes (1 GB = 1,000,000,000 bytes) and terabytes (1 TB = 1,000,000,000,000 bytes). However, capacities of memory are usually quoted using a binary interpretation of the prefixes, i.e. using powers of 1024 instead of 1000. Software reports hard disk drive or memory capacity in different forms using either decimal or binary prefixes. The Microsoft Windows family of operating systems uses the binary convention when reporting storage capacity, so an HDD offered by its manufacturer as a 1 TB drive is reported by these operating systems as a 931 GB HDD. Mac OS X 10.6 ("Snow Leopard") uses the decimal convention when reporting HDD capacity. The default behavior of the df command-line utility on Linux is to report the HDD capacity as a number of 1024-byte units.

The difference between the decimal and binary prefix interpretations caused some consumer confusion and led to class action suits against HDD manufacturers. The plaintiffs argued that the use of decimal prefixes effectively misled consumers, while the defendants denied any wrongdoing or liability, asserting that their marketing and advertising complied in all respects with the law and that no class member sustained any damages or injuries. In 2020, a California court ruled that use of the decimal prefixes with a decimal meaning was not misleading.
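A minimal sketch of why a drive marketed as 1 TB is reported as roughly 931 GB under the binary convention; this is pure unit conversion and assumes nothing about any particular drive.

```python
# Convert a marketed decimal capacity to the binary-prefix figure reported by, e.g., Windows.
marketed_bytes = 1_000_000_000_000        # 1 TB as advertised (powers of 1000)
binary_gb = marketed_bytes / 1024**3      # "GB" interpreted as powers of 1024 (GiB)
binary_tb = marketed_bytes / 1024**4
print(f"1 TB decimal = {binary_gb:.0f} GiB (shown as '931 GB') = {binary_tb:.2f} TiB")
```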
Form factors

Beginning in the late 1960s, drives were offered that fit entirely into a chassis that would mount in a 19-inch rack. Digital's RK05 and RL01 were early examples using single 14-inch platters in removable packs, the entire drive fitting in a 10.5-inch-high rack space (six rack units). In the mid-to-late 1980s, the similarly sized Fujitsu Eagle, which used (coincidentally) 10.5-inch platters, was a popular product.

With increasing sales of microcomputers having built-in floppy-disk drives (FDDs), HDDs that would fit the FDD mountings became desirable. Starting with the Shugart Associates SA1000, HDD form factors initially followed those of 8-inch, 5+1⁄4-inch, and 3+1⁄2-inch floppy disk drives. Although referred to by these nominal sizes, the actual widths of those three drives are respectively 9.5, 5.75 and 4 inches. Because there were no smaller floppy disk drives, smaller HDD form factors such as 2+1⁄2-inch drives (actually 2.75 inches wide) developed from product offerings or industry standards. As of 2025, 2+1⁄2-inch and 3+1⁄2-inch hard disks are the most popular sizes. By 2009, all manufacturers had discontinued the development of new products for the 1.3-inch, 1-inch and 0.85-inch form factors due to falling prices of flash memory, which has no moving parts. While nominal sizes are in inches, actual dimensions are specified in millimeters.

Consumer hard drives are commonly sold pre-packaged in disk enclosures, which protect the device and allow attaching it via common general-purpose interfaces like USB, letting the device remain separate from any computer using it and be portable. Such enclosures vary widely in size, as they are usually not intended to be inserted into a system as a fixed component. Enclosures may also contain multiple hard drives combined as RAID.

Performance characteristics

The factors that limit the time to access the data on an HDD are mostly related to the mechanical nature of the rotating disks and moving heads, principally the time needed to move the heads to the desired track (seek time) and the rotational delay before the desired sector passes under the head (latency).
Delay may also occur if the drive disks are stopped to save energy. Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to physically proximate areas on the disk. Some computer operating systems perform defragmentation automatically. Although automatic defragmentation is intended to reduce access delays, performance will be temporarily reduced while the procedure is in progress.

Time to access data can be improved by increasing rotational speed (thus reducing latency) or by reducing the time spent seeking. Increasing areal density increases throughput by increasing the data rate and by increasing the amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data. The time to access data has not kept up with throughput increases, which themselves have not kept up with growth in bit density and storage capacity.

Latency
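Average rotational latency is commonly estimated as the time for half a platter revolution. The sketch below computes that figure for a few typical spindle speeds; the speeds are illustrative examples, not values for a specific drive.

```python
# Average rotational latency ~ time for half a revolution, in milliseconds.
def average_latency_ms(rpm: float) -> float:
    seconds_per_revolution = 60.0 / rpm
    return seconds_per_revolution / 2 * 1000

for rpm in (5400, 7200, 10000, 15000):    # typical spindle speeds used as examples
    print(f"{rpm:>6} rpm -> ~{average_latency_ms(rpm):.2f} ms average latency")
# 5400 -> ~5.56 ms, 7200 -> ~4.17 ms, 10000 -> ~3.00 ms, 15000 -> ~2.00 ms
```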
Data transfer rate

As of 2010, a typical 7,200-rpm desktop HDD has a sustained "disk-to-buffer" data transfer rate of up to 1,030 Mbit/s. This rate depends on the track location; the rate is higher for data on the outer tracks (where there are more data sectors per rotation) and lower toward the inner tracks (where there are fewer data sectors per rotation), and is generally somewhat higher for 10,000-rpm drives. A current, widely used standard for the "buffer-to-computer" interface is 3.0 Gbit/s SATA, which can send about 300 megabytes per second from the buffer to the computer (after 8b/10b encoding overhead), and thus is still comfortably ahead of current disk-to-buffer transfer rates. Data transfer rate (read/write) can be measured by writing a large file to disk using special file-generator tools, then reading back the file. Transfer rate can be influenced by file system fragmentation and the layout of the files.

HDD data transfer rate depends upon the rotational speed of the platters and the data recording density. Because heat and vibration limit rotational speed, advancing density becomes the main method to improve sequential transfer rates. Higher speeds require a more powerful spindle motor, which creates more heat. While areal density advances by increasing both the number of tracks across the disk and the number of sectors per track, only the latter increases the data transfer rate for a given rpm. Since data transfer rate performance tracks only one of the two components of areal density, its performance improves at a lower rate.

Other considerations

Other performance considerations include quality-adjusted price, power consumption, audible noise, and both operating and non-operating shock resistance.

Access and interfaces

Current hard drives connect to a computer over one of several bus types, including parallel ATA, Serial ATA, SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Some drives, especially external portable drives, use IEEE 1394 or USB. All of these interfaces are digital; electronics on the drive process the analog signals from the read/write heads. Current drives present a consistent interface to the rest of the computer, independent of the data encoding scheme used internally, and independent of the physical number of disks and heads within the drive. Typically, a DSP in the electronics inside the drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction to decode the data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, performs bad sector remapping and data collection for Self-Monitoring, Analysis, and Reporting Technology, and handles other internal tasks.

Modern interfaces connect the drive to the host interface with a single data/control cable. Each drive also has an additional power cable, usually direct to the power supply unit. Older interfaces had separate cables for data signals and for drive control signals.
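As a rough illustration of how the mechanical disk-to-buffer rate relates to the SATA buffer-to-host rate quoted earlier in this section: the outer-track sector count below is an assumed value chosen to land near the ~1,030 Mbit/s figure, and the SATA arithmetic reflects 8b/10b line encoding.

```python
# Sustained disk-to-buffer rate: bytes on one track per revolution, times revolutions per second.
rpm = 7200
sectors_on_outer_track = 2100      # assumed sector count on an outer track (illustrative)
sector_bytes = 512
disk_to_buffer = sectors_on_outer_track * sector_bytes * (rpm / 60)   # bytes per second
print(f"disk-to-buffer ~{disk_to_buffer/1e6:.0f} MB/s (~{disk_to_buffer*8/1e6:.0f} Mbit/s)")

# Buffer-to-host over 3.0 Gbit/s SATA: 8b/10b encoding carries 8 data bits per 10 line bits.
sata_line_rate_bits = 3.0e9
buffer_to_host = sata_line_rate_bits * 8 / 10 / 8                     # bytes per second
print(f"buffer-to-host ~{buffer_to_host/1e6:.0f} MB/s")
```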
Integrity and failure

Due to the extremely close spacing between the heads and the disk surface, HDDs are vulnerable to being damaged by a head crash – a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, contamination of the drive's internal enclosure, wear and tear, corrosion, or poorly manufactured platters and heads.

The HDD's spindle system relies on air density inside the disk enclosure to support the heads at their proper flying height while the disk rotates. HDDs require a certain range of air densities to operate properly. The connection to the external environment and density occurs through a small hole in the enclosure (about 0.5 mm in breadth), usually with a filter on the inside (the breather filter). If the air density is too low, there is not enough lift for the flying head, so the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation, above about 3,000 m (9,800 ft). Modern disks include temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on most disk drives, excluding sealed drives such as those that use helium, where any exposure to outside air would cause a failure – they usually have a sticker next to them, warning the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning platters. This air passes through an internal recirculation filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation. Very high humidity present for extended periods of time can corrode the heads and platters. An exception to this is hermetically sealed, helium-filled HDDs, which largely eliminate environmental issues that can arise due to humidity or atmospheric pressure changes. Such HDDs were introduced by HGST in their first successful high-volume implementation in 2013.

For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (one that does not remove the magnetic surface of the disk) still results in the head temporarily overheating due to friction with the disk surface, and can render the data unreadable for a short period until the head temperature stabilizes (so-called "thermal asperity", a problem which can partially be dealt with by proper electronic filtering of the read signal).

When the logic board of a hard disk fails, the drive can often be restored to functioning order and the data recovered by replacing the circuit board with one from an identical hard disk. In the case of read-write head faults, the heads can be replaced using specialized tools in a dust-free environment. If the disk platters are undamaged, they can be transferred into an identical enclosure and the data can be copied or cloned onto a new drive. In the event of disk-platter failures, disassembly and imaging of the disk platters may be required. For logical damage to file systems, a variety of tools, including fsck on UNIX-like systems and CHKDSK on Windows, can be used for data recovery. Recovery from logical damage can require file carving.
A common expectation is that hard disk drives designed and marketed for server use will fail less frequently than consumer-grade drives usually used in desktop computers. However, two independent studies by Carnegie Mellon University and Google found that the "grade" of a drive does not relate to the drive's failure rate. A 2011 summary of research into SSD and magnetic disk failure patterns by Tom's Hardware summarized the findings as follows:
As of 2019, Backblaze, a storage provider, reported an annualized failure rate of two percent per year for a storage farm with 110,000 off-the-shelf HDDs, with reliability varying widely between models and manufacturers. Backblaze subsequently reported in 2021 that the failure rate for HDDs and SSDs of equivalent age was similar. To minimize cost and overcome failures of individual HDDs, storage systems providers rely on redundant HDD arrays. HDDs that fail are replaced on an ongoing basis.
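Annualized failure rate (AFR) for a drive fleet is conventionally computed from drive-days of service. The sketch below uses made-up counts sized to give the two-percent figure mentioned above; it is not Backblaze's actual data.

```python
# AFR = failures / (drive-days / 365), expressed as a percentage.
def afr_percent(failures: int, drive_days: float) -> float:
    return failures / (drive_days / 365.0) * 100.0

drives, days_in_service, failures = 110_000, 365, 2_200   # assumed illustrative fleet
print(f"AFR ~{afr_percent(failures, drives * days_in_service):.1f}% per year")
```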
Market segments

Consumer segment
These drives typically spin at 5400 rpm and include:
Enterprise and business segment
Economy

Price evolution

HDD price per byte decreased at the rate of 40% per year during 1988–1996, 51% per year during 1996–2003 and 34% per year during 2003–2010. The price decrease slowed to 13% per year during 2011–2014, as areal density increase slowed and the 2011 Thailand floods damaged manufacturing facilities, and held at 11% per year during 2010–2017. The Federal Reserve Board has published a quality-adjusted price index for large-scale enterprise storage systems including three or more enterprise HDDs and associated controllers, racks and cables. Prices for these large-scale storage systems decreased at the rate of 30% per year during 2004–2009 and 22% per year during 2009–2014.

Manufacturers and sales

More than 200 companies have manufactured HDDs over time, but consolidations have concentrated production among just three manufacturers today: Western Digital, Seagate, and Toshiba. Production is mainly in the Pacific Rim. HDD unit shipments peaked at 651 million units in 2010 and have been declining since then, to 166 million units in 2022. Seagate, at 43% of units, had the largest market share.

Competition from SSDs

HDDs are being superseded by solid-state drives (SSDs) in markets where the higher speed (up to 7 gigabytes per second for M.2 (NGFF) NVMe drives and 2.5 gigabytes per second for PCIe expansion card drives), ruggedness, and lower power of SSDs are more important than price, since the bit cost of SSDs is four to nine times higher than that of HDDs. As of 2016, HDDs were reported to have a failure rate of 2–9% per year, while SSDs have fewer failures: 1–3% per year. However, SSDs have more uncorrectable data errors than HDDs. SSDs are available in larger capacities (up to 100 TB) than the largest HDD, as well as higher storage densities (100 TB and 30 TB SSDs are housed in 2.5-inch HDD cases with the same height as a 3.5-inch HDD), although such large SSDs are very expensive. A laboratory demonstration of a 1.33 Tb 3D NAND chip with 96 layers (NAND is commonly used in solid-state drives) reached 5.5 Tbit/in² as of 2019, while the maximum areal density for HDDs is 1.5 Tbit/in². The areal density of flash memory is doubling every two years, similar to Moore's law (40% per year) and faster than the 10–20% per year for HDDs. In 2025, the maximum capacity was 36 terabytes for an HDD and 100 terabytes for an SSD. HDDs were used in 70% of the desktop and notebook computers produced in 2016, and SSDs were used in 30%. In 2025, HDDs are only rarely found in laptops, and most desktops come with an SSD only, though some are still configured with both an SSD and an HDD, or rarely with an HDD only. The market for silicon-based flash memory (NAND) chips, used in SSDs and other applications, is growing faster than for HDDs. Worldwide NAND revenue grew 16% per year from $22 billion to $57 billion during 2011–2017, while production grew 45% per year from 19 exabytes to 175 exabytes.