
History of Storage Media (2017)

Storage is an essential part of every computer architecture, from the paper tape of hypothetical Turing machines to the registers and memory cells of von Neumann architectures. Both non-volatile storage media and volatile storage devices have undergone several technological evolutions over the last century, including some delightfully strange mechanisms. This article is a very selective history highlighting some of the earliest technologies that I find particularly interesting:

  1. Punch cards, one of the earliest forms of computer data storage, used for weaving patterns and census data!
  2. The Williams-Kilburn tube, the first electronic memory, stores bits as dots on the screen!
  3. Mercury Delay Line, a two-foot tube of hot mercury that stores bits as sound pulses!

For a more comprehensive resource, the Computer History Museum has a detailed timeline with lovely photos. Storage and memory have a long history. While CPU architectures are very diverse, they tend to use the same basic element—switches, in the form of transistors or vacuum tubes, arranged in different configurations. By contrast, the basic components of memory mechanisms vary widely. Each iteration essentially reinvents the wheel.

1. Punch cards

Punch cards aren’t really a strange storage medium, but they are the earliest and have some fascinating historical applications.

1.1. Jacquard Looms

Many of the early ancestors of computers were controlled by punched cards. The ancestor of these machines, in turn, was the Jacquard loom, which automated the production of intricately woven textiles. It was the first machine to use punched cards as an automated sequence of instructions.

First, some background on weaving patterns into fabric. Most fabrics are woven from two sets of yarns running perpendicular to each other. The first set, the warp threads, are stretched on the loom; the second set, the weft threads, are passed above or below the warp threads to form the fabric. This low-level operation is easily represented as binary information. Changing the order of ups and downs produces different kinds of fabric, from satin to twill to brocade.

In the following weave design example, the warp threads are white and the weft threads are blue. To create a twill weave, which is what most denim looks like, the loom pulls the first two warp threads up and the next two down, repeating this pairing across the warp. The weft yarn is then passed between the raised and lowered sets of threads, weaving the first row. The loom repeats this process, lifting alternating sets of warp threads and passing the weft through. After a few rows, the zigzag pattern characteristic of twill appears. While weaving can be very complex, this particular weave consists of just two simple operations repeated according to a very specific pattern. Computers are good at simple, repetitive work!

Before the advent of the Jacquard loom, this process of raising and lowering threads was done by hand on a drawloom. A warp can be about 2,000 threads wide, so weaving a detailed pattern into fabric requires thousands of decisions about raising or lowering each thread. Depending on the complexity of the design, an experienced weaving team could weave several rows per minute, taking a full day to produce one inch of fabric.
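The "two up, two down" operation maps naturally onto binary data. As a rough sketch (the function name and grid encoding here are invented for illustration), a 2/2 twill can be generated by shifting the base lift sequence by one warp thread per row, which produces the twill diagonal:

```python
def twill_rows(num_rows, num_warp, base=(1, 1, 0, 0)):
    """Return the lift pattern for each row of a 2/2 twill weave.

    1 = warp thread lifted (weft passes under it), 0 = lowered.
    """
    rows = []
    for r in range(num_rows):
        # Shift the base "two up, two down" sequence by one thread per row.
        rows.append([base[(c + r) % len(base)] for c in range(num_warp)])
    return rows

if __name__ == "__main__":
    for row in twill_rows(4, 8):
        # '#' marks a lifted warp thread; the diagonal is the twill pattern.
        print("".join("#" if lifted else "." for lifted in row))
```

Each printed row is one pass of the weft; the shifting 1s trace the zigzag diagonal described above.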
For context, a ball gown can require up to four yards of upholstery fabric, representing nearly four months of weaving.

Woven portrait of Jacquard on a loom.

Around 1803, Joseph-Marie Jacquard began prototyping a loom that could automatically produce upholstery cloth. Jacquard looms wove delicate silk patterns by reading designs from a series of replaceable punched cards. Cards in the control mechanism can be chained together, so complex patterns of arbitrary length can be encoded. Twill weaves, like the one above, are fairly easy, repetitive patterns, and they are a simple example of what an automatic loom can produce. The special strength of the Jacquard loom comes from its ability to independently control almost every warp thread. This is awesome! The Jacquard mechanism can produce images with incredible detail. The portrait of Joseph-Marie Jacquard is a stunning display of the loom's intricacy: a silk textile woven from tens of thousands of punched cards.

The ability to change the woven pattern simply by changing the cards was the conceptual precursor to programmable machines. Arbitrary designs could be woven into fabric rather than fixed, recurring patterns, and the same machine could be used for an infinite set of patterns. This mechanism inspired many other programmable automata. Charles Babbage's Analytical Engine borrowed this idea; in fact, Babbage himself is rumored to have had a woven portrait of Jacquard in his house! Player pianos use punched cards (or punched drums) to make music. That said, most of the early uses of punched cards were for basic, repetitive control of machines: simple encodings of music or weaves for special use cases. The control languages of these early automata were limited, with little expressive power. The full expressive power of punched cards was not realized until nearly 100 years later, when they became the tool for all large-scale information processing.

1.2. 1890 US Census

In the late 19th century, the US Census Bureau found itself collecting more information than it could process by hand. The 1880 census took more than seven years to process, and it was estimated that the 1890 census would take almost twice as long. Spending ten years processing census information means the information is out of date almost as soon as it is produced! In response to the growing population, the Census Bureau held a competition to find a more efficient way to count and process data. Herman Hollerith came up with the idea of representing census information on punched cards and produced a tabulating machine that could sort and summarize the data. His design won the competition and was used to tabulate the 1890 US Census. The tabulator made processing faster and provided more statistics than in previous years. After the success of the tabulating machine in the 1890 census, other countries began to adopt the technology. Soon, Hollerith's Tabulating Machine Company was producing machines for many other industries. It later merged into the Computing-Tabulating-Recording Company, which changed its name to International Business Machines Corporation, or IBM, ushering in a new era of computing.

Tabulating machine made by Hollerith.

Shortly after the 1890 census, nearly all information processing was done with punched cards. By the end of World War I, the military used punched cards to store medical and troop data, insurance companies stored actuarial tables on them, and railways used them to track operating expenses. Punched cards were also widely used in enterprise information processing: through the 1970s, punched cards appeared in scientific computing, in the human resources departments of large companies, and in every use case in between. While the once-ubiquitous punched card has been replaced by other data formats, its echoes are still with us today: the suggested line limit of 80 characters comes from the IBM 80-column punched card format.

2. Williams-Kilburn Tubes

In the punched card era, cards were mainly used for data, and control programs were entered through a complex system of plugboards and jumper wires. Reprogramming these machines was a multi-person, multi-day job! To speed up the process, researchers proposed storing programs and data in an easily rewritable storage mechanism.

Two women wiring part of ENIAC with a new program. U.S. Army photo.

While it is possible to build computers with mechanical memory, most mechanical memory systems are slow and annoyingly complex. The development of electronic memory for stored-program computing was the next important frontier in computer design. In the late 1940s, researchers at the University of Manchester developed the first electronic computer memory, largely by piecing together leftover radar parts from World War II. With some clever modifications to the cathode ray tube, Frederic Williams and Tom Kilburn built the first stored-program computer in their Manchester laboratory. During World War II, cathode ray tubes (CRTs) became standard in radar systems, driving research into more advanced CRT technologies.
Researchers at Bell Labs took advantage of some secondary effects of the CRT, using the CRT itself to store the location of past images. Williams and Kilburn took Bell Labs’ work a step further by adapting the CRT to digital memory.

2.1. Secondary Effects: Principle of Operation

The Williams-Kilburn tube turned spare parts from radar research and some side effects of the CRT into the first digital memory. A conventional CRT displays images by firing an electron beam at a phosphor screen. Electrostatic plates or electromagnetic coils deflect the beam to scan across the entire screen, and the beam turns on and off to draw an image.

Depending on the type of phosphor on the screen and the energy of the electron beam, the bright spot lasts anywhere from microseconds to several seconds. In normal operation, once written, the bright spot on the screen cannot be detected electronically. However, if the energy of the electron beam is above a certain threshold, the beam knocks some electrons out of the phosphor, an effect called secondary emission. The ejected electrons land not far from the bright spot, leaving behind a charged bullseye that persists for a while. So, to write data, a Williams-Kilburn tube uses a high-energy electron beam to charge spots on the phosphor screen. The memory bits are arranged in a grid on the surface of the tube, like pixels on a screen. To store data, the beam sweeps across the surface of the tube, turning on and off to represent the binary data. The charged areas on the screen are essentially tiny charged capacitors.

When a higher-energy electron beam hits the screen, the secondary emission of electrons induces a small voltage in any conductor nearby. If a thin sheet of metal is placed in front of the CRT screen, the electron beam knocking electrons out of the screen causes a voltage change in the metal. So, to read the data, the electron beam is again swept across the surface of the tube, but this time kept on at a lower energy. If a "1" was written at a spot, the positively charged dot on the screen is neutralized, and the discharging spot induces a current pulse in the pickup plate. If a "0" was written, no discharge occurs and the pickup plate sees no pulse. By recording the pattern of current in the pickup plate, the tube can report which bits it stores. The Williams-Kilburn tube is true random-access memory: the electron beam can be deflected to any point on the screen, so any bit can be accessed almost instantly.
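The write/read cycle can be sketched as a toy model (the class and method names here are invented for illustration; a real tube stored charge on phosphor, not values in a list). Note the two quirks the text describes: reads are destructive, so every read immediately rewrites the bit, and charge leaks away, so a periodic refresh rewrites every spot:

```python
class WilliamsTube:
    """Toy model of a Williams-Kilburn tube: a grid of charge spots."""

    def __init__(self, rows, cols):
        self.spots = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # High-energy beam charges the spot for a 1, leaves it neutral for 0.
        self.spots[row][col] = bit

    def read(self, row, col):
        # A charged spot discharges into the pickup plate: that pulse is the 1.
        bit = self.spots[row][col]
        self.spots[row][col] = 0   # the read destroyed the stored charge...
        self.write(row, col, bit)  # ...so the bit is rewritten immediately
        return bit

    def refresh(self):
        # Periodically re-read (and thus rewrite) every spot before it fades.
        for r in range(len(self.spots)):
            for c in range(len(self.spots[r])):
                self.read(r, c)
```

A 32x32 grid of such spots is exactly the 1024-bit capacity of the tubes used in the Manchester Baby, described below.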

Data stored on a Williams-Kilburn tube, mirrored to a display CRT. Image courtesy of the Computer History Museum.

The charge at each spot leaks away over time, so a refresh process periodically rewrites the data. Modern DRAM has a similar memory refresh process! Since reads are destructive, each read is also followed by a write to restore the data. The data in a Williams-Kilburn tube can be mirrored to a conventional CRT picture tube for inspection, which is very useful for debugging. Kilburn also reported that watching the tubes flicker as these early machines computed was mesmerizing.

2.2. Manchester Baby

Once the Manchester team had a working memory tube that could store 1024 bits, they wanted to test the system in a proof-of-concept computer. They had a tube that reliably stored manually entered data written at slow speeds, but they wanted to make sure the whole system would still work at electronic speeds under heavy write loads. They built the Small-Scale Experimental Machine (aka the Manchester Baby) as a test bed. It would be the world's first stored-program computer! The Baby had four CRTs:

  • one to store the RAM, 32 32-bit words,
  • one as an accumulator register,
  • one to store the program counter and current instruction,
  • one to display the output, or the contents of any other tube.
The Williams-Kilburn tube is an unusually introspectable data storage device. Programs were input one 32-bit word at a time, where each word was either an instruction or data to be manipulated. Each word was entered by manually setting a bank of 32 switches on or off! The first program run on the Manchester Baby computed the factors of a large number. Turing later wrote a program to do long division, because the hardware could only subtract and negate.

2.3. Later History

The Manchester Baby's parts were reused for the Manchester Mark 1, a larger, more powerful stored-program computer. The Mark 1 evolved into the Ferranti Mark 1, the first commercial general-purpose computer. Williams-Kilburn tubes were used as RAM and registers in many other early computers. The MANIAC computer at Los Alamos, built for H-bomb design calculations, used 40 Williams-Kilburn tubes to store 1024 40-bit words. Although the tubes played an important role in early computer history, they were difficult to maintain and operate. They often had to be tuned by hand and were eventually phased out in favor of core memory.

3. Mercury Delay Line

In addition to the Williams-Kilburn tube, radar research provided another memory mechanism for early computers: delay line memory. Defensive radar systems of the 1940s used primitive delay lines to remember and filter out stationary objects on the ground, such as buildings and utility poles. That way, the radar system would only show new, moving objects. Delay lines are a form of sequential-access memory, in which data can only be read in a specific order. A vertical drainpipe can serve as a simple delay line: you drop a ball with data written on it in at the top, let it fall down the pipe, read it at the bottom, and send it back up to the top. Eventually, you can have many of these balls of data cycling through the pipe. To read a specific bit, you let the balls drop and cycle until you reach the one you want. Sequential access!
The most common form of delay line in early computers was the mercury delay line: essentially a two-foot-long mercury-filled tube with a speaker on one end and a microphone on the other (in practice, both are the same kind of piezoelectric crystal). To write a bit, the speaker sends a pulse down the tube. The pulse travels the length of the tube in about half a millisecond and is read by the microphone at the far end. To retain the bit, the speaker retransmits the bit just read back through the tube. As with the drainpipe example, to read a specific bit, the circuit must wait for the pulse it wants to cycle through the system.

UNIVAC's mercury delay line.

Mercury was chosen because its acoustic impedance at 40 °C (104 °F) closely matches that of the piezoelectric crystals used as transducers, which reduces echoes that could interfere with the data signal. At this temperature, the speed of sound in mercury is about four times higher than in air, so it takes about 420 microseconds for a pulse to traverse a two-foot tube. Since the computer's clock needs to be precisely aligned with the memory cycle, keeping the tube at exactly 40 °C is critical. So a mercury delay line is a giant tube of liquid mercury, held in a 40 °C oven, that loops memory around in the form of sound waves!

The first set of mercury delay lines in EDSAC, with Maurice Wilkes for scale.

While the Manchester Baby was the first computer to store a program, it was just a proof of concept. The EDVAC, built for the U.S. Army, was the first stored-program computer in practical use. John von Neumann's report on the EDVAC inspired the design of many other stored-program computers. Despite their clunkiness, mercury delay lines were used in many other early computers. The EDSAC, inspired by the von Neumann report, was the first computer used for computational biology, and it hosted one of the first video games (a version of tic-tac-toe on a CRT monitor). The UNIVAC I, the first commercial computer in the United States, also used mercury delay lines.
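The recirculating behavior, and the transit-time figure quoted above, can be sketched in a toy model (the class name is invented for illustration, and the constants are rough approximations of the values in the text):

```python
from collections import deque

SPEED_OF_SOUND_IN_MERCURY = 1450  # m/s at ~40 °C (approximate)
TUBE_LENGTH = 0.61                # meters, about two feet

# Transit time for one pulse down the tube, in microseconds (≈ 420 µs).
transit_us = TUBE_LENGTH / SPEED_OF_SOUND_IN_MERCURY * 1e6


class DelayLine:
    """Toy mercury delay line: bits circulate end-to-end as pulses."""

    def __init__(self, bits):
        self.line = deque(bits)

    def step(self):
        # One pulse reaches the microphone and is retransmitted by the speaker.
        bit = self.line.popleft()
        self.line.append(bit)
        return bit

    def read(self, index):
        # Sequential access: wait for pulse `index` to circulate around
        # to the read head before you can see it.
        for _ in range(index):
            self.step()
        return self.line[0]
```

Reading bit N costs N step times, which is why delay lines are sequential-access: unlike the Williams-Kilburn tube, you cannot jump straight to an arbitrary bit.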
In 1951, the US Census Bureau installed the first UNIVAC, replacing its IBM punched card machines. It was finally decommissioned in 1963.

And more!

This is just the story of the first few memory mechanisms to reach the market. In the decades since the delay line, other unique mechanisms have been developed. Before long, core memories, ferrite doughnuts woven into mats of copper wire, became ubiquitous. A read-only version of core memory flew on the Apollo guidance computer that brought astronauts to the moon. Until a few years ago, nearly every computer contained several spinning disks coated with a thin layer of iron, carefully magnetized by a needle hovering on a cushion of air. For years, we passed data and software around on plastic discs covered in tiny pits, read by bouncing laser light off the surface; more advanced discs store up to 25 GB of data and use blue lasers instead.

Computers are incredible Rube Goldberg machines: every layer is fractally complex, carefully hidden behind layers of abstraction. Examining the history of the technology can give us insight into why interfaces look the way they do now, and why we use the terminology we do. The early history of computing produced a fascinating and complex array of storage devices. Each system is a well-designed machine that has left its mark on the computers of the future.

Part of this article originally appeared on Kiran's blog.


